At its simplest, a data center is a physical facility that organizations use to house their critical applications and data. A data center's design is based on a network of computing and storage resources that enable the delivery of shared applications and data. The key components of a data center design include routers, switches, firewalls, storage systems, servers, and application-delivery controllers.
Modern data centers are very different than they were just a short time ago. Infrastructure has shifted from traditional on-premises physical servers to virtual networks that support applications and workloads across pools of physical infrastructure and into a multicloud environment.
In this era, data exists and is connected across multiple data centers, the edge, and public and private clouds. The data center must be able to communicate across these multiple sites, both on-premises and in the cloud. Even the public cloud is a collection of data centers. When applications are hosted in the cloud, they are using data center resources from the cloud provider.
In the world of enterprise IT, data centers are designed to support business applications and activities ranging from email, file sharing, and productivity applications to CRM, ERP and databases, big data and machine learning, and virtual desktops and collaboration services.
Data center design includes routers, switches, firewalls, storage systems, servers, and application delivery controllers. Because these components store and manage business-critical data and applications, data center security is critical in data center design. Together, they provide:
Network infrastructure. This connects servers (physical and virtualized), data center services, storage, and external connectivity to end-user locations.
Storage infrastructure. Data is the fuel of the modern data center. Storage systems are used to hold this valuable commodity.
Computing resources. Applications are the engines of a data center. These servers provide the processing, memory, local storage, and network connectivity that drive applications.
Data center services are typically deployed to protect the performance and integrity of the core data center components.
Network security appliances. These include firewall and intrusion protection to safeguard the data center.
Application delivery assurance. To maintain application performance, these mechanisms provide application resiliency and availability via automatic failover and load balancing.
Data center components require significant infrastructure to support the center's hardware and software. These include power subsystems, uninterruptible power supplies (UPS), ventilation, cooling systems, fire suppression, backup generators, and connections to external networks.
The most widely adopted standard for data center design and data center infrastructure is ANSI/TIA-942. It includes standards for ANSI/TIA-942-ready certification, which ensures compliance with one of four categories of data center tiers rated for levels of redundancy and fault tolerance.
Tier 1: Basic site infrastructure. A Tier 1 data center offers limited protection against physical events. It has single-capacity components and a single, nonredundant distribution path.
Tier 2: Redundant-capacity component site infrastructure. This data center offers improved protection against physical events. It has redundant-capacity components and a single, nonredundant distribution path.
Tier 3: Concurrently maintainable site infrastructure. This data center protects against virtually all physical events, providing redundant-capacity components and multiple independent distribution paths. Each component can be removed or replaced without disrupting services to end users.
Tier 4: Fault-tolerant site infrastructure. This data center provides the highest levels of fault tolerance and redundancy. Redundant-capacity components and multiple independent distribution paths enable concurrent maintainability, and the facility can withstand at least one fault anywhere in the installation without causing downtime.
Many types of data centers and service models are available. Their classification depends on whether they are owned by one or many organizations, how they fit (if they fit) into the topology of other data centers, what technologies they use for computing and storage, and even their energy efficiency. There are four main types of data centers:
Enterprise data centers. These are built, owned, and operated by companies and are optimized for their end users. Most often they are housed on the corporate campus.
Managed services data centers. These data centers are managed by a third party (or a managed services provider) on behalf of a company. The company leases the equipment and infrastructure instead of buying it.
Colocation data centers. In colocation ("colo") data centers, a company rents space within a data center owned by others and located off company premises. The colocation data center hosts the infrastructure: building, cooling, bandwidth, security, etc., while the company provides and manages the components, including servers, storage, and firewalls.
Cloud data centers. In this off-premises form of data center, data and applications are hosted by a cloud services provider such as Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, or another public cloud provider.
Computing infrastructure has experienced three macro waves of evolution over the last 65 years, moving from proprietary mainframes to on-premises x86 servers and virtualization, and on to today's cloud and cloud-native era.
This evolution has given rise to distributed computing, where data and applications are distributed among disparate systems, connected and integrated by network services and interoperability standards so that they function as a single environment. As a result, the term "data center" is now often used to refer to the department responsible for these systems, wherever they are located.
Organizations can choose to build and maintain their own hybrid cloud data centers, lease space within colocation facilities (colos), consume shared compute and storage services, or use public cloud-based services. The net effect is that applications today no longer reside in just one place. They operate in multiple public and private clouds, managed offerings, and traditional environments. In this multicloud era, the data center has become vast and complex, geared to drive the ultimate user experience.
Passwords, birth certificates, national insurance numbers and passports – as well as the various other means of authentication that we have relied upon for the past century or more to prove who we are to others – can no longer be trusted in today's digital age.
That’s because the mishandling of these types of personally identifiable information (PII) documents from birth, along with a string of major digital data breaches that have taken place in recent years, means that all of these items have potentially been compromised. Ultimately, they can no longer be considered ‘secret’ or protected.
In twenty years’ time our children and our grandchildren will likely find it completely bizarre when we tell them that we all had to remember countless complex passwords in order to access our daily work, financial and personal apps on our computers and phones.
That’s because passwords provide a truly awful user experience, not to mention the fact that they are terrible for security. Millions of passwords have been compromised or stolen in recent years following a string of high-profile data breaches, from the much-publicised Equifax hack of 2017 through to the recent MyFitnessPal hack in late March this year.
Hackers stole account information for over 150 million users from MyFitnessPal, making it the fourth-biggest reported data breach of all time, behind the two massive Yahoo breaches and the MySpace breach, all of which came to light in 2016.
The post-password zero-login age
The truth is clear. We are moving into a post-password zero-login age, with new biometric technologies and other PII innovations helping to secure a fast, easy, frictionless personalised experience for every single application we need to access on a daily basis.
Zero login is clearly an idea whose time has come, yet it requires a complete rethink and rebuild of our identity system. That’s because we have to develop completely new ways of identifying ourselves and others, that no longer rely on passwords or formerly ‘trusted’ documents such as those mentioned above.
MySpace, Yahoo, Equifax, MyFitnessPal – these massive data breaches all came to light within the last three years, yet they are only the tip of the iceberg. How many others went unreported? Or even unnoticed?
Biometric methods such as facial recognition and fingerprint scanners are becoming much more prevalent – particularly with the release of popular consumer devices such as the iPhone X last year – and these mark the beginnings of zero login age innovations.
Zero login essentially refers to the idea that we will never again have to recall complex passwords or provide documentation to identify ourselves. Our devices will be smart and secure enough to instantly recognise us by our features, our voice, and our movements and other unique identifiable traits.
So, beyond face and fingerprint recognition, we are already seeing innovation from companies such as Amazon, with the online retail giant trialling new ways to authenticate its customers based on typing speed, how hard they tap their phone’s screen and more.
Using these types of zero login technologies, the device is able to identify a user’s completely unique and intricate patterns of behaviour that no hacker could possibly recreate or ‘steal’.
Your device might also identify you from your other devices that are connected to it – your car, your Fitbit, your headphones and so on. Using all of this information, the user can be correctly authenticated by their biometrics in conjunction with their unique behavioural data.
Finally, with any of these authentication techniques, the correct way to operate them is with software running locally on your phone or other device, and with only a 'risk score' sent to the cloud, where a decision can be made on the likelihood of any nefarious behaviour by a potential cybercriminal.
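A minimal sketch of that pattern might look like the following; the behavioural features, tolerances, and endpoint URL are invented for illustration and are not taken from any real zero-login product:

```python
# Illustrative only: feature names, weights, and the endpoint are hypothetical.
import json
import urllib.request

def local_risk_score(sample: dict, baseline: dict) -> float:
    """Score how far current behaviour deviates from the user's baseline (0 = typical, 1 = highly unusual)."""
    deviations = []
    for feature, expected in baseline.items():
        observed = sample.get(feature, expected["mean"])
        # Normalised absolute deviation, capped at 1.0 per feature.
        deviations.append(min(abs(observed - expected["mean"]) / expected["tolerance"], 1.0))
    return sum(deviations) / len(deviations)

baseline = {
    "typing_interval_ms": {"mean": 180.0, "tolerance": 60.0},
    "tap_pressure":       {"mean": 0.42,  "tolerance": 0.15},
    "swipe_speed_px_s":   {"mean": 900.0, "tolerance": 300.0},
}
sample = {"typing_interval_ms": 210.0, "tap_pressure": 0.50, "swipe_speed_px_s": 1000.0}

score = local_risk_score(sample, baseline)  # raw behavioural data never leaves the device
payload = json.dumps({"user_id": "u123", "risk_score": round(score, 3)}).encode()
req = urllib.request.Request("https://auth.example.com/risk", data=payload,
                             headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # only the score is transmitted, not the biometrics themselves
```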
With this local-scoring approach, no information about your locations, behaviours and biometrics is sent across the internet; transmitting that raw data would completely defeat the point of zero login security. We have to take care that the hype and excitement over our zero login future doesn't preclude the user's right to privacy, security and consent.
“The Internet of Things is not a concept; it is a network, the true technology-enabled Network of all networks.” – Edewede Oriwoh
Increasing demand for high-performance computing devices built on more advanced technologies has created a need for interconnected, energy-efficient hardware. When we think about a network of such devices, much like the wider system of internet-connected things, low-power design is needed both at the level of individual sensors and across the sensor networks that link them.
Published research has established human body communication as a dynamic mode of communication that relies on low-power wireless data transmission, and the Internet of Things provides the way to connect all of these sensors through a network, turning ordinary sensors into smart devices. With that scope, smart networks and devices support a wide range of applications, including smart health monitoring. In this blog, we will look at how smart health monitoring can be carried out as part of the Internet of Things.
A network of IoT applications through common servers
IoT will have a far-reaching impact on our personal lives, our environment, and our businesses. Many smart devices can be used directly on, or close to, the human body, enabling large-scale measurement of bodily activity, biopotentials, biomarkers, and many other parameters. The data these smart devices generate can be used in personal assistant devices, healthcare devices, and security and defense applications.
In the recently released movie "War", filmmakers show small chips being used as tracking sensors, in defense applications, for body treatment, and more. These chips are classic examples of IoT devices. Let's look at more sensor-network-connected devices, their applications, and their benefits to the human body.
Consequences of the Internet of Things (IoT) to the Human Body
When the Internet of Things connects to human bodies, the result is often termed the Internet of Bodies (IoB). IoB links the human body to a wider network through devices that are ingested, implanted, or attached to the body in some way. Once connected, data can be conveyed from the body to a device and monitored and controlled remotely.
You can also learn about the role of the Internet of Things in Healthcare through this blog.
IoT devices connect to the human body in three ways:
First, externally: wearables such as smartwatches and Fitbits can monitor our health from outside the body.
Second, internally: devices such as pacemakers, digital pills, and cochlear implants are placed inside the body to monitor or regulate various facets of human health.
Third, embedded: technology and the human body are combined so that they maintain a real-time connection with a remotely operated machine.
Many people in old age live without regular medical assistance and, as a result, suffer from chronic or recurring diseases. With proper, timely assistance, they could enjoy a healthier and more balanced life in their later years. IoB can be a highly practical and efficient system for this purpose, and its capabilities and functionality are already making it an essential part of everyday life.
IoT devices, in effect a combination of healthcare sensors, can be attached to the body according to each patient's requirements. They help supervise the body's functions and the environment around the patient.
Examples of IoT devices used for human heath-monitoring
Many smart devices are already used as advanced applications for monitoring human health, particularly chronic and recurring conditions. A few notable examples include:
Various IoT devices for human health
Pacemaker: A small device placed in the patient's chest or abdomen that uses electrical pulses to help control abnormal heart rhythms.
Smart pills: Pills embedded with electronic sensors and chips. When a patient swallows them, they collect data from the patient's organs and send it to a remote, internet-connected device. For example, gastroparesis, a poorly understood stomach condition, can be managed with the help of smart pills. The first digital chemotherapy pill combines sensors and networking to store and share data with healthcare providers about heart rate, drug dosage, and timing.
Smart contact lenses: Drawing on information from the eye and eye fluid, these lenses monitor health diagnostics such as glucose levels, helping diabetic patients track the glucose in their body without repeated pin-pricks throughout the day.
RFID microchips: These chips help identify and locate people and items, which is why they are also used in military and defense applications. From a business point of view, implanted chips can let workers access a building without a key, make payments with a wave of the hand, and more.
Hearables: New-era hearing aids that have made a huge difference to how people with hearing loss interact with others. They filter, balance, and augment real-world sounds as required. Doppler Labs is a well-known example of a hearables company.
Challenges Faced by the Internet of Bodies
Typical uses for IoB include early detection of disease, disease prevention and management, elderly assistance, and post-surgical care. In the healthcare system, IoT is mainly used to gain rapid visibility into health conditions. Because IoT is a large-scale system of interconnected devices, many health organizations regularly exchange data and information with one another to address problems and improve performance.
This data is very important to companies that want to provide better health services to their customers. With patients' data at stake, the major challenges and issues are as follows.
One of the most pressing issues with the Internet of Bodies (IoB) is the security of the devices and of the information they store and communicate. The security challenges facing IoB technologies are similar to those that trouble the Internet of Things (IoT) generally, but when a device is implanted in the human body a breach can become a matter of life and death, so privacy and security matter all the more.
Data privacy is another paramount challenge: who is trying to access confidential data, and why? Consider a device that observes health diagnostics; it can also trace unhealthy or unexplained behaviors. Does a health insurance company still pay out when reports generated by an IoB device document a patient's unhealthy behavior? Another IoB device might be used to restore a patient's hearing, but what happens if it also records audio from the patient's environment? The data is no longer private.
Regulatory and legal conflicts over data also need to be resolved, and proper policies must be introduced before these technologies see conventional, widespread application.
Multiple technologies, such as artificial intelligence, machine learning, and deep learning, are connected through networks, and here the Internet of Things plays a significant role in commercial applications as well.
Undoubtedly, IoB has changed the way healthcare monitoring is delivered; these technologies produce a large collective impact through many small improvements. IoT can help healthcare in many ways, such as reducing emergency-room waiting times, improving drug management, and tracking patients and staff.
In this blog, we have seen multiple examples of the benefits IoT brings, and the challenges it faces, when introduced into the human body and human healthcare. With data about patients and their real-time conditions, there is hope of treating and preventing disease promptly and consistently. IoT is steadily transforming traditional approaches to treatment and making patients' problems easier to manage. For more blogs on analytics and new technologies, read Analytics Steps.
First proposed by The Green Grid in 2007 and standardized internationally in 2016, power usage effectiveness (PUE) has been the de facto metric for assessing data center energy efficiency. It is straightforward to calculate: the ratio of total facility power to the power consumed by the IT load. It also provides a single statistic that non-technical users can understand. Leading facilities now achieve a PUE of less than 1.2 and are targeting carbon neutrality, which leads many to question whether PUE is still as relevant as it once was. Perhaps a new metric is needed, such as Total-power Usage Effectiveness (TUE).
With Great PUE Comes Great Responsibility
PUE would be 1 in an ideal world; every kilowatt of energy entering the data center would be used solely to power the IT hardware. While achieving a perfect 1 is unlikely, being able to evaluate energy use fast and readily has sped up progress.
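The calculation itself is trivial. A quick sketch, using invented numbers rather than figures from any real facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical facility: 1,200 kW total draw, 1,000 kW of it reaching the IT load.
print(round(pue(1200, 1000), 2))  # 1.2, i.e. 200 kW of overhead for cooling, power conversion, lighting
```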
According to Uptime Institute, the worldwide average PUE has declined from 2.5 in 2007 to 1.5 today. Uptime also stated that the statistic has become ‘rusty’ after such a long period.
The Green Grid standard was issued in 2016 under ISO/IEC 30134-2:2016. PUE has been one of the most popular, easily understood, and hence extensively used metrics. Its simplicity has been crucial, and consumers use it as a gauge in various ways to determine if they are being charged cost-effectively for the energy their IT requires and how sustainable they are.
In a complicated sector, buyers have recognized PUE as a clear measure of energy efficiency, and it has become intimately tied to environmental credentials.
Engineering firm Cundall, whose data center clients include Facebook, says PUE may be becoming a victim of its own success after more than a decade.
It has proven to be a very useful tool for the industry, and it has driven large gains in data center energy efficiency, which is the role it was originally meant for.
People can easily wrap their heads around it, but there are a lot of flaws beneath the surface.
PUE begins to lose its significance as we approach the boundaries of what traditional cooling can do and data center owners and operators seek carbon neutrality or even negative carbon emissions.
The EU-funded Boden Type data center in Sweden has a PUE of 1.018, while Huawei claims its modular data center product has a PUE of 1.11. According to Google, the average across its major facilities globally is 1.1, and individual sites can be as low as 1.07.
Improvements in PUE become more difficult to measure as facilities grow more efficient, and gains become more incremental.
PUE is down to 1.0-whatever today; it needs more precise measurement methods. We’re focused on achieving these sustainability targets and net-zero, and we’re doing it with a blunt instrument: a statistic that doesn’t convey the impact of what’s going on.
PUE does not capture what happens at the rack level, which is one example of its bluntness. Power delivered to the rack as "IT load" may actually run rack-level UPS units or onboard fans: energy that could and should be on the balance sheet but isn't.
The server and PSU cooling fans consume up to 10% of the electricity delivered to the IT equipment in a traditional air-cooled rack. In what is commonly known as "creative accounting," some organizations improve their reported PUE this way.
Not all of the power delivered to the rack is used for computing, and that is something a lot of people overlook. Many operators install UPS at the rack level, transferring the UPS load out of the infrastructure power and into the IT power. You're just moving it from one side of the line to the other.
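A quick worked example, with invented numbers, shows why this matters: the same facility and the same energy bill can yield two quite different reported PUEs depending on which side of the line the rack-level overhead is counted:

```python
# Hypothetical numbers to illustrate the "creative accounting" problem:
# the same facility, the same energy bill, two different reported PUEs.
total_facility_kw = 1500.0
rack_overhead_kw = 100.0      # rack-level UPS losses and server fans
compute_kw = 1000.0           # power reaching the actual IT silicon

# Case A: rack overhead counted as facility overhead (IT load = 1000 kW)
pue_a = total_facility_kw / compute_kw                       # 1.50

# Case B: rack overhead re-labelled as "IT load" (IT load = 1100 kW)
pue_b = total_facility_kw / (compute_kw + rack_overhead_kw)  # ~1.36

print(round(pue_a, 2), round(pue_b, 2))  # energy use is identical; only the reporting changed
```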
PUE Became A Marketing Metric
PUE’s ease of use and widespread adoption in the business has led to it being utilized as a competitive advantage weapon as well as an internal improvement metric.
Over time, PUE has been taken up as a marketing technique. Operators are utilizing it to attract customers, and customers are complying by issuing RFPs that specify low maximum PUE values.
Many operators will enthusiastically brag about their new facilities’ PUE and utilize it to entice environmentally aware customers.
PUE was never really intended for that. Operators should measure the PUE, apply modifications, and then measure it again to assess the effectiveness.
It’s taken on a significance and a profile within the industry that it was never meant to have, and for which it’s not suited.
Liquid Cooling And PUE
As air cooling becomes less effective, liquid cooling is becoming a more popular and viable option. However, as adoption grows, PUE’s usefulness as a metric diminishes.
Even the most energy-efficient fan motor will consume energy. With what we’ve been doing with air cooling, we’ve reached the limit of what’s possible within physics.
In a recent opinion piece, Uptime Institute warned that techniques like direct liquid cooling (DLC) drastically alter the profile of data center energy consumption “seriously undermining” PUE as a benchmarking tool and “spelling its obsolescence” as an efficiency metric.
While DLC has been a well-established but specialized technology for decades, some in the data center industry believe it is about to become much more widely employed. Beyond simply lowering the calculated PUE toward its absolute limit, DLC reshapes the composition of energy usage across both the facility and the IT infrastructure.
Most DLC implementations achieve a partial PUE of 1.02 to 1.03 by lowering facility power. However, they also cut energy demand inside the rack by eliminating server fans, which reduces wasted energy but actually worsens the reported PUE, because the IT load in the denominator shrinks while facility overheads remain.
PUE may be nearing the end of its usefulness in its current form. A probable lack of a relevant PUE statistic would signify a break in the historical pattern. Furthermore, competitive benchmarking would be obliterated. All DLC data centers will be extremely efficient, with minor energy variances.
Tracking IT use and a more granular method of measuring workload power consumption could quantify efficiency gains far better than future iterations of PUE.
Total-Power Usage Effectiveness (TUE)
Total-Power Usage Effectiveness (TUE) is a more useful metric for assessing a data center's overall energy performance. TUE is calculated as IT Power Usage Effectiveness (ITUE) multiplied by PUE. ITUE accounts for the impact of rack-level ancillary components such as server cooling fans, power supply units, and voltage regulators; multiplying ITUE by PUE yields TUE, a value that covers the entire data center infrastructure.
ITUE is, in effect, a rack-level PUE that addresses what is going on in a way PUE alone does not. It tells you how much energy is getting to the rack and how much of that energy goes to ancillary electrical components rather than to compute.
It gives you a much better understanding of what is happening at that level: whether server fans are spinning, dielectric pumps are running, or the cooling path is entirely passive.
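A short sketch, again with invented numbers, shows how the two ratios combine and what PUE alone hides:

```python
# Hypothetical numbers, for illustration only.
total_facility_kw = 1500.0   # everything the site draws
it_equipment_kw   = 1200.0   # measured at the IT equipment input
compute_kw        = 1000.0   # what actually reaches CPUs, memory, storage
                             # (the remaining 200 kW feeds fans, PSU losses, voltage regulators)

pue  = total_facility_kw / it_equipment_kw   # 1.25, which looks excellent on its own
itue = it_equipment_kw / compute_kw          # 1.20, the rack-level overhead PUE ignores
tue  = itue * pue                            # 1.50, total facility power per unit of useful compute power

print(round(pue, 2), round(itue, 2), round(tue, 2))
```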
TUE and ITUE aren't new. Dr. Michael Patterson of Intel Corporation and the Energy Efficiency HPC Working Group (EEHPC WG) both proposed the two alternative metrics around a decade ago.
Total-Power Usage Effectiveness (TUE) uptake has been slower than PUE’s. The equations aren’t significantly more difficult to solve, but TUE necessitates a deeper grasp of the IT gear in use, which many colo operators will lack.
Cundall is starting to talk to its clients about Total-Power Usage Effectiveness (TUE) and moving away from PUE, partially to ensure that all of its projects are working toward net-zero carbon, but also to help customers accomplish their own sustainability goals.
In a market where most facilities operate at PUEs in the low 1.1s, and as more corporations look to deploy liquid cooling, which goes beyond what PUE can capture, more organizations may choose Total-Power Usage Effectiveness (TUE) as a way to better demonstrate their sustainability credentials.
Total-Power Usage Effectiveness Is Just A Step In The Journey
Even though Total-Power Usage Effectiveness (TUE) can provide a more detailed perspective of energy efficiency, it is still only one component of the whole sustainability package. There is no single indicator that can convey the full picture of a data center’s influence on sustainability.
Many major operators now use energy credits and power purchase agreements to ensure that their activities are powered entirely by renewable energy, or at the very least matched with energy that contributes to local networks.
Further, Google and other companies such as Aligned and T5 are releasing tools that can offer a more detailed breakdown of energy use by facility, providing a more realistic picture of renewable energy use hour by hour.
To demonstrate how little water their facilities use, companies are increasingly promoting their Water Usage Effectiveness (WUE) — a ratio that divides annual site water usage in liters by IT equipment energy usage in kilowatt-hours (kWh).
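As a quick illustration with made-up annual figures:

```python
# Hypothetical annual figures, for illustration only.
site_water_liters = 60_000_000      # 60 million liters of water used in a year
it_energy_kwh = 50_000_000          # 50 GWh of IT equipment energy over the same year

wue = site_water_liters / it_energy_kwh
print(f"WUE = {wue:.2f} L/kWh")     # 1.20 liters of water per kilowatt-hour of IT energy
```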
The Swiss Datacenter Efficiency Association (SDEA) developed the SDEA Label in 2020, to rate a facility’s efficiency and climate impact from start to finish, taking into account not only PUE but also the site’s total energy recycling capability and infrastructure utilization.
Despite competition from fresh upstarts, PUE appears to be the top dog in terms of sustainability measures for the time being.
We operate at industry-leading efficiency levels in Harlow and use PUE to help us monitor, measure, and optimize our clients' energy footprints.
Even with ITUE and TUE available, PUE is still the widely accepted metric that new and legacy data centers can use to evaluate their energy requirements, optimize their IT environments, and decrease waste.
While one could argue that the industry has been overly focused on PUE, other measures will necessitate more data, more parameters, and more discussion between operators and customers to set new standards and encourage acceptance. Another factor to consider for our sector is that many legacy operators may not have the financial resources to go beyond reducing PUE levels. PUE remains the best solution until that level of granularity is available.
Intelligent Power Monitoring And Beyond
With AKCPro Server, the data center operator may reliably monitor, visualize, predict, and control the facility using the system model and real-time data integration with metering devices, data gathering, and archiving systems.
Energy usage monitoring gives real-time information on power consumption, carbon emissions, and temperature and humidity settings. This data is critical for the operator to discover chances for immediate power consumption reductions, energy-saving measures, and eventually lower OPEX expenses.
Install a wired or wireless Rack+ system in your data center to monitor environmental and electrical conditions at the cabinet level. Thanks to AKCP's system architecture and its wide range of sensors, this can scale up to a complete Smart Data Center monitoring system. AKCPro Server can also be used to centrally monitor mainline power, CRAC systems, and backup power.
Make adjustments to your data center environment and instantly see the effect they have on your PUE numbers. Run your data center in its optimum condition for cost savings and server health.
Digital transformation is changing IT, and the traditional ways of protecting and managing organizations’ data are no longer adequate.
With the adoption of cloud and containers, organizations are facing new challenges. A more distributed architecture increases complexity and data sprawl. Cyber attackers take advantage of organizations’ increased attack surface and seek out vulnerabilities to exploit and valuable data to steal. Lastly, with more data being created and stored outside the data center, organizations must comply with government regulations they might not have had to consider previously.
To overcome these hurdles, the next generation of data management will evolve into something that’s more secure, intelligent, and open than traditional data management.
For many organizations, digital transformation will lead to increased complexity as organizations’ IT infrastructure extends across the core data center, multiple public and private clouds and the edge. As data sets grow, organizations will have an increasingly difficult time keeping track of their data, managing the sea of products that make up their data infrastructure environment, deriving value from their data, and optimizing their storage. IDC research has consistently shown that enterprise storage systems continue to grow exponentially, expanding at a five-year CAGR of 30.9%. Without proper data management, organizations are at risk of their environments reaching an unsustainable level of sprawl.
Additionally, more distributed infrastructure exposes more vulnerability points for cyber attackers to exploit. Ransomware in the past was easily overcome with good backup and recovery because it simply sought to lock out data and make it unavailable. However, attacks have become more sophisticated in nature. The threat has evolved to not only try to delete backups, but to steal the data and threaten to release it.
Data exfiltration attacks take place before traditional data management touches the data — by the time backup systems make an immutable copy and store it in an untouchable environment, the criminals already have it. In the past, this would be regarded as a problem for security teams to deal with, but security operations and data operations need to work in tandem to properly defend against this threat.
As data management and protection tools have traditionally been closed systems, this makes it harder for security and IT to work together. This also means other adjacent workloads that can make use of the data such as analytics and AI have trouble accessing it. The closed-off nature of traditional data management leads to inefficiency and complexity that compounds further with distributed, continually expanding infrastructure.
The next generation of data management will break free of its traditional trappings to take on the data challenges of a more distributed, cloud-centric world.
Next-generation data management tools will need to have a centralized console to handle multiple use cases for multiple storage environments. A consolidated view of an organization’s entire data estate will help administrators make sense of their sprawling infrastructure. Additionally, being able to execute multiple IT functions such as backup, disaster recovery, migration, compliance and archiving from the same interface reduces complexity.
Traditionally, data protection focuses on pushing important assets from the core into the cloud, but that is poised to flip soon. IDC predicts that by 2025, 55% of organizations will have a cloud-centric approach to data protection, where assets in the cloud are safeguarded elsewhere. The next generation of data management will cater to this growing notion that most organizations’ critical data and applications will be born in the cloud.
The five pillars of the NIST Cyber Security Framework are Identify, Protect, Respond, Recover and Detect, the last of which is an area traditional data protection misses completely. The next generation of data management will address this by integrating security features and implementing measures at earlier stages of a cyberattack – not just after an attack has already occurred.
Current data management tools already use AI to identify what backup copies are safe to recover from. Next-generation data management will extend the use of AI further to perform surface area analyses and threat vulnerability assessments and make recommendations for boosting an organization’s security posture.
To carry this out, next-generation data management tools will need to integrate with technology partners and form a software ecosystem, allowing multiple products to programmatically communicate and share functionality. Data from data management tools will be able to feed machine learning and workflow applications, and organizations will be able to discover and implement these functions via an online marketplace.
IDC believes the new standard of successful data management is centralized, secure and open to integration with adjacent technologies. As organizations undergo digital transformation, data management will transform along with it to address the challenges that come with a more cloud-based, distributed digital estate.
What Is EMC? What Is EMC Testing? Standards, Compliance, Certifications, & More
Every electronic device generates electromagnetic radiation. It’s a byproduct of the electric current passing through a circuit. This energy is on the low-frequency, non-ionizing end of the electromagnetic spectrum. Unlike the ionizing radiation generated by the sun, X-ray machines, and radioactive elements, non-ionizing radiation is not harmful to living cells.
However, an electronic device's electromagnetic fields (EMFs) can harm other devices around it. That’s why devices must meet specific regulatory requirements for electromagnetic compatibility (EMC).
What Is EMC?
EMC stands for "electromagnetic compatibility." It examines a device's ability to (1) function as expected in its electromagnetic environment and (2) avoid interfering with the ability of other equipment in that environment to operate correctly.
In other words, EMC examines how well a device functions in the presence of electromagnetic energy and how much electromagnetic energy it produces that could potentially harm other devices in the environment. Electromagnetic compatibility can be achieved by limiting the unintentional creation and reception of potentially harmful electromagnetic energy (like EMI).
Difference Between EMI and EMC
“EMI” and “EMC” are often used interchangeably in device testing. Although related, they describe different things. EMI stands for “electromagnetic interference.” It describes electromagnetic energy that interferes with electronic devices.
Radio frequency interference (RFI) is among the most common forms of interference seen today. It occurs when the interfering electromagnetic energy falls within the radio frequency spectrum.
EMC seeks to mitigate the impact of EMI through effective shielding of susceptible equipment. Mitigating electromagnetic interference (EMI) leads to electromagnetic compatibility (EMC).
What Is EMC Testing, and Why Is It Important?
EMC testing determines how well a device can operate in the presence of electromagnetic energy in its environment and how much electromagnetic energy the device itself generates that could cause electromagnetic interference for other devices in its environment.
EMC testing focuses on two main issues: emissions suppression and susceptibility hardening (mitigation).
EMC Emissions Testing
EMC emissions testing examines the amount of electromagnetic energy a device produces that could cause electromagnetic interference for other devices in its environment. It falls into two main categories.
- Radiated emissions - Measures the electromagnetic disturbance a device creates itself.
- Conducted emissions - Measures the amount of internal electromagnetic energy which can travel from this device via a conductor (typically wires) that could cause EMI on other devices or systems in the environment.
EMC Immunity and Susceptibility Testing
EMC immunity and susceptibility testing examine how well a device functions in the presence of outside electromagnetic energy. Similar to emissions testing, immunity and susceptibility testing fall into two main categories.
- Radiated immunity/susceptibility - Measures how well a device will perform when exposed to electromagnetic energy it will encounter in its common environment.
- Conducted immunity/susceptibility - Measures how a piece of equipment responds to electromagnetic energy generated from another source but is conducted (typically along a cable) to the device under test.
EMC testing labs and operators also consider coupling. This is the mechanism or pathway through which emitted interference reaches the impacted device.
EMC testing is essential because the generation of electromagnetic energy is ubiquitous throughout modern electronic devices. Any device that uses digital techniques and has timing pulses greater than 9000 cycles per second is a source of unintentional radiation. That includes every device with a microprocessor. Intentional radiation is generated by alarm systems, cordless phones, remote controls, and other devices that transmit radio signals.
EMC Compliance, Standards, and Certification
Virtually every country requires the testing and certification of devices to ensure they meet EMC standards. In the U.S., those standards are regulated by the Federal Communications Commission (FCC) according to FCC Rules and Regulations Title 47, Part 15.
EMC Class A vs. EMC Class B
Regulated devices generally fall into two categories. Devices for commercial or industrial use fall into Class A, and devices for consumer use fall into Class B. Class B devices have more stringent limitations on EMC than Class A devices. There are exemptions for appliances, automobiles, industrial, medical and scientific equipment, and a few other categories of devices.
United States vs. International EMC Standards
Regulated products sold in the U.S must be tested according to the procedures outlined in ANSI Standard C63.4. The objective is to ensure that they meet standards for RF emissions in the 9kHz to 40GHz frequency range. The rules may require a declaration of conformity, verification, or certification, depending on the type of device. The FCC regulates only emissions, not susceptibility/immunity of devices under test.
The E.U. regulates both emissions and susceptibility/immunity according to IEC 61000 standards. The EMC Directive (2014/30/EU, which replaced 2004/108/EC) states that equipment must be tested for compliance and labeled accordingly. Many other countries require compliance with either the FCC or E.U. standards, although some countries have developed their own similar regulations. In some cases, testing must be performed by a certified lab.
How Enconnex Can Help with EMC and EMI Testing
The Enconnex DevShield and DevRack lines of real device testing racks and cabinets are designed to facilitate testing under various conditions. While they won’t provide the measurements needed for formal EMC testing, verification, and certification, they are useful tools for determining how devices perform under real-world conditions.
What Are DevShield and DevRack?
DevShield cabinets are constructed from aerospace-grade shielded aluminum with copper-nickel gaskets shielding all the seams and penetrations of the cabinet for maximum protection. They provide 85dB of attenuation for frequencies ranging from 1MHz to 10 GHz (our 5G version provides 75dB at 1MHz-40GHz). With DevShield, product testing teams can place a device in the cabinet to see if it impacts the performance of other common devices within the cabinet. DevRack products are unshielded. They allow testing teams to see how a device performs in the presence of everyday EMI.
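For context, shielding effectiveness expressed in decibels maps to a field-strength ratio through the standard 20·log10 relationship. A small sketch (only the dB figures above come from the product description; the conversion itself is generic):

```python
import math

def attenuation_ratio(shielding_db: float) -> float:
    """Convert shielding effectiveness in dB to the ratio of incident to transmitted field strength."""
    return 10 ** (shielding_db / 20)

print(f"{attenuation_ratio(85):,.0f}")  # ~17,783x reduction in field strength at 85 dB
print(f"{attenuation_ratio(75):,.0f}")  # ~5,623x at 75 dB
```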
Both lines of products are designed to make real device testing more efficient. They enable real device cloud configurations and provide a better alternative vs. emulators and simulators. Hundreds of devices can be housed within the footprint of a traditional data center rack, with pull-out shelves for easy device access. Powerful integrated fans help dissipate heat, while cable management solutions, wireless access point mounting brackets, and other accessories keep the environment neat and organized.
What is DefenseShield?
Our DevShield product was inspired by our DefenseShield product. DefenseShield offers the same level of shielding in the same form factor but is designed for cybersecurity applications. It uses its EMI shielding and the resulting EMC environment to create a secure enclosure where data is protected from threats such as side-channel attacks, in which hackers decode RF signals emanating from equipment.
Whether you need to protect your information from threats such as side-channel attacks or test devices for EMC, Enconnex is your source. Contact our experienced team for expert guidance on setting up your lab environment.
You probably haven’t heard of Kar-Wing Lau, but your children and grandchildren may have him to thank for saving millions (if not billions) of metric tons of carbon dioxide from entering the atmosphere.
It takes a lot of electricity to power a data center. In fact, according to the US Department of Energy, data centers account for roughly 2% of energy consumption in the US, and just over 1% of energy consumption globally. Unfortunately, a substantial portion of this electricity is generated from burning fossil fuels. According to SuperMicro’s 2021 Data Centers and the Environment report, data centers are responsible for 2% of the world’s greenhouse gas emissions.
One of the biggest factors in data center energy consumption is air cooling. Servers generate a lot of heat, and if they get too hot, they stop working. Data centers invest a lot of money and resources towards keeping servers cool enough to function, including devoting space in the data center to housing and supporting chilled water or mechanical air cooling equipment. These latter systems are essentially large air conditioning units that continuously blow cold air near the servers. As you can imagine, this process is inefficient and, as a result, costly.
There are other methods that rely on outside air – with or without adiabatic support systems – to cool data centers. While more environmentally friendly than traditional air cooling, these methods have limitations of their own. Outside air can contain harmful contaminants and particulates along with being subject to rapid swings in humidity. And systems using adiabatic pre-cooling consume massive volumes of water to help reject the heat. What’s more, not all data centers can be situated in exceptionally cool or exceptionally dry climates.
Explosive new demands for data streaming brought on by the COVID-19 pandemic, combined with new data-intense technologies such as artificial intelligence (AI), IoT, 5G, high performance computing and cryptocurrency mining, require cooling capabilities beyond the limits of air and other cooling methods. In all these cases, it’s not just a question of reducing carbon emissions – it’s a practical challenge in terms of cost, space, and capacity.
A Technological and Ecological Turning Point
In 2012, Kar-Wing Lau had a radical idea. He and his engineering team would build the most energy-efficient data center in Hong Kong. The brutally hot and humid climate, high energy costs, spatial constraints imposed by a densely populated city, and the immense data processing needs of the company meant that traditional server cooling technologies would not be sustainable.
Out of necessity and ingenuity, Kar-Wing and 3M Corporation pioneered a groundbreaking new cooling technology that could address and overcome all of these challenges. Enter 2-phase liquid immersion cooling, which, as the name implies, involves immersing servers in a fluid, allowing the heat to be boiled away.
Other forms of immersion cooling had been around for a while. But the practical implementation of so-called “single-phase immersion” cooling has several drawbacks. The coolant, for example, must be pumped through an additional heat exchanger before rejecting heat to a water cooling circuit. 2-phase immersion cooling is self-contained. It’s also the case that single-phase immersion often employs mineral oil as the cooling fluid. This makes cleaning off IT equipment for servicing a major operation.
2-phase immersion cooling represents an alternative solution to these single-phase systems. This process relies on a non-toxic, environmentally safe dielectric fluid that, unlike oils, doesn’t require cleanup of IT hardware taken out for maintenance. Heat from the IT hardware causes the fluid to boil, transforming it from a liquid to a gas. This is called a phase change, and is phase 1 of the process. The benefit of this process is that once a fluid reaches its boiling point, it will not get any hotter. From then on, all of the heat energy goes into driving the phase change.
Phase 2 commences when the gas vapor turns back into a liquid, thanks to a condensing coil positioned above the surface of the fluid where the vapor accumulates. The condensed fluid falls back into the tank as droplets, and the cycle begins again. No additional coolant pumps are required because the process is self-contained, and the fluid itself rarely has to be replaced. As a plus, if IT gear is removed from the DataTanks™, the fluid evaporates quickly and cleanly, making the equipment very easy to service. While that evaporation raises a concern about potential fluid loss, LiquidStack has developed patented technology to drastically minimize the risk.
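A rough back-of-the-envelope calculation shows why boiling is such an effective way to move heat. The latent-heat figure below is an assumed order-of-magnitude value for an engineered dielectric coolant, not a published specification:

```python
# Rough sketch: how much fluid must vaporize per second to carry away a server's heat.
latent_heat_kj_per_kg = 100.0   # assumed order-of-magnitude value for a dielectric coolant
server_heat_kw = 1.0            # 1 kW of heat = 1 kJ dissipated per second

vapor_rate_kg_per_s = server_heat_kw / latent_heat_kj_per_kg
print(f"{vapor_rate_kg_per_s * 1000:.0f} g of fluid vaporized per second")  # ~10 g/s

# The vapor condenses on the coil above the bath and falls back in,
# so the same fluid keeps cycling with no pumps in the loop.
```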
Using this liquid cooling technology, Lau and his team built a powerful 500kW data center in Hong Kong. Despite Hong Kong’s hot and humid climate, the data center used nearly 95% less energy for cooling than comparable data centers, saving a staggering $64,000 on electricity costs per month. Additionally, the IT equipment in the data center used 10 times less space than traditional data centers. The entire system was located in an urban high-rise building and fits into the size of a standard shipping container.
“This data center project demonstrates the elegance of 2-phase immersion cooling and showcases that it has what it takes to be the new gold standard in the industry,” Lau said when the data center first launched. “Many of the companies that we partner with see the immediate benefits – technically and economically – and that it’s commercially viable.”
“There’s no denying,” he added, “that 2-phase immersion cooling has great potential for growth and will play an important role in the future.”
2-Phase Immersion Cooling Will Be The Standard
The massive savings in energy costs, space, and carbon emissions offered by 2-phase immersion cooling will have an impact across all industries. By eliminating the need for bulky heat sinks and space devoted to fans and air conditioning, it is already revolutionizing the design of data centers, high-performance computers, and edge-computing units. And by drastically reducing the amount of energy essentially wasted on cooling in data centers and elsewhere, it will shrink the carbon footprint of our entire digital infrastructure.
Autumn is a special time of year. The leaves change color and fall from the trees in preparation for the cold winter months. Children head back to school and football season kicks off. But perhaps the biggest indicator of the fall season is that everywhere you go and nearly everything you purchase is pumpkin. From pumpkin spice coffees and treats to pumpkin bread and ice cream. All that is in addition to the crates upon crates of pumpkins for Halloween decorations and jack-o-lantern carving that are found in local grocery markets and pharmacies.
The annual demand for pumpkins is enormous. According to the U.S Department of Agriculture (USDA), every state in America produces pumpkins – with nine states (Illinois, Pennsylvania, New York, Indiana, Michigan, California, Ohio, North Carolina, and Texas) producing 75 percent of the market’s total crop yield. In 2020, greater than 66,000 acres of pumpkins were harvested resulting in more than 2 billion pounds of pumpkins. So, how do U.S. farmers meet this demand and ensure plentiful crops are grown, harvested, and delivered to stores across the U.S. by September 1st each year?
IoT Sensors and Pumpkins
The Internet of Things (IoT) consists of billions of physical devices connected to the internet which constantly share and collect data. IoT-enabled sensors help to monitor and efficiently track the growth and delivery of pumpkins during their entire lifecycle. From the soil in which they’re planted to the truck during transport and also to the store shelves where they are sold each September and October.
Currently, a few technology options help ensure regular connectivity for IoT sensors. Higher-bandwidth applications that move larger amounts of data may rely on cellular networks or on short-range technologies such as Wi-Fi and BLE. These are viable options for mission-critical or video-rich applications, but they are limited in range and are rarely practical beyond nearby, land-based deployments.
Another option is Low-Power Wide-Area Networks (LPWAN), which deliver massive numbers of small data sets across long ranges. LPWAN can carry a great deal of data while using very limited bandwidth, which makes it ideal for long-range and remote deployments, and the same holds for connectivity in sea- and sky-based implementations.
LPWAN is reliable and secure and delivers pinpoint accuracy when tracking assets across the entire supply chain and product lifecycle. The ability to know, at any instant, the health of your product, where it is in transit, and whether an unintended error has occurred in the delivery process is powerful. This real-time accuracy and insight help organizations optimize their workflows and reduce operational costs, while also mitigating potential disasters or product loss before they occur.
Tracking & Monitoring
For pumpkins, IoT-enabled sensors operating on the LoRaWAN® standard are deployed across farms to track and monitor soil conditions and to raise alerts when they drift out of range. To ensure the pumpkins grow on schedule and meet autumn demand, farmers closely track the water level in the soil: too much water can harm the crop, and too little can do the same. The optimal growing environment produces the biggest crop yield. When the sensors indicate that the pumpkins are ready for harvesting, each is loaded into crates and moved onto geo-located trucks that carry them across the country.
When pumpkins are destined for consumption, the harvest may be transported in climate-controlled trucks. These trucks maintain a steady temperature, regardless of the outside climate, to preserve the food and limit unwanted spoilage. Trucks carrying the pumpkins are tracked throughout the journey from the fields to the grocery stores, maintaining safe temperature levels the whole way. Sensors inside the climate-controlled cargo hold can remotely alert operators if the temperature changes or an error reading occurs, allowing operators to contact the driver immediately to adjust the temperature or fix the cargo hold.
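Conceptually, the alerting logic on top of these sensor readings can be very simple. The sketch below is illustrative only; the field names and threshold values are invented, not taken from any real LoRaWAN deployment:

```python
# Illustrative sketch: threshold-based alerting on incoming sensor readings.
SOIL_MOISTURE_RANGE = (30.0, 60.0)   # percent volumetric water content considered healthy (assumed)
TRUCK_TEMP_RANGE = (10.0, 15.0)      # °C band for pumpkins in transit (assumed)

def check_reading(sensor_id, kind, value):
    """Return an alert string if the reading falls outside its healthy band, else None."""
    low, high = SOIL_MOISTURE_RANGE if kind == "soil_moisture" else TRUCK_TEMP_RANGE
    if value < low:
        return f"ALERT {sensor_id}: {kind} low ({value})"
    if value > high:
        return f"ALERT {sensor_id}: {kind} high ({value})"
    return None  # within range, no alert

readings = [
    ("field-07", "soil_moisture", 24.5),   # too dry: trigger irrigation
    ("truck-12", "truck_temp", 18.2),      # cargo too warm: notify the driver
    ("field-09", "soil_moisture", 41.0),   # healthy
]
for sensor_id, kind, value in readings:
    alert = check_reading(sensor_id, kind, value)
    if alert:
        print(alert)
```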
When the pumpkins make it to their final destination, sensors help track their position inside the store. Sensors can monitor the shelves that feature pumpkins and pumpkin-flavored consumer goods and notify store employees when stock is low and re-shelving is required. This is especially important when the window for selling pumpkins is only about 60 days. Additionally, insights gleaned from where the pumpkins sit within the store, and how well they are selling (or not), directly shape how store associates restock the produce elsewhere in the store to increase sales. It is this real-time analytics that helps store employees, and in turn farmers, sell their products year after year.
Thank the Sensors for Your Pumpkins!
Looking ahead to the fall season, when you make a pumpkin-themed purchase, feel free to thank the sensors that helped the products get to your home efficiently and securely. Most Americans will have at least some portion of that two billion pounds of pumpkin harvest in their home, and IoT-enabled sensors, running on the LoRaWAN standard, helped make that happen. Now, feel free to head out for a pumpkin spice latte.
Every October, National Cybersecurity Awareness Month (NCSAM) promotes the importance of cybersecurity and makes resources available to help people stay safer and more secure online.
This year’s NCSAM theme is “Own It. Secure It. Protect It.” The theme emphasizes the role each individual person plays in online safety and the importance of taking proactive steps to enhance cybersecurity both at home and in the workplace.
Over the next few weeks, we’ll explore each of these three core components — beginning with “Secure It.”
When It Comes to Securing Your Digital Profile, MFA Is Table Stakes
Cybercriminals are very good at getting personal and sensitive information from unsuspecting victims. As technology evolves, their methods have become increasingly targeted.
Technology users have a responsibility to protect against cyber threats by learning about the security features available on the devices and in the software they use. It’s also critical to utilize multiple types of authentication — not just a password — to protect your devices and online services such as bank accounts, email and social media accounts.
Typically, this begins with implementing multi-factor authentication (MFA), a security mechanism in which individuals must present two pieces of identity verification when logging into an account. In most cases, MFA includes a password and some kind of authentication on the user’s mobile device — SMS is the most common. Using MFA means that even if a cyber attacker manages to figure out your super strong password, they still would not be able to gain access without the other piece of authentication.
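To make the "something you have" factor concrete, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238), the kind an authenticator app generates, can be derived from a shared secret. The secret shown is a made-up example, and real deployments should rely on vetted libraries rather than hand-rolled code.

```python
# Minimal TOTP (RFC 6238) sketch -- illustrative only; use a vetted library in practice.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a made-up base32 secret shared between the service and the authenticator app:
print(totp("JBSWY3DPEHPK3PXP"))
```

The code changes every 30 seconds, so a stolen password alone is not enough to log in; the attacker would also need the current code from the device holding the secret.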
According to Microsoft research, utilizing MFA makes your accounts 99.9 percent less likely to be compromised. It’s an important cybersecurity best practice that doesn’t take much effort to implement. Unfortunately, however, motivated cyber attackers are always discovering potential work-arounds. Recent headlines reveal that while MFA is a critical step, it is also a target.
Overcoming Blind Spots in MFA with Biometric Technology
Earlier this month, the FBI issued a warning stating that it has “observed cyber actors circumventing multi-factor authentication through common social engineering and technical attacks.”
As more people implement MFA as an additional layer of cybersecurity protection, attackers are turning their attention and efforts toward exploiting blind spots inherent in MFA. Consider that MFA typically relies on “something you know,” (e.g., a security question) and “something you have,” (e.g., a laptop or smartphone). Here’s the problem — nothing about either of these methods actually confirms identity. Something you have can be stolen and something you know can be learned.
The attack on Twitter CEO Jack Dorsey is an example of how attackers can exploit these MFA blind spots. In late August, Dorsey’s personal Twitter handle was compromised. Nearly two-dozen offensive tweets and re-tweets were posted before the content was removed. The social media attack was just one in a series of Twitter account takeovers targeting a string of celebrities and social media influencers.
The attack method, known as a “SIM swap,” is accomplished when an attacker either convinces or bribes a mobile carrier employee to switch the number associated with a SIM card to another mobile device. According to the New York Times, these switches often cost as little as $100 for each phone number.
After the switch is made, the attacker can intercept any MFA codes sent by text message and essentially take control of the user’s entire phone number. This allows the attacker to gain access to everything from a person’s social media to banking, email and even cryptocurrency accounts. Some attackers are even using this SIM swap method to target and compromise high-profile politicians, causing reputation damage and spreading misinformation.
So how can you protect yourself? The FBI is urging individuals and organizations to continue using MFA and also to take security one step further by adding biometric authentication. This will make it harder for attackers to trick users into disclosing MFA codes or use technical interception to create them.
Biometric authentication uses biometric identifiers such as fingerprints, iris scans and voice patterns for identification and access control purposes. These biometric identifiers are completely unique to the individual. As an individual, you can easily activate biometric functionalities already available on your devices and use them for authentication purposes.
There is no silver bullet for cybersecurity — it takes a strong mix of technology and security best practices to protect yourself and your organization in the digital era. The FBI’s guidelines around MFA and common-sense biometrics applications are spot on — as both are integral components of a multi-layered security approach. (This is sometimes referred to as Zero Trust for enterprise organizations).
Don’t wait until it’s too late. Take action this Cybersecurity Awareness Month and “Secure It” by applying these important layers of security to your devices and online accounts. | <urn:uuid:3d6c006a-f1e4-416d-ad2b-df2da79e99ea> | CC-MAIN-2022-40 | https://www.cyberark.com/resources/blog/mfa-and-biometric-authentication-secure-the-digital-profile | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00376.warc.gz | en | 0.93372 | 1,022 | 3.046875 | 3 |
Wireless Monitoring: Risks, Benefits
Sizing Up 'Medical Body Area Networks'
The FCC recently set aside broadband spectrum for wireless patient monitoring systems. How should the industry respond to the security risks? Medical device expert Dale Nordenberg, M.D., offers insight.
The wireless systems are known as Medical Body Area Networks, which use sensors attached to patients to monitor vital signs. The sensors, linked to a hub, transmit data to other information systems, Nordenberg says in an interview with Information Security Media Group's Howard Anderson [transcript below].
Nordenberg, who's founder of the Medical Device Innovation, Safety and Security Consortium, says an assessment from the consortium doesn't reveal any increased risk from a security perspective. "What we see is that the same risks that people have been talking about would continue to persist, and what we're recommending is that we still need to work closely together to better understand the security risks for malware and for hacking that are currently associated with wireless devices."
To better understand the risks and to ensure the necessary security protocols are in place around these Medical Body Area Networks, the consortium is developing a conceptual framework for security that spans a medical device's entire lifecycle.
Organizations, Nordenberg says, need to create organizational structures that allow IT and engineering groups to work together when wireless device networks are implemented.
Also, organizations need to be "effectively monitoring for malware in their environment because any increase in malware in their environment would clearly increase their likelihood that a medical device or a Body Area Network would be potentially impacted," Nordenberg explains.
In the interview, Nordenberg also discusses:
- Consequences of Medical Body Area Networks;
- Security issues to be mindful of;
- Steps organizations should be taking to thwart malware and other risks.
In addition to his role leading the consortium, Nordenberg, a pediatrician, is CEO of Novasano Health and Science, a consulting firm that focuses on leveraging the strategic application of information resources. He formerly was a managing director in the healthcare practice at PricewaterhouseCoopers. And from 2002 through 2007, he held various positions at the Centers for Disease Control and Prevention, including associate director and CIO at the National Center for Infectious Diseases and senior adviser for strategic planning in the CDC's office of the CIO.
Medical Device Consortium
HOWARD ANDERSON: Why don't you describe the consortium and its mission for us briefly?
DALE NORDENBERG: The Medical Device Innovation, Safety and Security Consortium was founded nearly two years ago by leading healthcare delivery organizations because of the concerns that they had about medical devices and their vulnerability to malware and hacking, and the potential patient impact. We're a public/private partnership. In addition to the healthcare delivery organizations, we have members from the medical device manufacturing community and the broader technology industry community, as well as important government agencies that we work closely with.
Overall, we have three key goals. The first goal is to build this robust public/private partnership with all the medical device stakeholders. The second goal is to better define the security risks related to medical devices digitally enabled and networkable medical devices, both wired and wireless, so that we can be data driven in the way we assess the risk and the way we intervene. And the third goal area is to work collaboratively to develop short, medium and long-term strategies and tactics to mitigate the risk associated with security vulnerabilities and medical devices.
Medical Body Area Networks
ANDERSON: The Federal Communications Commission recently voted to set aside protected broadband spectrum for wireless medical devices known as Medical Body Area Networks. Can you please describe these wireless patient monitoring systems and address the significance of this decision?
NORDENBERG: The Medical Body Area Networks is really an evolutionary technology that's the result of the increasing leverage of technology for the monitoring and delivery of therapeutic modalities to a patient. The Medical Body Area Network really consists of one or more sensors that are primarily attached to a patient's body that then communicates through a hub that can then take the data from these sensors, aggregate these data and then the hub has the capability to then transmit these data to other systems within a hospital or other potential vendor for example that would then use this data to provide to physicians or providers to deliver care.
The Medical Body Area Networks are important in that they extend the healthcare system's capability to monitor patients beyond a hospital setting. For example, the wireless medical telemetry systems that have traditionally been deployed in hospitals and the rules for which from the FCC were established over ten years ago, I think roughly in the year 2000 - those systems are distinct from Medical Body Area Networks in that they are designed to be built within the hospital, where as the Medical Body Area Networks are really now expanding the ability and are dealing with devices that are able to be used beyond hospital facility walls.
ANDERSON: These networks can be used in a doctor's office, a nursing home, or even a patient's home in addition to a hospital, is that right?
NORDENBERG: That's absolutely correct. And there are going to be interesting consequences associated with that, so that for example the ability to literally monitor a patient 24/7 will result in the collection of a tremendous amount of data, whereas we monitored before in a much more intermittent fashion, which meant that the way we established our understanding of physiology and body metrics, if you will, which were acquired on this intermittent basis, is going to now give us perhaps an interesting view and perhaps some challenges in terms of the way healthcare professionals interpret body-related biometrics, because we will have much more data over a much longer period of time. That might mean that we will see things that we never really saw before. In the coming couple of years, from an epidemiologic and from a clinical perspective, this 24/7 monitoring and the massive amount of data that it will present to us may give us interesting windows into physiologic and patho-physiologic processes that we didn't have in shorter windows of telemetry if you will.
Also, we believe - from a healthcare system and from a clinical perspective - the ability to monitor patients beyond the walls of a hospital really could represent major advancements in the way we can better treat patients, get perhaps to quicker diagnoses, be quicker to deliver the right interventions and do this all in a more cost-effective way. All of the stakeholders in the healthcare ecosystem really believe that there's tremendous potential with regard to this new technology.
ANDERSON: What security issues are raised by using these wireless monitors that you just described, the Medical Body Area Networks, and does the FCC's action to set aside spectrum for the devices have an impact on security?
NORDENBERG: From our current assessment, the deployment of Body Area Networks and specifically the allocation of dedicated spectrum looks like a very positive action. Firstly, everybody recognizes that spectrum is a limited commodity. The ability to dedicate spectrum to the Medical Body Area Networks represents a mechanism that could provide optimal functioning for these devices, in that it allows the FCC and other industries and other partners to better coordinate and manage the spectrum so that it would minimize the opportunity for interference between devices. From what we're seeing right now and from our current assessment, we don't see that this dedication of spectrum represents any increased risk from a security perspective. What we see is that the same risks that people have been talking about would continue to persist, and what we're recommending is that we still need to work closely together to better understand the security risks for malware and for hacking that are currently associated with wireless devices.
When we talk about these risks, we're really talking about the entire information supply chain, and in this case would involve not just the sensors and the way the sensors communicate with the hubs of the Medical Body Area Networks, but also how the data would move from these hubs to the other data aggregating systems within a healthcare ecosystem.
Steps to Thwart Malware
ANDERSON: As hospitals, clinics and nursing homes and others take advantage of the new spectrum dedicated to Medical Body Area Networks, what steps should they be taking to thwart malware threats and make sure hackers don't access these new Medical Body Area Networks devices?
NORDENBERG: Taken holistically, we have to look at the entire ecosystem. The way our consortium is approaching this is we're developing a conceptual framework for security that really spans the whole life cycle of a medical device. What we mean by that is we need to not only look at what's happening within the organization, but also what's happening in the technology and the medical device manufacturing arena as well, because our ability to mitigate these vulnerabilities really rests on all of our shoulders, all the participants in this ecosystem.
If you're looking at the organization, if you're looking at the healthcare delivery organization, from that perspective what they can do is create organizational structures that really help the healthcare IT groups in the HDO, or the healthcare delivery organization, and the biomedical engineering groups. Have those two communities working very closely together so that when new devices are purchased or new medical device networks are implemented, from a wireless perspective [and] from a Body Area Network perspective, these two groups - the biomedical engineering group and the IT group - are working really closely together, because we see one of the major risks being that these two groups can often work in silos and therefore present IT and specifically security risks.
The other is to help ensure that the HDO is really very effectively monitoring for malware in their environment because any increase in malware in their environment would clearly increase their likelihood that a medical device or a Body Area Network would be potentially impacted and similarly looking at best practices for preventing hacking. Frankly, we see that the unintentional consequences of malware and its potential adverse impact on a medical device are probabilistically much more important than hacking. Hacking does not scale very well. Even if a device is hackable and it's been clear evidence that one or more medical devices are vulnerable to hacking, it would be a big effort to hack many devices simultaneously.
However, we clearly recognize the ability of malware on a very large scale to potentially disrupt more than one device and whole networks. So monitoring and doing robust surveillance is really critical from a malware perspective, and then understanding the fact that malware innovation is so robust that the healthcare delivery organization needs to be equally robust in its vigilance and its technologies that it deploys to monitor malware. From a consortium perspective what we're trying to do is work across many different healthcare delivery organizations to leverage the community to do the surveillance more effectively and to keep up innovation in surveillance and mitigation to try to keep pace with the innovation in the malware community.
From the perspective of the manufacturer, what our consortium is working on is to really help manufacturers understand what the vulnerabilities are and what the challenges are for the implementation, not of a single device or a single type of a device, but the healthcare delivery organization, which is the patient care entity, has the challenge of actually managing security and safety across literally thousands or tens of thousands of devices and how they interoperate with each other, and this is a very different risk profile than a single device.
What we're doing is we're leveraging our healthcare delivery organization leadership group to actually deliver a series of recommendations that will help manufacturers understand the challenges in the healthcare delivery organization, and to hopefully give them a roadmap for developing security features that will make the healthcare delivery organization environment a much safer one. In the short term, we can work at the organizational level and surveillance level, but really the medium and long-term objective is to help industry better understand the challenges so that the devices evolve and the coming design cycles [are] much safer.
ANDERSON: Are the security issues involved in Medical Body Area Networks dramatically different than the security involved in other wireless medical devices?
NORDENBERG: No. We believe that the security risks are essentially the same. In fact, the technologies that will be used in the Medical Body Area Networks are very similar essentially to the wireless technologies used in our current wireless telemetry networks. The good news is that from an innovation perspective that means that the medical device community can rapidly innovate in the area of Medical Body Area Networks. And also, it means that we for the most part believe that we understand the security risks because we're familiar with them in the current wireless medical world. Briefly, no we don't believe there's any increased risk and in fact we believe that we will be leveraging common technologies and we'll be able to address the security risks based on what we know already. | <urn:uuid:4b5ec3ed-3643-4bc8-85a6-ef0de2d8a5e6> | CC-MAIN-2022-40 | https://www.healthcareinfosecurity.com/interviews/wireless-monitoring-risks-benefits-i-1578 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00376.warc.gz | en | 0.958933 | 2,564 | 2.671875 | 3 |
NASA and the U.S. Geological Survey will jointly launch a new land monitoring satellite on Sept. 16 to support natural resource management.
The Landsat 9 spacecraft will operate alongside Landsat 8 to capture images of the land and help stakeholders manage water, crops, forests and other natural resources, the space agency said Tuesday.
The new satellite will also replace the existing Landsat 7, which launched to space in 1999. Landsat satellites have been monitoring Earth's land surface since 1972.
Orbital ATK, now part of Northrop Grumman, built Landsat 9 under a $129.9 million contract with NASA. The agency's Goddard Space Flight Center provided a thermal sensor instrument for the satellite and will manage the Landsat 9 mission.
A United Launch Alliance-made Atlas V 401 rocket will lift Landsat 9 from Vandenberg Space Force Base to space. | <urn:uuid:ed0553c7-0868-490a-8bcf-133303254355> | CC-MAIN-2022-40 | https://executivegov.com/2021/07/nasa-u-s-geological-survey-to-launch-landsat-9-in-september/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00376.warc.gz | en | 0.832413 | 178 | 3.453125 | 3 |
The new school year is an exciting and busy time for students, faculty, and staff of all ages. Because it is filled with new experiences and new faces, it is also a prime time for hackers, identity thieves, and other bad actors to take advantage.
The most important thing to remember is that you are a target… we all are! Bad actors are not only after your data but often your device access, or even just your credentials to gain access to a network or accounts you or your device connects to. Kids and young adults are even more likely to fall for scams. So, no matter what age a user is, if they are old enough to use a computer, they are old enough to be educated about the dangers of phishing and other online threats. Personal and financial well-being, as well as the reputation of the institution or organization you associate with, should be an important concern to anyone using a device that connects to the internet.
HERE ARE THE TOP 10 TIPS TO KEEP YOURSELF SECURE
1. Install antivirus security software and keep all your software up to date.
2. Always think twice before clicking on links or opening attachments, even if they look like they’re from someone you know. Avoid clickbait, online quizzes, tabloid headers, ‘free’ offers, or unsolicited ads. If you’re not sure, contact the sender by a method you know is legitimate to confirm they sent it. Also, be careful what you download.
- Download only from reputable sites and sources.
- Scan files for viruses before downloading them.
- Pay attention to the file extensions.
3. Beware of phishing! Don’t give out your personal information. Remember, con artists know how to fake their identity. The best way to avoid scams is to approach all unexpected messages, offers, and phone calls with healthy skepticism. (Watch Phishing Training). Watch out for typical beginning-of-the-year scams.
- Email scams, containing “important information about your school account,” or a “problem with your registration”.
- Student-targeted scams are designed to cheat you out of money, such as scholarship scams, fake “tuition payment processors”, textbook rental or book-buying scams, housing scams, tutoring scams, and work-from-home scams. (See “tuition payment processor” scams – from 2016 but still current: https://www.forbes.com/sites/johnwasik/2016/09/11/scam-alert-avoid-college-payment-processors/ or scholarship scams: http://www.fraud.org/back_to_school_scams)
- “Tech support” scams where you get a call supposedly from “the Service Desk” or even “Microsoft” or “Apple” telling you there’s a problem with your computer.
- IRS scams, demanding that students or their parents wire money immediately to pay a fake “federal student tax”. Also beware of messages asking for your login information, no matter how legitimate they may look; no one other than you needs to know your passwords. (See info from the IRS about the fake “federal student tax” – from 2016 but still current: https://www.irs.gov/uac/newsroom/irs-warns-of-latest-scam-variation-involving-bogus-federal-student-tax)
- Fake friend requests on social media, or fake Dropbox or Google Docs notices
4. Protect your passwords. Make them long and strong, never reveal them to anyone, and use different passwords for different accounts, or consider using a secure, encrypted password manager. A short example of generating a strong random password appears after this list. (Watch Password Safety Training)
5. Use multi-factor authentication or two-factor authentication in addition to having that strong password. Setting up multi-factor authentication on each and every account and device adds an additional layer of protection. (Watch MFA – 2FA Training)
6. Use secure websites (look for HTTPS), secure Wi-Fi, and trusted devices when accessing personal data while browsing, banking, or shopping online. Free Wi-Fi networks are the most common sources of online security problems. Make sure your internet connection is secure by using a secure VPN connection.
7. Learn the dangers of social media and be cautious of what you’re sharing on social networks. Many of the most dangerous security offenses by people are things that they might not even think about as risky behavior. Also, remember your personal posts can affect the organizations you are associated with. The internet does not have a delete key. (Watch Online Safety Training)
8. Never leave devices unattended. Devices go missing in the blink of an eye, and most of us have a good amount of data on our devices that a bad actor would love to take advantage of. Also, don’t forget to log off or lock your computer when stepping away, even when it’s in a secure location. Log out of online accounts when you are done using them, and clear your browsing history.
9. Keep your privacy settings on! Web browsers and mobile operating systems have settings available to protect your privacy online.
10. Back up your data in multiple places and have a recovery plan.
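As mentioned in tip 4, here is a minimal sketch of generating a long, random password with Python’s standard library. The length and character set are arbitrary choices for illustration; a dedicated password manager remains the easier option for most people.

```python
# Minimal sketch: generate a strong random password (length and charset are arbitrary choices).
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```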
It’s everyone’s job to ensure cyber-safety at school, work, or at home, and training is the most effective way to learn to detect and avoid cyberattacks.
Whether the threat comes from a website data breach, a cyber-attack, or a malicious email, people are the biggest cybersecurity vulnerability. A lack of cybersecurity knowledge is a costly mistake to make, but keeping yourself secure with the right training can reduce the risk of falling victim to cybercrime.
If you have questions, contact us for more information about our services or schedule a consultation with one of our Security Experts. | <urn:uuid:393598ce-b715-4a30-8518-d8e50d11e83a> | CC-MAIN-2022-40 | https://bravertechnology.com/cybersecurity-tips-for-back-to-school/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00376.warc.gz | en | 0.922235 | 1,236 | 3.046875 | 3 |
In this course, the student will learn about data engineering patterns and practices as they pertain to working with batch and real-time analytical solutions using Azure data platform technologies. Students will begin by understanding the core compute and storage technologies that are used to build an analytical solution. They will then explore how to design analytical serving layers and focus on data engineering considerations for working with source files. The students will learn how to interactively explore data stored in files in a data lake. They will learn the various ingestion techniques that can be used to load data using the Apache Spark capability found in Azure Synapse Analytics or Azure Databricks, or how to ingest using Azure Data Factory or Azure Synapse pipelines. The students will also learn the various ways they can transform the data using the same technologies that are used to ingest it. The student will spend time during the course learning how to monitor and analyze the performance of analytical systems so that they can optimize the performance of data loads, or of queries that are issued against the systems. They will understand the importance of implementing security to ensure that the data is protected at rest and in transit. The student will then learn how the data in an analytical system can be used to create dashboards or build predictive models in Azure Synapse Analytics.
After completing this course, students will be able to:
This module provides an overview of the Azure compute and storage technology options that are available to data engineers building analytical workloads. This module teaches ways to structure the data lake, and to optimize the files for exploration, streaming, and batch workloads. The student will learn how to organize the data lake into levels of data refinement as they transform files through batch and stream processing. Then they will learn how to create indexes on their datasets, such as CSV, JSON, and Parquet files, and use them for potential query and workload acceleration.
Lab: Explore Compute and Storage Options for Data Engineering Workloads
After completing this module, students will be able to:
This module teaches how to design and implement data stores in a modern data warehouse to optimize analytical workloads. The student will learn how to design a multidimensional schema to store fact and dimension data. Then the student will learn how to populate slowly changing dimensions through incremental data loading from Azure Data Factory.
Lab: Designing and Implementing the Serving Layer
This module explores data engineering considerations that are common when loading data into a modern data warehouse from files stored in an Azure Data Lake, as well as the security considerations associated with storing files in the data lake.
Lab: Data Engineering Considerations
In this module, students will learn how to work with files stored in the data lake and external file sources, through T-SQL statements executed by a serverless SQL pool in Azure Synapse Analytics. Students will query Parquet files stored in a data lake, as well as CSV files stored in an external data store. Next, they will create Azure Active Directory security groups and enforce access to files in the data lake through Role-Based Access Control (RBAC) and Access Control Lists (ACLs).
Lab: Run Interactive Queries Using Serverless SQL Pools
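As a rough illustration of this kind of interactive query, the sketch below submits an OPENROWSET query for Parquet files to a Synapse serverless SQL pool endpoint from Python via pyodbc. The endpoint, storage path, and credentials are placeholders, and the exact connection details depend on your environment.

```python
# Sketch: query Parquet files in a data lake through a Synapse serverless SQL pool.
# The server name, storage URL, and credentials below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myworkspace-ondemand.sql.azuresynapse.net;"   # placeholder serverless endpoint
    "Database=master;Uid=sqladminuser;Pwd=<password>;"    # placeholder credentials
)

sql = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/files/sales/*.parquet',  -- placeholder path
    FORMAT = 'PARQUET'
) AS rows;
"""

for row in conn.cursor().execute(sql):
    print(row)
```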
This module teaches how to explore data stored in a data lake, transform the data, and load data into a relational data store. The student will explore Parquet and JSON files and use techniques to query and transform JSON files with hierarchical structures. Then the student will use Apache Spark to load data into the data warehouse and join Parquet data in the data lake with data in the dedicated SQL pool.
Lab: Explore, Transform, and Load Data into the Data Warehouse Using Apache Spark
This module teaches how to use various Apache Spark DataFrame methods to explore and transform data in Azure Databricks. The student will learn how to perform standard DataFrame methods to explore and transform data. They will also learn how to perform more advanced tasks, such as removing duplicate data, manipulate date/time values, rename columns, and aggregate data.
Lab: Data Exploration and Transformation in Azure Databricks
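The short PySpark sketch below illustrates the kinds of DataFrame operations described in this module: deduplication, renaming, date handling, and aggregation. The input path and column names are invented for illustration.

```python
# Sketch of common DataFrame transformations in Azure Databricks (names and paths are illustrative).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/mnt/datalake/raw/orders")        # placeholder mount/path

cleaned = (
    df.dropDuplicates(["order_id"])                        # remove duplicate rows
      .withColumnRenamed("ord_ts", "order_timestamp")      # rename a column
      .withColumn("order_date", F.to_date("order_timestamp"))
)

daily_totals = (
    cleaned.groupBy("order_date")
           .agg(F.sum("amount").alias("total_amount"),     # aggregate per day
                F.countDistinct("customer_id").alias("customers"))
)

daily_totals.show()
```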
This module teaches students how to ingest data into the data warehouse through T-SQL scripts and Synapse Analytics integration pipelines. The student will learn how to load data into Synapse dedicated SQL pools with PolyBase and COPY using T-SQL. The student will also learn how to use workload management along with a Copy activity in a Azure Synapse pipeline for petabyte-scale data ingestion.
Lab: Ingest and Load Data into the Data Warehouse
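For reference, here is a hedged sketch of the COPY-based load pattern, again driven from Python. The dedicated SQL pool endpoint, target table, storage path, and credential choice are placeholders and will differ per environment.

```python
# Sketch: load Parquet files into a dedicated SQL pool table with the COPY statement.
# Endpoint, table, path, and credential choices are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myworkspace.sql.azuresynapse.net;"         # dedicated SQL pool endpoint (placeholder)
    "Database=SalesDW;Uid=sqladminuser;Pwd=<password>;"
)
conn.autocommit = True

copy_sql = """
COPY INTO dbo.StageSales
FROM 'https://mydatalake.dfs.core.windows.net/files/sales/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')   -- one of several supported auth options
);
"""

conn.cursor().execute(copy_sql)
print("COPY load submitted")
```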
This module teaches students how to build data integration pipelines to ingest from multiple data sources, transform data using mapping data flows, and perform data movement into one or more data sinks.
Lab: Transform Data with Azure Data Factory or Azure Synapse Pipelines
In this module, you will learn how to create linked services, and orchestrate data movement and transformation using notebooks in Azure Synapse Pipelines.
Lab: Orchestrate Data Movement and Transformation in Azure Synapse Pipelines
In this module, students will learn strategies to optimize data storage and processing when using dedicated SQL pools in Azure Synapse Analytics. The student will know how to use developer features, such as windowing and HyperLogLog functions, use data loading best practices, and optimize and improve query performance.
Lab: Optimize Query Performance with Dedicated SQL Pools in Azure Synapse
In this module, students will learn how to analyze then optimize the data storage of the Azure Synapse dedicated SQL pools. The student will know techniques to understand table space usage and column store storage details. Next the student will know how to compare storage requirements between identical tables that use different data types. Finally, the student will observe the impact materialized views have when executed in place of complex queries and learn how to avoid extensive logging by optimizing delete operations.
Lab: Analyze and Optimize Data Warehouse Storage
In this module, students will learn how Azure Synapse Link enables seamless connectivity of an Azure Cosmos DB account to a Synapse workspace. The student will understand how to enable and configure Synapse link, then how to query the Azure Cosmos DB analytical store using Apache Spark and SQL serverless.
Lab: Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link
In this module, students will learn how to secure a Synapse Analytics workspace and its supporting infrastructure. The student will observe the SQL Active Directory Admin, manage IP firewall rules, manage secrets with Azure Key Vault and access those secrets through a Key Vault linked service and pipeline activities. The student will understand how to implement column-level security, row-level security, and dynamic data masking when using dedicated SQL pools.
Lab: End-To-End Security with Azure Synapse Analytics
In this module, students will learn how to process streaming data with Azure Stream Analytics. The student will ingest vehicle telemetry data into Event Hubs, then process that data in real time, using various windowing functions in Azure Stream Analytics. They will output the data to Azure Synapse Analytics. Finally, the student will learn how to scale the Stream Analytics job to increase throughput.
Lab: Real-Time Stream Processing with Stream Analytics
In this module, students will learn how to ingest and process streaming data at scale with Event Hubs and Spark Structured Streaming in Azure Databricks. The student will learn the key features and uses of Structured Streaming. The student will implement sliding windows to aggregate over chunks of data and apply watermarking to remove stale data. Finally, the student will connect to Event Hubs to read and write streams.
Lab: Create a Stream Processing Solution with Event Hubs and Azure Databricks
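Below is a rough Structured Streaming sketch of the watermarking and windowing ideas covered in this module. It reads from the Kafka-compatible endpoint that Azure Event Hubs exposes; the namespace, event hub name, and security options are placeholders (the full SASL settings are omitted), and the dedicated Event Hubs Spark connector could be used instead.

```python
# Sketch: windowed aggregation over a stream with watermarking (placeholders throughout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Event Hubs exposes a Kafka-compatible endpoint; connection details below are placeholders,
# and the additional SASL options (sasl.mechanism, sasl.jaas.config) are omitted for brevity.
raw = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "mynamespace.servicebus.windows.net:9093")
         .option("subscribe", "vehicle-telemetry")
         .option("kafka.security.protocol", "SASL_SSL")
         .load()
)

events = raw.select(
    F.col("timestamp"),
    F.get_json_object(F.col("value").cast("string"), "$.speed").cast("double").alias("speed"),
)

windowed = (
    events.withWatermark("timestamp", "10 minutes")        # discard data arriving too late
          .groupBy(F.window("timestamp", "5 minutes"))     # 5-minute tumbling windows
          .agg(F.avg("speed").alias("avg_speed"))
)

query = windowed.writeStream.outputMode("append").format("console").start()
```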
In this module, the student will learn how to integrate Power BI with their Synapse workspace to build reports in Power BI. The student will create a new data source and Power BI report in Synapse Studio. Then the student will learn how to improve query performance with materialized views and result-set caching. Finally, the student will explore the data lake with serverless SQL pools and create visualizations against that data in Power BI.
Lab: Build Reports Using Power BI Integration with Azure Synapse Analytics
This module explores the integrated, end-to-end Azure Machine Learning and Azure Cognitive Services experience in Azure Synapse Analytics. You will learn how to connect an Azure Synapse Analytics workspace to an Azure Machine Learning workspace using a Linked Service and then trigger an Automated ML experiment that uses data from a Spark table. You will also learn how to use trained models from Azure Machine Learning or Azure Cognitive Services to enrich data in a SQL pool table and then serve prediction results using Power BI.
Lab: Perform Integrated Machine Learning Processes in Azure Synapse Analytics
Successful students start this course with knowledge of cloud computing and core data concepts and professional experience with data solutions.
The primary audience for this course is data professionals, data architects, and business intelligence professionals who want to learn about data engineering and building analytical solutions using data platform technologies that exist on Microsoft Azure. The secondary audience for this course is data analysts and data scientists who work with analytical solutions built on Microsoft Azure.
4-Day Course
As healthcare becomes a growing concern among many Americans, there is a potential to unlock new initiatives that help improve the well-being of many and possibly save lives. Big data has many uses, including identifying health issues as they appear, getting a better understanding of certain conditions and working out options for treatments. However, compared to other industries, the use of healthcare performance management that incorporates big data remains fairly low. Part of this may be due to hesitance on providers, but there is also an issue of bottlenecks that are commonplace in the industry. If predictive analytics is to make any headway in the field, it must address these bottlenecks.
A matter of regulation
There are various hold-ups that can prevent the effective development of a strong data stream at any touch point in healthcare, according to CIO.com. The most significant of these is government regulations regarding recordkeeping and patient privacy. Any solution will require compliance with HIPAA at the bare minimum, among other sets of laws often associated with the Affordable Care Act. There are also related factors such as the willingness of patients to actually provide this valuable data. While the rise of wearable technology enabled a greater degree of information to open up, many people still feel uncomfortable with the idea of giving away their health details, even when it's anonymous.
Another problem that exists in the field of healthcare is data interoperability. There are currently many different standards in play that make it difficult to pass along details to different core constituents, whether they are primary care providers or insurers. One way around this is to implement unique workarounds that often involve methods such as crowdsourcing. The Harvard Medical School demonstrated this in a partnership with a community of more than 450,000 algorithm specialists and developers to analyze key elements of the immune system. It believes that this could apply to other areas of healthcare as well.
The most distinguishing issue at this point, however, remains governance of data within each individual institution. The management of insurers and healthcare providers often fails to share data even within their own organizations, essentially creating data silos. Consequently, the most severe bottleneck is the lack of coordination of data at the managerial level. A healthcare management system can help manage this problem, but only if the institutions using it can offer a disciplined line of support. That means greatly expanding collaboration of data across stakeholders and others.
A new study led by Duke University has been looking into the ways we can use Big Data in the fight against species extinction. The wealth of information now available means conservationists and policy makers can more readily identify high-risk species, and implement change to save them before it’s too late.
“Online databases, smartphone apps, crowd-sourcing and new hardware devices are making it easier to collect data on species,” said Stuart L. Pimm, Doris Duke Professor of Conservation Ecology at Duke.
“When combined with data on land-use change and the species observations of millions of amateur citizen scientists, technology is increasingly allowing scientists and policymakers to more closely monitor the planet’s biodiversity and threats to it.”
Two databases which have been instrumental in their research are Red List of Threatened Species and Protected Planet. They’ve also gathered data from mapping tools such as SavingSpecies and biodiversitymapping.org. Pimm explains that the value of these tools is in allowing researchers to spot patterns and correlations, and hopefully enable conservation efforts to be more focused. “We now know that most land-based species have small geographical ranges — smaller than the U.S. state of Delaware — and are geographically concentrated. The same pattern seems to hold for marine life, according to new data we are reviewing. Species with small ranges are disproportionately likely to go extinct,” Pimm said. “This knowledge offers the hope that we can concentrate our conservation efforts on critical places around the planet.”
But even these resources are limited- the Red List, for instance, currently only lists information on 70,000 species, and investment is essential to help it gather data on more.
Of course, knowledge is only the first step. In order to do something about the extinction rates, this awareness has to be converted into action, and into systemic changes in the way we approach our environment.
What are APIs?
With everything moving to the cloud and millions of mobile devices coming online every year, the need for data centers has never been as acute. And making sure everything within a data center works seamlessly is critical.
That’s where APIs and Dev APIs come in—they’re the keys to building bespoke software tools and apps precisely for an operator’s needs, instead of using a subset of standard functionalities available in a network management suite.
APIs include all the required building blocks, so it’s easier to develop an application.
Some APIs were defined for development purposes to help create a wide scope of network management applications such as REpresentational State Transfer (REST), REST CONFiguration (RESTCONF), NETwork CONFiguration (NETCONF), OpenFlow, Google Remote Procedure Call (gRPC) and many others.
APIs are a set of routines, protocols, and tools for building software applications that specify how software components and services shall interact.
APIs provide the following advantages:
- Faster, easier, and multi-platform/multi-vendor application development: Network Element (NE) management and configurations aren’t bound to NE and vendor-specific command line interfaces. DevOps teams can create software applications that can interact with different NE types that operate on different network layers.
- Easy integration with IT tools: APIs are written in programming languages that are easy to integrate with programs, scripts, etc. Data models are also compatible with IT compute and storage building blocks.
- Efficient IT resource utilization: Development APIs like RESTCONF and NETCONF were built to be sessionless and very light on IT resources. Standard commands and procedures are sent to the NE, and a reply with requested data is sent back over short-lived interactivity sessions that do not require heavy IT resources. Just like when a user types the URL of a website in an Internet browser and the page loads after a short period of time, no other information is sent until the user clicks on a button or a link. Also, the use of APIs translates into capital expenditure savings by eliminating the use of servers and other compute devices between software applications and the network elements they manage.
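As a rough illustration of that request/reply pattern, the snippet below issues a single RESTCONF GET (using the RFC 8040 URL convention) for a device's interface configuration and prints the JSON reply. The device address, credentials, and data path are placeholders and will vary by vendor and data model.

```python
# Sketch: a single, sessionless RESTCONF request to a network element (placeholder details).
import requests

DEVICE = "https://198.51.100.10"          # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

response = requests.get(
    f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",   # standard IETF interfaces data model
    headers={"Accept": "application/yang-data+json"},
    auth=AUTH,
    verify=False,                          # lab-only: skip TLS verification for self-signed certs
    timeout=10,
)
response.raise_for_status()
print(response.json())                    # one request, one reply -- no long-lived session
```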
APIs are a key component in modern Data Center Interconnect (DCI) platforms, and data center operators can leverage this new operational model by developing their own software applications specific to their own needs. Cloud-based application development portals, such as Ciena’s Emulation Cloud™, are now available to unlock the full potential of open APIs, which simplifies integration activities. Developers and IT teams can innovate and develop new tools and applications without investing in IT infrastructure or stressing IT budgets and resources.
Much like smartphone application development environments, data center application development portals can be used to design any application that leverages open APIs, such as apps for enhanced network visualization, service automation and turn-ups, data center cluster management, or even proactive performance monitoring.
DCI technologies are evolving to deliver a set of highly scalable, user-friendly attributes that simplify the deployment experience and greatly improve productivity. With the Waveserver®, 6500 Packet-Optical Platform, and 8700 Packetwave® Platform, Ciena is the world leader in DCI. | <urn:uuid:9060fd95-5ae9-4432-9a92-1a761c75a4c6> | CC-MAIN-2022-40 | https://www.ciena.com/insights/what-is/What-Is-API.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00376.warc.gz | en | 0.909688 | 675 | 2.5625 | 3 |
Port-based Network Access Control
Port-based Network Access Control (PNAC), or 802.1X, authentication requires a client, an authenticator, and an authentication server (such as a FortiAuthenticator device).
The client is a device that wants to connect to the network. The authenticator is simply a network device, such as a wireless access point or switch. The authentication server is usually a host that supports the RADIUS and EAP protocols.
The client is not allowed access to the network until the client’s identity has been validated and authorized. Using 802.1X authentication, the client provides credentials to the authenticator, which the authenticator forwards to the authentication server for verification. If the authentication server determines that the credentials are valid, the client device is allowed access to the network.
The FortiAuthenticator unit supports several IEEE 802.1X EAP methods. These include authentication methods most commonly used in WiFi networks.
EAP is defined in RFC 3748 and updated in RFC 5247. EAP does not include security for the conversation between the client and the authentication server, so it is usually used within a secure tunnel technology such as TLS or TTLS, often carrying an inner method such as MS-CHAPv2.
The FortiAuthenticator unit supports the following EAP methods:
| Method | Server Auth | Client Auth | Encryption | Native OS Support |
| --- | --- | --- | --- | --- |
| PEAP (MSCHAPv2) | Yes | Yes | Yes | Windows XP, Vista, 7 |
| EAP-TTLS | Yes | No | Yes | Windows Vista, 7 |
| EAP-TLS | Yes | Yes | Yes | Windows (XP, 7), Mac OS X, iOS, … |
| EAP-GTC | Yes | Yes | Yes | None (external supplicant required) |
In addition to providing a channel for user authentication, EAP methods also provide certificate-based authentication of the server computer. EAP-TLS provides mutual authentication: the client and server authenticate each other using certificates. This is essential for authentication onto an enterprise network in a BYOD environment.
For successful EAP-TLS authentication, the user’s certificate must be bound to their account in Authentication > User Management > Local Users (see Local users on page 58), and the relevant RADIUS client in Authentication > RADIUS Service > Clients (see RADIUS service on page 91) must permit that user to authenticate. By default, all local users can authenticate, but it is possible to limit authentication to specified user groups.
The FortiAuthenticator unit and EAP
A FortiAuthenticator unit delivers all of the authentication features required for a successful EAP-TLS deployment, including:
- Certificate Management: create and revoke certificates as a CA. See Certificate Management on page 132.
- Simple Certificate Enrollment Protocol (SCEP) Server: exchange a Certificate Signing Request (CSR) and the resulting signed certificate, simplifying the process of obtaining a device certificate.
FortiAuthenticator unit configuration
To configure the FortiAuthenticator unit, you need to:
- Create a CA certificate for the FortiAuthenticator unit. See Certificate authorities on page 140.
Optionally, you can skip this step and use an external CA certificate instead. Go to Certificate Management > Certificate Authorities > Trusted CAs to import CA certificates. See Trusted CAs on page 147.
- Create a server certificate for the FortiAuthenticator unit, using the CA certificate you created or imported in the preceding step. See End entities on page 133.
- If you configure EAP-TTLS authentication, go to Authentication > RADIUS Service > EAP and configure the certificates for EAP. See Configuring certificates for EAP on page 102.
- If SCEP will be used:
- Configure an SMTP server to be used for sending SCEP notifications. Then configure the email service for the administrator to use the SMTP server that you created. See E-mail services on page 46.
- Go to Certificate Management > SCEP > General and select Enable SCEP. Then select the CA certificate that you created or imported in Step 1 in the Default CA field and select OK. See SCEP on page 147.
- Go to Authentication > Remote Auth. Servers > LDAP and add the remote LDAP server that contains your user database. See LDAP on page 88.
- Import users from the remote LDAP server. You can choose which specific users will be permitted to authenticate. See Remote users on page 65.
- Go to Authentication > RADIUS Service > Clients to add the FortiGate wireless controller as an authentication client. Be sure to select the type of EAP authentication you intend to use. See RADIUS service on page 91.
Configuring certificates for EAP
The FortiAuthenticator unit can authenticate itself to clients with a CA certificate.
- Go to Certificate Management > Certificate Authorities > Trusted CAs to import the certificate you will use. See Trusted CAs on page 147.
- Go to Authentication > RADIUS Service > EAP.
- Select the EAP server certificate from the EAP Server Certificate drop-down list.
- Select the trusted CAs and local CAs to use for EAP authentication from their requisite lists.
- Select OK to apply the settings.
Configuring switches and wireless controllers to use 802.1X authentication
The 802.1X configuration will be largely vendor dependent. The key requirements are:
- RADIUS Server IP: This is the IP address of the FortiAuthenticator.
- Key: The preshared secret configured in the FortiAuthenticator authentication client settings.
- Authentication Port: By default, FortiAuthenticator listens for authentication requests on port 1812.

Device self-enrollment
Device certificate self-enrollment is a method for local and remote users to obtain certificates for their devices. It is primarily used in enabling EAP-TLS for BYOD. For example:
- A user brings their tablet to a BYOD organization.
- They log in to the FortiAuthenticator unit and create a certificate for the device.
- With their certificate, username, and password they can authenticate to gain access to the wireless network.
- Without the certificate, they are unable to access the network.
To enable device self-enrollment and adjust self-enrollment settings, go to Authentication > Self-service Portal > Device Self-enrollment and select Enable userdevice certificate self-enrollment.
| Setting | Description |
| --- | --- |
| SCEP enrollment template | Select a SCEP enrollment template from the drop-down list. SCEP can be configured in Certificate Management > SCEP. See SCEP on page 147 for more information. |
| Max. devices | Set the maximum number of devices that a user can self-enroll. |
| Key size | Select the key size for self-enrolled certificates (1024, 2048, or 4096 bits). iOS devices only support two key sizes: 1024 and 2048. |
| Enable self-enrollment for Smart Card certificate | Select to enable self-enrollment for smart card certificates. This requires that a DNS domain name be configured, as it is used in the CRL Distribution Points (CDPs) certificate extension. |
Select OK to apply any changes you have made.

Non-compliant devices
802.1X methods require interactive entry of user credentials to prove a user’s identity before allowing them access to the network. This is not possible for non-interactive devices, such as printers. MAC Authentication Bypass is supported to allow non-802.1X compliant devices to be identified and accepted onto the network using their MAC address as authentication.
This feature is only for 802.1X MAC Authentication Bypass. FortiGate Captive Portal MAC Authentication is supported by configuring the MAC address as a standard user, with the MAC address as both the username and password, and not by entering it in the MAC Devices section.
Multiple MAC devices can be imported in bulk from a CSV file. The first column of the CSV file contains the device names (maximum of 50 characters), and the second column contains the corresponding MAC addresses (0123456789AB or 01:23:45:67:89:AB).
To configure MAC-based authentication for a device:
- Go to Authentication > User Management > MAC Devices. The MAC device list will be shown.
- If you are adding a new device, select Create New to open the Create New MAC-based Authentication Device
If you are editing an already existing device, select the device from the device list.
- Enter the device name in the Name field, and enter the device’s MAC address in the MAC address
- Select OK to apply your changes.
To import MAC devices:
- In the MAC device list, select Import.
- Select Browse to locate the CSV file on your computer.
- Select OK to import the list.
The import will fail if the maximum number of MAC devices has already been reached, or if any of the information contained within the file does not conform, for example if the device name too long, or there is an incorrectly formatted MAC address.
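A small sketch of pre-checking such a CSV before import is shown below; it enforces the 50-character name limit and the two accepted MAC address formats described above. The file name and error handling are illustrative only.

```python
# Sketch: validate a MAC-device CSV (name, MAC address) before importing it.
import csv
import re

MAC_PATTERN = re.compile(r"^([0-9A-Fa-f]{12}|([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2})$")

def validate_mac_csv(path: str) -> list:
    errors = []
    with open(path, newline="") as f:
        for line_no, row in enumerate(csv.reader(f), start=1):
            if len(row) < 2:
                errors.append(f"line {line_no}: expected 'name,mac'")
                continue
            name, mac = row[0].strip(), row[1].strip()
            if len(name) > 50:
                errors.append(f"line {line_no}: device name longer than 50 characters")
            if not MAC_PATTERN.match(mac):
                errors.append(f"line {line_no}: '{mac}' is not 0123456789AB or 01:23:45:67:89:AB")
    return errors

for problem in validate_mac_csv("mac_devices.csv"):   # placeholder file name
    print(problem)
```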
Researchers Take Step Toward Building Fault-Tolerant Quantum Computer
(Phys.org) Joint Army- and Air Force-funded researchers have taken a step toward building a fault-tolerant quantum computer, which could provide enhanced data processing capabilities.
Researchers at University of Massachusetts Amherst identified a way to protect quantum information from a common error source in superconducting systems, one of the leading platforms for the realization of large-scale quantum computers.
ARO is an element of the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. AFOSR supports basic research for the Air Force and Space Force as part of the Air Force Research Laboratory.
“This is a very exciting accomplishment not only because of the fundamental error correction concept the team was able to demonstrate, but also because the results suggest this overall approach may be amenable to implementations with high resource efficiency,” said Dr. Sara Gamble, quantum information science program manager at ARO. “Efficiency is increasingly important as quantum computation systems grow in size to the scales we’ll need for Army-relevant applications.”
The researchers’ experiment achieves passive quantum error correction by tailoring the friction, or dissipation, experienced by the qubit. Because friction is commonly considered the nemesis of quantum coherence, this result may appear surprising. The trick is that the dissipation has to be designed specifically in a quantum manner.
“Although our experiment is still a rather rudimentary demonstration, we have finally fulfilled this counterintuitive theoretical possibility of dissipative QEC,” said Dr. Chen Wang, University of Massachusetts Amherst physicist. “This experiment raises the outlook of potentially building a useful fault-tolerant quantum computer in the mid to long run.” | <urn:uuid:908a006e-17c8-4d9e-a971-e0489201ae7d> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/researchers-take-step-toward-building-fault-tolerant-quantum-computer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00376.warc.gz | en | 0.906733 | 365 | 2.671875 | 3 |
One of the primary disasters that a data center can face is a fire, which is why fire suppression systems are critical for data center operations. Fire suppression standards are developed by the National Fire Protection Association (NFPA), which has 250 Technical Committees with over 8,000 volunteers. The NFPA publishes over 300 codes and standards to minimize the risk and impact of fire, including guidance on how to properly install and maintain the various types of suppression systems. Selecting the right fire suppression system can be vital in protecting the irreplaceable assets within your operation.
There are several types of fire suppression systems. The first and largest umbrella is Fire Sprinkler Systems, which includes wet pipe, wet pipe antifreeze, dry pipe, pre-action, deluge, electronic, foam water sprinkler, water spray, and water mist systems. There are also Chemical Agent Systems, which include wet chemical and dry chemical systems. Other types of fire suppression systems include gaseous agents, fully automatic suppression systems, fully automatic vehicle fire suppression systems, manual vehicle fire suppression systems, and external water spray systems.
The largest of these umbrellas is the Fire Sprinkler system, a method of active fire protection. It usually consists of a water supply that is released at a specific pressure through a piping system. These systems are the most prevalent and are used worldwide.
Data centers are filled with computer servers that require detailed attention. Because these servers are running 24/7, the server racks have the possibility of overheating and even catching on fire. There have recently been several data center fires that have brought to light how data centers are being managed.
Because data centers are packed with server racks, they need a fire suppression system that won’t destroy the equipment. Data centers need a specific type of fire suppression. This is why there are a couple of fire suppression systems that are ideal for data center use including a dry pipe fire sprinkler system and a clean agent fire suppression system.
A dry pipe sprinkler system can be beneficial for a data center because pipes are filled with pressurized air or nitrogen instead of water. This air holds a remote valve in a closed position. This dry pipe valve stops water from entering the pipe until a fire triggers at least one of the sprinklers to activate. After this happens, air escapes, and the valve releases. Only then does water enter the pipe running through the sprinklers onto the fire.
One of the main advantages of a dry pipe fire sprinkler system is pipes won’t freeze due to extremely cold weather, which often happens to other traditional sprinkler systems where the water is already inside of the pipes. This can also be advantageous for another reason. If a traditional sprinkler system gets damaged there is a risk of water damage. This is important for data centers to keep in mind due to the server racks.
These systems, however, are complex and need proper maintenance, and the cost of installation and upkeep can be higher. Because a dry pipe system is limited to a maximum of 750 gallons, it is also designed for average-sized data centers and may not be ideal for hyperscale facilities.
With this being said, dry pipe fire sprinkler systems are one of the best options for data centers and other operations housing electronic equipment. The sprinkler pipes are filled with air instead of water so the servers cannot get wet unless there is a fire present.
Another fire suppression system ideal for data center use is a clean agent system such as the FM-200. Fires are made up of heat, oxygen, and a fuel source; eliminating one of these elements will extinguish a fire and stop its spread. The FM-200 uses a chemical agent known as a hydrofluorocarbon (HFC), a colorless, compressed, liquefied gas that removes heat and thereby extinguishes the fire. The gas is low in toxicity and does not pose acute or long-term dangers. Following the manufacturer’s guidelines ensures safety.
This type of fire suppression system is safe around people and all occupied spaces, and won’t damage sensitive equipment like servers, electronics, and other machinery. It also won’t leave any residual substances, and there is no clean-up required after the incident. The FM-200 is also more cost-efficient than other similar clean agent systems.
Clean agent fire suppression systems cause no water damage and don’t have any harmful residue. It can be used for a server room in a data center or an exhibit in a museum. These systems won’t damage any of these assets like traditional water or foam fire extinguishers can. It is also non-toxic and environmentally friendly. The gas that is used is usually a combination of nitrogen, argon, and carbon dioxide.
There are several variations of clean agent fire suppression systems. 3M Novec Fire Protection Fluid is one of the most eco-friendly options available and extinguishes fires with minimal damage and cleanup. The ECARO-25 is another clean agent system that uses 30% less chemical agent, making it more cost-effective for businesses. PROINERT is another option that operates at a higher pressure than other chemical agents and is stored in a gaseous rather than liquid state. These systems can be good options for data centers and other facilities with sensitive equipment.
Various aspects can cause a data center fire, which is why data center managers need to be thoughtful about their fire suppression systems. If there is a false alarm and a traditional sprinkler system goes off all of the servers could potentially be damaged. A clean agent suppression system is designed to protect irreplaceable assets and the people within the building. Clean agent systems are ideal for protecting data center servers, electronics, archives, and even artwork.
If you’re looking for a data center service, make sure to research the particular type of fire suppression system they are offering. There is a risk involved in storing your sensitive data within a data center, but a trusted data center provider can make sure your needs are taken care of. | <urn:uuid:da38f68b-3631-4b7c-b000-f10eb15d8226> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/different-fire-supression-systems-in-data-centers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00576.warc.gz | en | 0.925067 | 1,253 | 2.875 | 3 |
What is Spear Phishing?
Spear Phishing Definition
Spear phishing is a cyberattack method that hackers use to steal sensitive information or install malware on the devices of specific victims. Spear-phishing attacks are highly targeted, hugely effective, and difficult to prevent.
Hackers use spear-phishing attacks in an attempt to steal sensitive data, such as account details or financial information, from their targets. An attack requires significant research, which often involves acquiring personal information about the victim. This is typically done through accessing social media accounts to discover information like their name and email address, who their friends are, their hometown, employer, recent purchase history, and locations they visit. Attackers then disguise themselves as someone their victim trusts, usually a friend or colleague, and attempt to acquire sensitive information via email or instant messaging tools.
The threat of spear phishing is highlighted by the fact that 88% of organizations around the world experienced a spear-phishing attack in 2019, according to Proofpoint’s State of the Phish report. Of those organizations, 55% suffered a successful spear-phishing attack, while 65% of U.S. organizations were victims of spear phishing.
Spear Phishing vs. Phishing
A common spear-phishing definition used throughout the cybersecurity industry is a targeted attack method hackers employ to steal information or compromise the device of a specific user. Spear-phishing messages are addressed directly to the victim to convince them that they are familiar with the sender. The attacks require a lot of thought and planning to achieve the hacker’s goal.
Phishing is a broad term for attacks sent to multiple people in a bid to ensnare as many victims as possible. Phishing attacks involve a spoofed email that purports to be from a genuine sender or organization. The message contains a link that, when recipients click on it, prompts them to enter their personal information and then downloads malware onto their device.
The key difference between these two attack methods is spear-phishing attackers go after a specific individual, whereas phishing takes a blanket approach targeting multiple victims. Spear-phishing attackers methodically target a victim to use them as a way into an organization or for stealing information, while a phishing actor does not bother who their target is. They just want to steal as much information as possible or cause damage.
Spear phishing requires more preparation and time to achieve success than a phishing attack. That is because spear-phishing attackers attempt to obtain vast amounts of personal information about their victims. They want to ensure their emails look as legitimate as possible to increase the chances of fooling their targets. The highly personalized nature of spear-phishing attacks makes them more difficult to identify and prevent than widescale phishing attacks.
Spear Phishing and Whaling: The Differences and Similarities
Whaling is a form of spear phishing that specifically goes after high-level-executive target victims. It uses the same approach as regular spear phishing, in that the attacker purports to be an individual the recipient knows or trusts. However, whaling often requires even more time and investment in researching and crafting highly targeted messages than spear phishing.
A whaling attack usually targets people with direct access to financial or payroll information or are responsible for making payments. The attacker does the same type of research they would do for a spear-phishing attack to compose a message that appears to be from a trusted colleague. This will likely be the CEO or individual of similar reputation within the organization, but they could also pretend to be a potential supplier. The attacker then sends a message that coerces the victim into sharing financial information or even making payments.
Cyber criminals are willing to put in this time and research as the high-level executives they target are more likely to fall victim to these types of attacks than other employees. This is because executives such as CEOs are often under more pressure, face more time-critical tasks than other employees, and are more likely to underestimate the security risk.
Whaling attacks have also been used to target high-profile individuals, such as politicians and celebrities, which make them vastly lucrative to attackers.
Spear Phishing Prevention Best Practices
While spear phishing is a highly effective method for cyber criminals to maliciously obtain personal information, steal money, and hack organizations, there are ways for businesses and people alike to defend themselves from these attacks.
For example, tools like antivirus software, malware detection, and spam filters enable businesses to mitigate the threat of spear phishing. Businesses should educate employees and run spear-phishing simulations to help users become more aware of the risks and telltale signs of malicious attacks. They should also have an established process in place for employees to report suspicious emails to their IT and security teams.
Five Tips to Avoid a Spear Phishing Attack
- Keep software updated: Wherever possible, it is vital for organizations to ensure they enable automatic updates on software. Doing so protects them from the latest security attacks. It also ensures email clients, security tools, and web browsers have the best possible chance of identifying spear-phishing attacks and minimizing the potential damage. Also, ensure that a data protection program and data loss prevention technology are in place at the organization to protect against data theft and unauthorized access.
- Minimize password usage: Passwords are a common target of spear-phishing attacks, and it can be devastating if they get into the wrong hands. No password, or iteration of a similar password, should ever be reused on another account. If an attacker gains access to one, then they gain access to all. Password manager tools can be useful for keeping track of various credentials and making codes as strong and complex as possible. But strengthening security to prevent spear-phishing attempts is reliant on removing password usage wherever possible.
- Deploy multi-factor authentication: Given the risk of relying on passwords, two-factor or even multi-factor authentication is now crucial for all organizations and online services. This adds an extra layer of security on top of simply logging in to a service with a username and a password. It can include information that a person knows, such as their first school or mother’s maiden name, something they have, such as a unique code sent to an authentication app, or something they are, like their fingerprint.
- Educate your employees: An educated, security-conscious workforce is one of the best ways to prevent spear-phishing attacks. It is important that every employee in an organization knows how to spot sophisticated phishing emails, recognizes unusual hyperlinks and email domains, and will not be fooled by unusual requests to share information (a simple automated check for mismatched link text and destinations is sketched after this list). A reliable way of avoiding malicious links being clicked is to advise employees to go directly to websites rather than following any links from any email message. This advice should be practiced on people's personal email links and social media accounts, not just in the work environment.
- Use common sense: A big part of spear-phishing avoidance boils down to people using common sense. For example, real businesses never send emails asking people for their usernames and passwords or access codes. People need to question the validity of any email that asks them to share personal information. They should never share financial or payroll information over email or online without speaking to their trusted contact first. They should also be careful about clicking attachments or links in emails. It is likewise important not to make personal information available online and ensure there are privacy settings limiting what people can see.
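As a small illustration of the kind of automated check that can back up this training, the sketch below flags links whose visible text names a different domain than the underlying URL, a common spear-phishing trick. It is a simplified, hypothetical example rather than a Fortinet feature, and the sample message is invented for demonstration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body):
    """Flag links whose visible text names a different domain than the real target."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        if " " in text or "." not in text:
            continue  # visible text does not look like a URL, nothing to compare
        href_domain = urlparse(href).netloc.lower()
        text_domain = urlparse(text if "://" in text else "http://" + text).netloc.lower()
        if href_domain and text_domain and href_domain != text_domain:
            flagged.append((text, href))
    return flagged

# Invented sample message, for demonstration only.
sample = '<p>Please verify your account at <a href="http://login.example-attacker.net">www.mybank.com</a></p>'
print(suspicious_links(sample))
```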
How Can Fortinet Help?
Fortinet provides industry-leading solutions that protect organizations from the highly targeted, meticulously researched, and sophisticated nature of spear-phishing attacks. Fortinet FortiSandbox confines suspicious files or documents to an isolated environment, away from devices, networks, and users. In this environment, the sandbox analyzes behavior for malicious intent then issues an alert and threat intelligence information to prevent an attack.
Fortinet also protects against spear phishing through its Secure Web Gateway (SWG). Fortinet SWG safeguards businesses from internet-based threats without affecting end-user experiences. FortiMail, a comprehensive, top-rated email security solution, prevents phished messages from reaching employees' inboxes.
Aside from the above security tools, training employees on how to recognize and report suspicious emails is necessary to prevent spear-phishing attacks. Organizations must ensure they practice cybersecurity hygiene to stop attackers from infecting machines and gaining access to their networks.
What is Spear Phishing vs. Phishing?
Spear phishing and phishing are two distinct cyberattack methods. Spear phishing is a targeted technique that aims to steal information or place malware on the victim's device, whereas phishing is a broader attack method targeting multiple people. Both techniques involve emails that purport to be from a trusted source to fool recipients into handing over sensitive information or downloading malware.
What are the Characteristics of Spear Phishing?
Spear phishing is a highly targeted cyberattack method that is very effective and difficult for businesses to prevent. The method requires significant research on the part of hackers, who need to acquire personal information about their victims. They then use information like the victim's name, email address, friends, hometown, place of work, and geolocation to disguise themselves as a person the victim trusts.
What Protects Users from Spear Phishing?
Traditional security solutions arm businesses with protection against spear phishing, but attacks are becoming increasingly difficult to detect. User education is crucial to increasing awareness of sophisticated phishing emails and recognizing unusual hyperlinks, email domains, and unusual requests for information-sharing. Businesses must also implement processes that limit access to sensitive information and contain the damage a successful attack can cause.
What is Clone Phishing?
Clone phishing is a form of spear-phishing attack. Hackers mimic a genuine email message using an email address that looks valid but contains a malicious attachment or hyperlink that leads to a cloned website with a spoofed domain. The attackers’ goal is for the victim to enter sensitive information on the fake website.
Discover more information about spear phishing and how Fortinet can help your business recognize and prevent modern cyber scams. | <urn:uuid:62403082-77d0-425e-98b1-670392e4dbc0> | CC-MAIN-2022-40 | https://www.fortinet.com/cn/resources/cyberglossary/spear-phishing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00576.warc.gz | en | 0.942519 | 2,069 | 3.4375 | 3 |
Cryptocurrency is a digital currency that is not issued by any centralized authority. There are no physical bills, and the money is secured by cryptography. The virtual currency is based on blockchain technology, which ensures the integrity of transactions. Blockchain helps in verifying transactions to prevent double spending, as there are no third-party entities such as financial institutions. Digital currency is relatively new – just a decade old. Bitcoin, the first cryptocurrency, was created in 2008 by an unknown person or group of people who used the pseudonym Satoshi Nakamoto. Due to their decentralized nature, cryptocurrencies are not subject to government interference. One can send and/or receive funds without providing personally identifiable information. Hence, cryptocurrencies offer a high degree of privacy; however, absolute anonymity may not be possible.
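To make the integrity point concrete, the toy sketch below models a hash-chained ledger in plain Python: each block commits to the hash of the previous block, so tampering with any earlier transaction breaks every later link. This is only a teaching aid; real cryptocurrencies add signatures, consensus, and proof-of-work or similar mechanisms on top.

```python
import hashlib
import json

def block_hash(block):
    """Hash the canonical JSON form of a block."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain, transactions):
    previous_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": previous_hash, "transactions": transactions})

def verify_chain(chain):
    """Return True if every block still commits to its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(ledger))                    # True

ledger[0]["transactions"][0]["amount"] = 500   # tamper with history
print(verify_chain(ledger))                    # False: the chain no longer links up
```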
Cryptocurrency offers more privacy than traditional currencies. Bitcoin, for instance, uses pseudonymous addresses rather than real names to identify users. Each user is assigned a public and a private key; the public key (or a hash of it) forms the address used to receive payment, while the private key is used to create the user's signature. A valid signature verifies that a transaction was authorized by the holder of the private key. Since blockchain technology uses a distributed ledger, transactions are recorded on multiple computers. Hence, they are permanent public records. With a public database of transactions, analytical tools can be used to identify the source and destination of cryptocurrencies.
Moreover, exchanging traditional currency for cryptocurrency may require one to divulge personal information to the exchange firm. In case of a security breach involving the exchange firm, personally identifiable information of cryptocurrency users could be exposed. Although organizations mostly implement advanced cyber defense systems, hackers could use more sophisticated tools to compromise a system. For example, in 2014, Mt. Gox, a leading bitcoin exchange, was infiltrated by cybercriminals who stole 850,000 bitcoins. Government agencies, such as regulatory authorities, could also force an exchange to reveal information regarding its customers. Therefore, cryptocurrencies can be traced to specific individuals through the analysis of the currency exchange records.
One way of increasing anonymity is by maintaining separate identities using multiple wallets. With several wallets, one can also use multi-input. Multi-input is the use of several addresses for payment. However, as countries develop legal frameworks to regulate the use of cryptocurrencies, it might not be possible to have multiple identities in the near future. For instance, South Korea is enforcing stricter regulation of cryptocurrency; users will only be able to deposit if the name on the wallet matches the name on their bank accounts. This requirement will lower the degree of anonymity. Therefore, even with multiple inputs, transactions can be traced to the actual sender/receiver.
The high degree of anonymity provided by cryptocurrencies favors illicit activities such as money laundering, tax evasion, transfer of money to terror organizations, and demands for ransom by criminals. For instance, the WannaCry malware, which infected so many computers in 2017, required victims to pay a ransom in bitcoins. Preventing these crimes necessitates increased regulatory measures. The ability to transact anonymously is a crucial attribute of digital currencies. Nevertheless, the need to combat crime supersedes people’s desire to maintain anonymity in their online transactions. Therefore, cryptocurrency may face stringent regulations in the future.
The number of blockchain technology users has grown tremendously within the last decade. As of September 2019, there were more than 42 million users of blockchain wallet. This figure is a significant number, considering that the technology is barely ten years old. After bitcoin pioneered the market, several other cryptocurrencies have been developed; they include Ethereum, Litecoin, Monero, Dash, and Ripple. In the near future, the number of users of cryptocurrency is expected to increase as people become more familiar with the technology.
The complexity and volatility of cryptocurrencies is a major challenge that is likely to hinder people’s acceptance of virtual currency in the future. Unlike the conventional currencies that are controlled by a central bank, digital currencies are highly susceptible to market price changes. This means that an investor could lose a considerable amount of his/her investment if the value of a digital asset plunges. The other concern is the difficulty of use. Since cryptocurrency is based on a distributed ledger, the user should have knowledge of information technology. Thus, enhancing the ease of use would be needed in the future to increase the adoption of virtual currency.
Cryptocurrency has attracted a significant number of users over the few years of its existence. Digital currency provides greater privacy than conventional currency. However, perfect anonymity may not be possible to guarantee. As criminals leverage the privacy provided by virtual currency, governments are beginning to establish regulations that may reduce the level of anonymity. The difficulty of use and the volatility also hinder the acceptance of digital currency. Regardless of these challenges, more people are likely to adopt cryptocurrency in the future.
Antonopoulos, A. M. (2015). Mastering Bitcoin: Unlocking digital cryptocurrencies. Sebastopol, CA: O’Reilly Media, Inc.
Newby, T. G., & Razmazma, A. (2019, April 7). An untraceable currency? Bitcoin privacy concerns. Retrieved from https://www.fintechweekly.com/magazine/articles/an-untraceable-currency-bitcoin-privacy-concerns
Smith-Spark, L. (2017, May 13). Global ransomware attack: 5 things to know. CNN. Retrieved from https://edition.cnn.com/2017/05/13/world/ransomware-attack-things-to-know/index.html
Szmigiera, M. (2019, October 7). Number of blockchain wallet users worldwide from 3rd quarter 2016 to 3rd quarter 2019. Statista. Retrieved from https://www.statista.com/statistics/647374/worldwide-blockchain-wallet-users | <urn:uuid:56abefc3-d2e5-47d6-9015-c6299c39f081> | CC-MAIN-2022-40 | https://cbisecure.com/insights/how-anonymous-is-cryptocurrency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00576.warc.gz | en | 0.931454 | 1,194 | 3.609375 | 4 |
The need for the data center in our day and age is unavoidable, but the toll this dependence takes on global environments is unmistakable. In 2016, it was reported that worldwide data centers consumed around 3 percent of global electricity supply and accounted for about 2 percent of total greenhouse gas emissions — amounting to the same carbon footprint as the airline industry.
When put in the context of a strong growth in demand for data center services (Marketsandmarkets states that the data center colocation market size is expected to grow from $31.5 billion in 2017 to $62.3 billion by 2022) and an expanding amount of global data, this environmental issue grows bigger and more urgent. With businesses and individuals alike continuing to generate astonishing amounts of data and depending even more on the storage, access and transport of that data, the carbon footprint of data centers will continue to trend upwards if not disrupted.
The Great Environmental Pivot
Fortunately, action is being taken by the industry to redefine data centers for more ethical and efficient design and operation. Hyperscalers like Amazon, Google and Microsoft have committed to large-scale renewable energy initiatives, some even committing to achieving 100 percent renewables in a matter of years. On a more widespread scale, operators everywhere are shifting their sights to strategies that make cooling more efficient, reduce power consumption and more.
While there’s no single solution to unlocking a comprehensively green data center environment, some of the solutions that have emerged span cooling designs, market location, energy sourcing and more. The new breed of green data centers will look to wind, hydroelectric or solar energy solutions as clean sources of ever-important power. When it comes to cooling — one of the biggest power draws in the data center, especially as higher densities create more heat and demand more from temperature regulation measures — zero water cooling systems or water reclamation techniques have helped diminish the strain on water supplies. Optimized airflow with innovative real-time monitoring smart sensors is also helping to create further efficiencies in the internal data center environment. Outside the walls of the facility (or before the walls are even constructed), new criteria for siting data centers are targeting large bodies of water that provide an abundance of cooling resources to draw from.
Doing Our Part
Still, pioneering truly green solutions means looking at data centers and their builds through a more creative, critical eye. At DP Facilities, one of our core goals is to be a good community steward, developing ethical and economically beneficial projects in locations to create jobs and financial opportunities, as well as help the environment. To support this, we base our sustainability model on our Four R’s: Reduce (consumption of energy and water), Restore (exhausted marginal land), Replace (fossil fuels with green energies) and Reclaim (offer economic development in areas that need it most).
For our facilities, this means seeking out what we call ‘scarred’ land — space that can’t be otherwise used for builds like schools, but can be effectively repurposed into a data center site. For instance, our world-class Hannibal, Ohio, facility sits on the location of the former Ormet aluminum smelter site in Monroe County, which closed in 2013. Beyond repurposing the land to ensure its potential isn’t wasted, the site also provides green solutions due to its specialized use of Fortress Transportation and Infrastructure Investors LLC’s (FTAI) 485 Megawatt natural gas-fired Long Ridge Energy Terminal property. This new gas-fired, combined-cycle power plant is designed to be one of the most energy efficient in the world. At our Mineral Gap Data Center facility in Southwest Virginia (the heart of coal country), we’re also building a solar farm, supplying clean power for our facility and participating in the region’s transformation from a resource-extraction economy to a renewably powered ecosystem. This build represents the first large-scale solar farm in Southwest Virginia. Overall, the DP Facilities model facilitates real consumption reductions and enhanced sustainability by focusing on sites that can be restored to support surrounding communities and provide direct access to renewable energy. Combined with highly efficient designs, this enables us to hit all of our four core sustainability pillars.
At the end of the day, transparency is key for sustainable and environmentally ethical accountability for the operator. On the customer side, driving potential tenants to value sustainability is all about helping them understand the benefits on both the global scale and within their own business scope. In truth, sustainability doesn’t just help humankind, it helps the bottom line — and it’s time to make eco-friendly initiatives the gold standard for global data centers. | <urn:uuid:88a6b0ae-f9a0-4534-be06-0d9992a103ca> | CC-MAIN-2022-40 | https://www.dpfacilities.com/blog/greener-pastures-sustainability-data-center | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00576.warc.gz | en | 0.926498 | 951 | 2.59375 | 3 |
The COVID-19 pandemic accelerated technology development and implementation across every industry, but of course the healthcare industry, out of necessity, is at the forefront of digital transformation. Non-contact sensor tech has taken center stage as healthcare providers seek to deliver care safely, and they’ll continue to play a vital role in the war against COVID-19 and other diseases in the future. Let’s take a look at what a few of these advances in healthcare technology solutions look like and how they are being used to change patient care as we know it.
#1 - Diagnoses via audio analysis
Pneumonia kills nearly 2 million children annually. Diagnosing the disease early, along with the right care, can help prevent many of these casualties. This can be challenging in remote populations where modern imaging and labs aren’t available, to say nothing of qualified medical staff necessary to treat patients safely. New research has successfully tested a contactless method for accurately diagnosing pneumonia that could be a game changer for remote populations. It works by recording the coughs of patients suffering from an acute respiratory illness (e.g. asthma, bronchitis, pneumonia). Those audio files are then analyzed by a logistic regression classifier, which has been trained to identify pneumonia based on previous audio of confirmed cases. So far, the method can distinguish pneumonia from other illnesses at a sensitivity rate of 94%.
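For readers curious what such a classifier looks like in code, here is a heavily simplified sketch using scikit-learn. It assumes acoustic features (for example, MFCC summaries per cough recording) have already been extracted; the data here is randomly generated and the shapes and names are invented, so it only illustrates the general workflow, not the published study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Placeholder data: one row of pre-computed acoustic features per cough recording,
# and a label of 1 for confirmed pneumonia, 0 for another respiratory illness.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 20))     # e.g., MFCC means/variances per recording
labels = rng.integers(0, 2, size=500)     # stand-in labels for illustration only

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, stratify=labels, random_state=0
)

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, y_train)

predicted = classifier.predict(X_test)
# Sensitivity (recall on the pneumonia class) is the figure quoted in the article.
print("sensitivity:", recall_score(y_test, predicted))
```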
#2 - Cameras monitoring vitals
Knowing a patient’s heart rate, temperature and blood oxygenation couldn’t be more essential to providing adequate care, yet traditional monitoring methods are a mess of wires and beeps. Many times the devices themselves can be uncomfortable to wear, and there’s the most obvious issue of having to wear something at all. The good news is companies are beginning to bring to market optical systems that can accurately measure a patient’s heart rate, breathing rate, blood oxygenation, blood pressure and temperature. That’s a lot of wires that were just eliminated. These systems work by installing a camera overhead inside a patient’s room; the video is then analyzed in real time by machine learning software that interprets it and transforms it into readable vital signs.
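A toy version of the signal processing behind camera-based heart-rate estimation (often called remote photoplethysmography) is sketched below: average the green channel over the face region frame by frame, then find the dominant frequency within a plausible heart-rate band. Commercial systems add face tracking, motion compensation, and much more robust estimation; the frame rate and synthetic signal here are assumptions for illustration.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate beats per minute from per-frame mean green-channel intensity."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Only consider frequencies corresponding to 40-180 beats per minute.
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60

# Synthetic 30 fps "video": a 72 bpm pulse buried in noise, for demonstration only.
fps, seconds, bpm = 30, 20, 72
t = np.arange(fps * seconds) / fps
fake_signal = 0.5 * np.sin(2 * np.pi * (bpm / 60) * t) + np.random.normal(0, 0.2, t.size)
print(round(estimate_heart_rate(fake_signal, fps)))   # roughly 72
```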
#3 - Sensors aiding healthcare workers
Apart from patient monitoring, non-contact technology is working to make life easier and safer for healthcare workers. Automatic door opening is one key area, especially in highly restricted areas. New door sensor technology works together with mobile credential tech to detect a worker’s identity on their mobile device or fob and then promptly open and close a door. Video intercoms are now being installed in healthcare facilities that contain sensor technology that detects motion (like waving), so workers can initiate a video call. Security providers are also getting in on the action with touchless control panels that can be armed or disarmed by voice or other wireless credentials. Automated wellness kiosks are a great way to free up healthcare workers for more important tasks. These kiosks can take your temperature, display important information on large digital signage and dispense hand sanitizer.
For more information on how contactless sensor technology and advances in healthcare technology can help your customers, visit our IoT Marketplace at https://iot.ingrammicro.com
or contact us at email@example.com. | <urn:uuid:9d731435-e668-4dd8-9362-1cbbe6a375e6> | CC-MAIN-2022-40 | https://imaginenext.ingrammicro.com/iot/3-ways-non-contact-sensor-tech-is-changing-healthcare | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00576.warc.gz | en | 0.932798 | 680 | 2.859375 | 3 |
In the past decade, people have come to rely on smart devices such as phones and tablets, using them for work, communicating with friends and family, streaming shows, and more. What is rapidly becoming an even bigger presence in their lives, however, are automated devices such as smart thermostats, in-vehicle infotainment systems, smart home security systems, and smart medical devices that are controlled remotely or even run completely autonomously.
The growth of IoT brings risk
Considering 75 billion IoT devices are expected to be connected by 2025, each household will have dozens of connected devices, all communicating autonomously with servers that collect and analyze data. Meanwhile manufacturers, utilities, and other businesses will increasingly employ industrial IoT to make their operations more efficient and safer. But with this global glut of devices comes an ever-greater security risk of someone sending malicious commands to a device, or a rogue device wreaking havoc in a system or accessing services that it shouldn’t. Organizations need to protect the integrity of their business model by ensuring devices operate within their authorized context only.
Because devices can’t use usernames and passwords, they have to use a different mechanism when authenticating to services. The solution to keeping devices secure lies with using public key infrastructure (PKI). PKI creates trusted ecosystems and enables strong encryption of transmitted data while keeping devices safe from hacking attacks. With a variety of PKI services available, however, choosing the right one can be difficult. Here’s a look at some of the key questions you need to ask before selecting a PKI service provider.
What to look for when choosing a PKI service provider
How secure is their process?
Running a proper PKI service is a significant undertaking; it is much more complex than hosting a server with a few HSMs. Done properly, it requires physical and logical security to be deployed, as well as strict policy and vetting of staff. The data center holding the servers and HSMs needs to be a physically secured environment with access limited to authorized personnel only. Security measures might include guards, biometric authentication mechanisms for authorized individuals, and surveillance systems to monitor and record who enters and leaves the facility. Keys also need to be protected from insider threats, so providers should employ multi-custody protocols that require two or more people to be involved in order to complete a sensitive operation. In addition, a strong, secure and reliable disaster recovery process needs to be in place.
Can they help you navigate PKI for IoT?
Setting up a PKI is a daunting task; it is not just the infrastructure, hardware security modules, secured facilities, policies, and auditing, it is also the expertise required. Is your PKI service provider willing to help you define your specific infrastructure? Do they have a team of world-class PKI and security experts willing to assist you in defining a solution that meets your specific needs? Defining a device identity that works for you, not just today but also in the future, is a complex task, and many PKI vendors don't have a great deal of experience doing this.
Does it provide flexible key provisioning options?
The process of providing a device with an identity is referred to as provisioning. Devices go through various stages designed to fulfill different security and key provisioning requirements. Once manufactured, the device identities need to get from the manufacturing source to the devices and services. There are two main approaches to provisioning device identities: factory provisioning and cloud-based field provisioning.
Increasingly, organizations are concerned about untrusted factory environments, especially those run by third parties in low-cost geographies, where not all factory floor workers can be trusted to have access to sensitive keying material. With factory provisioning, the device identities are bound to the device in a factory during the manufacturing process. The primary reason to employ factory provisioning is to take advantage of secure hardware. Many modern chipsets have specialized hardware features such as one-time programmable memory (electrical fuses) and other on-chip storage which can be used to store keys securely.
With cloud-based field provisioning, the device is given some minimal identity at manufacturing time, but it does not receive a complete identity until it is installed by the end user in the field. This is required if the identity of the device cannot be completely known until it is deployed. For example, the IoT service provider may choose an OEM or chipset provider well after those devices have been manufactured. In order to participate in the IoT service’s trusted ecosystem, the device needs a more complex identity than it was initially given.
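To make the provisioning flow more tangible, the sketch below shows the device-side half of a field-provisioning step using the Python cryptography package: the device generates its own key pair and a certificate signing request (CSR) that a provisioning service would exchange for an operational certificate. It illustrates the general pattern only, not Intertrust's specific protocol, and the device names are placeholders.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography import x509
from cryptography.x509.oid import NameOID

# 1. The device generates its key pair locally; the private key never leaves the device
#    (on real hardware it would be created inside a secure element or TPM).
private_key = ec.generate_private_key(ec.SECP256R1())

# 2. Build a certificate signing request carrying the device's bootstrap identity.
#    The serial number and organization below are hypothetical placeholders.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "device-SN-000123"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example IoT Fleet"),
    ]))
    .sign(private_key, hashes.SHA256())
)

# 3. The PEM-encoded CSR is what gets sent to the provisioning service, which
#    authenticates the device (e.g., via its factory-installed bootstrap identity)
#    and returns an operational certificate chained to the fleet's CA.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
print(csr_pem.decode())
```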
How easy is it to scale?
The scale of IoT presents a variety of new challenges when it comes to taking devices to market. Manufacturers often aim to bring hundreds of thousands of devices to market at a time. And these numbers can go much higher when you factor in hardware revisions and device generations.
Each one of these devices has to be provisioned with a unique secure device identity before it is ready for consumers to buy off the shelves. To maintain an effective and trusted ecosystem, each device identity must be different to help define capabilities and permissions for each device as well as enable compromised devices to be shut out. This can mean that as a company grows, it runs the risk of outgrowing either its in-house PKI capacities or its third-party PKI service provider. While many organizations implementing PKI start small, as they continue to grow they will need something that meets their expanding requirements. There are different ways to manage and handle this scale—multiple root CAs, single root CA with a hierarchy of subordinate CAs, etc. Irrespective of the strategy, the basic objective here is to set things up correctly from the beginning, so that increasing needs can be easily addressed. It’s sensible for you to question if a PKI service provider can keep up with your future demand without delays, cost increases, or a drop in service availability.
With Intertrust PKI we’ve created a system that is built to grow as our clients do, allowing us to provision up to 10 million device identities a day. We’ve already provisioned over 1.5 billion IoT device identities around the world.
What is the track record of the PKI service provider?
In the field of trust and privacy management, longevity and experience indicate that a PKI service provider delivers what they promise and customers receive value from the service. With a well-established key provisioning service, you have the advantage of being able to research their performance and success with similar customers. If you serve an industry that requires compliance with strict regulations, such as medical devices, a solid reputation can be critical. While a newer PKI service provider may be perfectly satisfactory, when it comes to trust, experience and a proven track record are a plus.
How much is it going to cost?
Having an in-house PKI service can give an organization greater control, but also means that they have to maintain a department with the skills and expertise to monitor and manage it, rather than focusing on their core objectives of device creation and innovation. A third-party managed PKI service provider can replace an in-house operation, although the relationship, services, and scalability can differ depending on their capabilities. Calculating the costs of an in-house vs. managed PKI service is vital when pricing security into P&L projections and identifying potential synergies and savings.
Intertrust PKI is one of the leading PKI service providers, used by manufacturers across the world to ensure the security of their trusted ecosystems. We offer a full range of key provisioning services, such as mutual authentication, access control, and secure over-the-air updates to create an incredibly safe infrastructure that allows you to focus on what you do best.
Moreover, our service scales with ease and provides cost savings of 50% – 85% over an in-house PKI. To find out more about how Intertrust PKI can keep your devices secure at every stage of their lifecycle, get in touch with our team today.
About Prateek Panda
Prateek Panda is Director of Marketing at Intertrust Technologies and leads global marketing for Intertrust’s device identity solutions. His expertise in product marketing and product management stem from his experience as the founder of a cybersecurity company with products in the mobile application security space. | <urn:uuid:68663d8a-d772-4ed3-8aef-bf319ecc05c1> | CC-MAIN-2022-40 | https://www.intertrust.com/blog/what-to-look-for-when-choosing-a-pki-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00576.warc.gz | en | 0.949303 | 1,721 | 2.515625 | 3 |
Online criminals pose a danger to anyone with access to the internet. Some, however, are better prepared to combat online crime than others.
Just in the last six months, cyber-attacks have increased by 29% as threat actors continuously exploit the pandemic. Groups that employ ransomware tactics have entered a golden age, with the use of extortion growing at a breakneck pace of 93% in less than a year.
Nation-states around the world, however, do not stand idle. Multilateral efforts help push some ransomware gangs offline, while cybersecurity programs empower businesses and organizations to counteract threats looming on the dark side of the internet.
A fraud detection company, SEON, has taken a look at 100 countries to see how much their citizens need to worry about cybercrime. To get to the bottom of the issue, researchers analyzed data from numerous cybersecurity indices and indicators.
SEON cross-referenced rankings on national cybersecurity measures with rankings on online money laundering, risks internet users face, and strength of national legislation meant to prevent cybercrime. To rank the countries in an orderly fashion, researchers assigned each nation a score.
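SEON's exact weighting is not given here, but the sketch below shows one generic way such a composite safety score could be computed from normalized sub-indices; the indicator names, values, and equal weighting are assumptions for illustration only.

```python
def composite_score(indicators, weights=None):
    """Combine several 0-100 sub-indices into one cyber-safety score.

    Higher sub-index values are assumed to mean "safer"; indicators where a
    high value is bad (e.g., money-laundering risk) should be inverted first.
    """
    if weights is None:
        weights = {name: 1.0 for name in indicators}   # equal weighting by default
    total_weight = sum(weights[name] for name in indicators)
    return sum(indicators[name] * weights[name] for name in indicators) / total_weight

# Invented example values, not SEON's actual data.
denmark = {
    "cybersecurity_index": 93,
    "legislation_strength": 88,
    "user_risk_inverted": 90,        # 100 minus the share of users attacked
    "laundering_risk_inverted": 85,
}
print(round(composite_score(denmark), 1))
```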
The ranking has put Denmark on the top of the list of countries where cybersecurity is the strongest and citizens are most protected from online crime via technology and legislation. Germany took the silver medal, while the US took third place.
While Denmark might seem the odd one out in the group, Germany and the US are of particular interest to cybercriminals due to their wealth of businesses and highly connected populations. Preventing cybercrime with legislation and technology, therefore, is somewhat of a necessity.
On the opposite side of the spectrum, Myanmar was awarded the lowest score in terms of cybersecurity. According to the researchers, there's little to no legislation concerning online crime. Cambodia scarcely tops Myanmar and Honduras. The latter trails low in terms of all criteria, apart from the legislative side of cybersecurity.
Cyberattacks are increasing in scale, sophistication, and scope. The last 12 months were rife with major high-profile cyberattacks, such as the SolarWinds hack and attacks against the Colonial Pipeline, meat processing company JBS, and software firm Kaseya. Pundits talk of a ransomware gold rush, with the number of attacks increasing over 90% in the first half of 2021 alone.
An average data breach costs victims $4.24 million per incident, the highest in 17 years. By comparison, the average cost stood at $3.86 million per incident last year, putting recent results at a roughly 10% increase.
Reports show that people most vulnerable to cybercrime tend to be adults over 75 and younger adults. Criminals were taking advantage not only of the uncertainty caused by the pandemic but also the flood of new users to digital channels, who were especially susceptible to attack.
| <urn:uuid:6e350c38-77d0-425e-98b1-670392e4dbc0> | CC-MAIN-2022-40 | https://cybernews.com/news/should-you-worry-100-countries-ranked-by-cyber-safety/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00776.warc.gz | en | 0.947872 | 577 | 2.625 | 3 |
Cyberattacks continue to evolve, becoming ever more complicated to detect and respond to. For this reason the need to modernize how these threats are addressed should be a top priority. Adopting a risk-based approach to cybersecurity can help identify potential vulnerabilities and make strategic decisions based on the likelihood and impact of each vulnerability. There are some important questions that need to be asked when addressing the risks and trying to stay ahead of emerging threats.
The easier it is to monitor network activity, the faster a facility can respond once an attack is detected, which will reduce the impact of the attack. Therefore, one of the most important steps in protecting programmable logic controllers (PLCs) and programmable automation controllers (PACs) from security threats should begin even before an attack is detected.
Using network communication port monitoring will make checking for unexpected network protocols, connections, or communications types easier. While unexpected activity on a network may not end up being a threat, it should raise a red flag, and is always worth investigating.
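As a rough illustration of this kind of monitoring, the sketch below compares the ports a host is listening on against an expected baseline using the third-party psutil package and flags anything unexpected. The baseline is hypothetical, and a production deployment would typically watch switch, firewall, or IDS telemetry rather than a single host.

```python
import psutil

# Hypothetical baseline: ports this host is expected to have open.
EXPECTED_LISTENING_PORTS = {443, 502, 4840}   # e.g., HTTPS, Modbus/TCP, OPC UA

def unexpected_listeners():
    """Return (port, process name) pairs for listeners outside the baseline."""
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN or not conn.laddr:
            continue
        if conn.laddr.port in EXPECTED_LISTENING_PORTS:
            continue
        name = "unknown"
        if conn.pid:
            try:
                name = psutil.Process(conn.pid).name()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        findings.append((conn.laddr.port, name))
    return findings

for port, process in unexpected_listeners():
    print(f"unexpected listener on port {port} ({process}) - investigate")
```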
Most enterprises are aware that they should implement anti-malware software on HMI and SCADA servers, but it is just as critical to set up anti-malware software on every device that will connect to these control systems – including laptops, tablets, smartphones, or any other device that might share a network with the control systems because a compromised ancillary device may provide a hacker the perfect gateway to a system’s data.
When implemented across an entire facility, centralized anti-malware software can prevent, detect, and remove malicious software, making the monitoring process more effective and efficient.
Limiting system damage to PLCs and PACs
No matter how prepared a company is, security attacks and breaches can still happen. Therefore, it is important to not just try to prevent them from occurring, but also ensure that if they do occur, the damage is as minimal as possible.
One way to limit network damage is to have more than a single security control; implement a robust, tiered approach with security controls at many independent levels that an attacker must breach in order to truly compromise the entire system. Having the right cybersecurity defense in-depth strategy helps avoid safety issues and plant shutdowns.
Segmenting networks into logical zones helps to thwart internal threats, which, while less common, can often result in the most damage. Having separate zones – often described as "enhanced network segmentation" – is more challenging to implement and maintain compared to traditional network segmentation; however, it is considered one of the best ways to protect control assets.
At a minimum, facilities should ensure secure deployment with firewalls and segmentation to block unsolicited incoming traffic, as well as isolate networks to restrict data transfer to its intended locations. Use of advanced or application layer firewalls is a good approach to increase this capability.
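The sketch below shows one way to sanity-check observed traffic against the kind of zone-to-zone allowlist such segmentation implies. The zone names, subnets, and permitted flows are invented for illustration; in practice these rules live in the firewalls themselves, and a script like this is only a supplementary audit.

```python
import ipaddress

# Hypothetical zones and the flows that segmentation is meant to permit.
ZONES = {
    "enterprise": ipaddress.ip_network("10.10.0.0/16"),
    "dmz":        ipaddress.ip_network("10.20.0.0/16"),
    "control":    ipaddress.ip_network("10.30.0.0/16"),
}
ALLOWED_FLOWS = {
    ("enterprise", "dmz", 443),    # HMI web access via the DMZ
    ("dmz", "control", 4840),      # OPC UA from DMZ historians to controllers
}

def zone_of(ip):
    address = ipaddress.ip_address(ip)
    return next((name for name, net in ZONES.items() if address in net), "unknown")

def audit(flow_records):
    """Yield flows that cross zones in ways the segmentation policy forbids."""
    for src, dst, port in flow_records:
        src_zone, dst_zone = zone_of(src), zone_of(dst)
        if src_zone == dst_zone:
            continue                 # intra-zone traffic is out of scope here
        if (src_zone, dst_zone, port) not in ALLOWED_FLOWS:
            yield src, dst, port, src_zone, dst_zone

sample_flows = [("10.10.5.2", "10.30.1.7", 502)]   # enterprise host talking Modbus to a PLC
for violation in audit(sample_flows):
    print("policy violation:", violation)
```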
Another way to limit the effects of a breach is by utilizing redundancy or including backup components in a system, so that itm can continue to function in case of a component failure or security breach.
Finally, one of the most important ways to limit the impact of a security breach is establishing effective, well-practiced business continuity and recovery processes and policies, so that a breach can be dealt with before its impact has the chance to spread and so that future threats can be mitigated.
Locking down all unused communication ports and turning off all unused services are other simple steps that should be taken to reduce the surface area that can be attacked.
Facilities should work with vendors who have proven certifications for PLC and PAC systems that span the design and engineering technical security requirements for a control system. Certifications enable control system vendors to formally demonstrate that the control systems they produce comply with cybersecurity requirements.
Monitoring machine-to-machine communication within a facility is another critical step when striving to ensure an attack is not occurring. All communications should be done securely through protocols for industrial automation such as OPC UA, which offers robust security that consists of authentication and authorization, and encryption and data integrity. By monitoring the network communications, newly opened ports or protocols being used alert users to potential threats.
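For example, with the third-party python-opcua package a client can request a signed and encrypted session roughly as follows; the endpoint URL, node ID, and certificate file names are placeholders, and other OPC UA stacks expose equivalent settings.

```python
from opcua import Client

# Placeholder endpoint and credential files for illustration.
client = Client("opc.tcp://192.0.2.10:4840/freeopcua/server/")

# Require message signing and encryption with the server, using this client's
# certificate and private key (the server must trust the certificate).
client.set_security_string(
    "Basic256Sha256,SignAndEncrypt,client_certificate.pem,client_private_key.pem"
)

client.connect()
try:
    node = client.get_node("ns=2;i=2")   # placeholder node id
    print("value:", node.get_value())
finally:
    client.disconnect()
```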
Managing PLC user authentication
Unintentional behaviors can be one of the most critical threats to an organization. To lead change it is important to adopt best in class behaviors and help educate the workforce on steps to mitigate risk. For example, one of the biggest threats to security is password selection. In a world where some of the most common passwords are `password’ or `123456’, it cannot be stressed enough how important it is to instruct users to select strong passwords and to offer guidelines about how to do so.
Require user authentication between a client application(s) and a server to ensure that only authorized users are accessing the server. Multi-factor authentication and role-based access control are the best options if a system can support this level of security.
Isolating the PLC network
The biggest risk posed by remote network access is that it makes it possible for a hacker to gain deeper access to an organization from outside of it, and once they do, it becomes very challenging to prevent unplanned shutdowns, loss of control, data loss, etc. Businesses should also audit their PLC network to locate any obscure access vectors a hacker might use, and regularly monitor the access points. A company can implement multi-factor authentication, which requires a user to successfully present two or more pieces of evidence— or factors—to an authentication mechanism in order to be granted access to a device, application or information.
Two-factor authentication is a commonly used subset of multi-factor authentication. This method confirms a user’s claimed identity by using a combination of two of the following different factors: something they know, like a password; something they have, like a keycard or software token; or something they are, like presenting fingerprint or facial identification.
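The "something they have" factor is often a time-based one-time password (TOTP) generated by an authenticator app. The sketch below uses the third-party pyotp package to show what the server-side check of that second factor looks like; the enrollment details and account name are invented for illustration.

```python
import pyotp

# In practice this shared secret is generated per user during enrollment and
# stored server-side; here it is created on the fly for demonstration.
shared_secret = pyotp.random_base32()

totp = pyotp.TOTP(shared_secret)
print("provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="operator@plant.example", issuer_name="Control Room HMI"))

# Later, when the user logs in with password + code:
submitted_code = totp.now()            # simulate the code the user would type
if totp.verify(submitted_code, valid_window=1):
    print("second factor accepted")
else:
    print("second factor rejected")
```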
| <urn:uuid:1454627d-35a1-4724-afca-776d08b28efc> | CC-MAIN-2022-40 | https://www.industrialcybersecuritypulse.com/facilities/protect-plcs-and-pacs-from-cybersecurity-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00776.warc.gz | en | 0.923277 | 1,330 | 2.640625 | 3 |
Trusted Platform Modules: 8 Surprises for IoT Security
Part 1 of a two-part TPM Surprise Series
Built into billions of devices, a Trusted Platform Module (TPM) is usually a specialized chip on an endpoint’s motherboard that stores cryptographic keys on behalf of its host system for authentication and protection of the endpoint. Each TPM chip contains one or more unique key pairs, certified by the vendor, called endorsement keys (EKs), for validating the TPM’s authenticity. A TPM can also store platform “measurements” that identify software and firmware running on the platform. To stop the TPM from protecting the system, a hacker would have to interfere with it physically. In addition to their popularity on the PC and network side, TPMs will be architected into billions of Internet of Things devices.
Surprise 1: TPMs are passive, not active devices. They do not control anything on the host system they are embedded on.
A widespread misconception is that a trusted platform module somehow controls the system it’s a part of, but a TPM is 100 percent passive with respect to the rest of the system.
The trusted platform module is a self-contained component that has its own storage and processing capabilities, which it uses for protected operations on internal resources such as keys and measurements. These resources, however, are data that are given to the TPM, or that it is asked to generate.
Typically, boot code uses the TPM to store measurements of software running on the system, and applications use the TPM to protect the application’s keys and report measurements. These activities are all externally driven, not initiated by the TPM.
Surprise 2: A TPM is only useful when other things in the device take advantage of it.
The TPM is part of a broader security ecosystem that includes everything from the BIOS to motherboards to account passwords. To obtain value from the TPM, system designers must create systems that rely on the TPM’s internal resources. In traditional TPM implementations, software is “measured” before it is run in order to identify rogue software. The measurements are stored in the TPM, giving it second-hand awareness of “bad” software. The TPM will protect keys it holds, refusing access to rogue software that does not meet the expected measurements. For example, for solutions like Microsoft Bitlocker, an attacker booting to the wrong OS could not decrypt data on the hard drive. Similarly, a TPM might not allow a key to be used to authenticate a device to a bank, preventing an attacker from unauthorized account access.
With proper integration, a TPM can support the security of billions of future IoT devices that would otherwise be difficult to protect. By creating system dependencies on a TPM for devices like automotive electronic control units (ECU), system designers can make it much more difficult to swap out a system component without detection.
Surprise 3: A TPM doesn’t help much — if at all — with the heralded secure boot.
Secure boot is a hot topic. Upon startup, a device should run only authorized code, not rogue software planted by a malicious actor. However, TPMs don’t provide secure boot; secure boot happens before the TPM comes into play. When a system powers on, early boot code (such as a UEFI BIOS) must decide which software will run next and which measurements are sent to the TPM. Only after the secure boot decisions are made can the TPM be used. The currently running software can use the TPM to authenticate or decrypt the next piece of software before it loads, but this does not protect a system if an attacker can get at the early boot code.
The TPM can support a well-designed boot process (including “measured boot” or “trusted boot,” which we will discuss later), but the TPM has no impact on a secure boot.
Surprise 4: TPM has not been particularly successful considering how long it’s been available.
TPM has withstood significant scrutiny and is well established in the security community. Given TPM’s favorable reputation, its longevity (more than 20 years and counting) and the fact that TPMs have shipped in volume in PCs since 2005, it’s surprising how few people really know how to work with them. As a result — especially in the IoT realm — TPMs are not being tapped to their full potential. TPMs became a ubiquitous checkoff item on RFPs for PC-related projects and appear in billions of devices today, but most devices use TPMs minimally, or not at all.
The good news: TPM 2.0 is more flexible than the original TPM specification, allowing the newest TPMs to be applied to many embedded applications, including industrial sensors and smart home devices. Example: There is a TPM 2.0 profile for using TPM on limited-functionality ECUs for automotive applications. Now, designers and developers can more easily select granular TPM functions, whether for vehicles or a valve controller at a water utility.
Surprise 5: Leveraging TPMs is exceedingly difficult. They were not designed to be user-friendly… and they’re not.
It took the top companies in the PC industry — Compaq, HP, IBM, Intel, Microsoft and others — years to build the ecosystem needed to make implementation of TPMs for PCs feasible. These companies carved out the TPM space, driving updates to hardware, firmware and software and defining new protocols. The expectation (or at least hope) was that with this infrastructure, TPM would become an effective enabler, acting like an interstate highway for security. That is, they would provide a smooth, straight, easy way to get to the destination. Unfortunately, TPM turned out to be so complicated that even with this rich ecosystem, almost nobody built solutions to leverage it.
Surprise 6: TPMs aren’t cheap.
Keep in mind, TPMs are hardware. Then, remember they’re not just hardware. Implementing a TPM solution also entails software, the device’s physical design, re-architecture of the system and modifications to integrate with the broader infrastructure. Adding a TPM could increase the cost of a device by fifty cents or more. For many embedded applications, that added cost is a dealbreaker. For devices already being re-architected or that have high security requirements, like those used to operate and secure industrial sites or critical infrastructure, the incremental cost is more likely to be justifiable.
Surprise 7: If a TPM is only used as a secure repository for encryption keys, money is probably being wasted.
Despite having a range of capabilities, TPMs are often used solely to protect symmetric or asymmetric keys, yet simpler hardware- or software-based designs can often do that job just as well. If your platform already has a TPM, by all means use it for key protection, but why not also take advantage of the TPM’s more powerful features, such as measurement-based access control and remote attestation?
Surprise 8: To retrofit an existing system, a hardware TPM is a non-starter.
Forget about it. Here’s why: the TPM must be architected into the overall system from the beginning. It’s not a last-minute add-on to plug in once a device has been produced. It’s hardware, and the platform must physically accommodate it. Moreover, the TPM must be fully integrated into the boot process and security functions of the platform.
Firmware- or software-based TPMs offer alternatives. They are typically less secure than a hardware-based TPM, but they can be integrated into your design more easily.
Making a TPM a requirement for your system will not magically cure your security problems, and the integration and implementation are (much) easier said than done, but don’t take that as a negative verdict. The TPM has already shown tremendous utility in system security, particularly for PCs — and it has great potential for the IoT market. Taking full advantage of TPM means going into depth. In Part Two of this series, we’ll drill into more TPM surprises, specifically their role in the Internet of Things and when they might be right for you.
Ari Singer, CTO at TrustiPhi and long-time security architect, is a former chair of both the Trusted Computing Group’s TPM work group and the TPM Software Stack (TSS) working group. He was a key contributor to the TPM 1.2 and 2.0 specifications, and has led teams that developed multiple TSS and TPM firmware implementations and TPM-enabled applications. With 16 years in trusted computing, Singer was an influencer in other security standards including EESS, IETF, IEEE 802.15.3 and IEEE 802.15.4. He was also chair of the IEEE P1363 working group.
What is data discovery?
- Identifying, cataloging and classifying business-critical and sensitive data
- Detecting information about your data’s structure, content and relationships
- A critical component of any data governance program
- All the above
As you probably guessed, the answer is “d. all the above”!
During our second installment of the Back to Basics: Data Catalog webinar series, our experts Sid Bardoloye, Principal Product Manager, and James Sizemore, Data Governance Domain Expert, joined Nathan Turajski, Sr. Director, Product Marketing, and me to define data discovery within the context of a data catalog. We also covered how data discovery is performed, what it enables and why it’s important for data governance.
If you missed this episode of our series (or our first episode), you can now watch both episodes on demand: Back to Basics Series Episode 2: Data Discovery. We’ve also summarized some of the highlights of this episode here:
1. Data Discovery Enables You to Find, Understand and Trust Your Data
Data discovery begins with scanning for data across your organization’s landscape, which may include on-premises or cloud-based sources, from data warehouses to ETL and BI tools, SAAS applications and more. Once you’ve located your data, you can discover further information about it, such as its structure, content and relationships. This information can be used to catalog data, enrich its metadata and provide context to help you understand what data exists, its source, its lineage, how it’s related to other data, etc.
Increasing transparency and enabling people to find and comprehensively understand your organization’s data builds trust in it, which is a must-have for data-driven organizations.
2. Automation is Key for Data Discovery
While data discovery can be performed manually, it’s not ideal for organizations with a vast data landscape — it’s simply not scalable since discovering data across a broad range of sources can be an extremely complex, costly and time-consuming process.
Luckily, organizations can use artificial intelligence (AI) techniques, such as classification and clustering, to facilitate data discovery. When data scientists and analysts don’t have to spend time manually performing data discovery, they can spend more time doing their jobs, saving organizations time and money.
Read more about AI-based discovery techniques in our eBook, Extract Value from Your Data with AI-Powered Data Discovery.
3. AI and Human Intelligence Are a Winning Combination
While organizations can rely on AI for scalable data discovery, human intelligence can enhance the process. Having a human review and curate data assets can augment and improve AI-driven data discovery. For example, within a data catalog, individuals have the ability to provide additional business context, and certify, rate and review AI-curated assets, helping users understand more about the data.
When human involvement occurs, utilizing approaches such as bulk operations helps to expedite repetitive tasks and optimize efficiency. Additionally, in an intelligent data catalog, AI can be trained using human input and feedback to further refine and improve the accuracy of the automated data discovery processes mentioned previously (i.e., classification, etc.).
4. Data Discovery Helps Organizations Extract Value from Their Data
Organizations can utilize data discovery to locate and classify personally identifiable information (PII) across data assets, which is extremely valuable and necessary to ensure compliance with external data privacy regulations that require organizations to know where this information is located, how it’s used and who has access to it.
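As a simplified illustration of what automated discovery of sensitive data can look like, the sketch below scans sample column values against a few regular expressions for common PII patterns. The column names, sample values and patterns are invented for the example; real catalog tools combine rules like these with trained classifiers and metadata analysis.

```python
import re

# Illustrative patterns only; production discovery tools use far richer rules and ML models.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def classify_column(values):
    """Return the set of PII types detected in a column's sample values."""
    found = set()
    for value in values:
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(str(value)):
                found.add(label)
    return found

sample_columns = {
    "contact": ["jane.doe@example.com", "john@example.org"],
    "notes": ["call 555-867-5309 after 5pm"],
    "order_id": ["A-1001", "A-1002"],
}

for column, values in sample_columns.items():
    print(column, classify_column(values) or "no PII detected")
```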
While discovering sensitive data is a significant use case for data discovery, discovery can also enable organizations to fuel analytics programs, including customer experience and loyalty initiatives, and accelerate cloud modernization, driving value creation opportunities for organizations.
5. Data Discovery is Critical for Data Governance
Data discovery is an integral part of any organization’s data governance journey. Organizations need to 1) understand what data they have, along with its context, 2) determine what data needs to be governed and 3) ensure that relevant data is being governed appropriately. Data discovery supports each of these requirements by helping organizations locate their data, facilitating metadata enrichment and classification, advancing data intelligence and much more.
Register for the Remaining Episodes of the Back to Basics: Data Catalog Webinar Series
Be sure to join us for the third episode of our Back to Basics: Data Catalog series, where you’ll learn the basics about data lineage, a fundamental element of data cataloging. You’ll also have the opportunity to have the data catalog experts answer your questions.
Our next episodes in the series will cover data asset analytics and data and analytics governance. Join us for one or all of the upcoming episodes. Register today!
For over ten years, the gaming industry has been thriving: games on smartphones and personal computers are enjoyed by tens of millions of consumers worldwide. The gaming market is already valued at tens of billions of dollars and is expected to continue growing at a significant Compound Annual Growth Rate (CAGR). Modern games are pleasant, user-friendly and create an entertaining experience for their users. In several cases they are also seen as powerful tools for exercising critical thinking, improving team spirit, and boosting collaborative problem-solving. These properties have given rise to business uses of gaming: nowadays many enterprises leverage games and gamification concepts in business applications.
The term “serious games” denotes games that serve purposes other than entertainment, such as learning, nudging, behavioral change, and soft skills development. The rationale behind the development and use of serious games is to create an engaging, entertaining, immersive, and pleasant experience for their end-users. To this end, the gaming scenario is designed in conjunction with learning strategies and domain knowledge. Many serious games involve rewards and competition to motivate users to engage in training and skills development activities. Furthermore, serious games are extremely efficient when it comes to training entire groups of learners in settings like collaborative work and team projects.
Serious gaming is already used in various sectors, including the education, healthcare, hospitality, marketing, and manufacturing industries. Specifically:
Serious games are not like every other video or mobile game. They come with some characteristics that set them apart from conventional gaming experiences. Specifically:
The gaming industry has always had a two-way relationship with IT technologies. On the one hand, innovative games have driven technological development in areas like Graphical User Interfaces (GUIs), video streaming and Augmented Reality (AR). On the other hand, the development of innovative technologies like Virtual Reality (VR), Mixed Reality (MR), high-speed networks, and cloud computing is enabling unprecedented advances in gaming technology and applications. This symbiotic relationship between technology and gaming is an inspiration for serious games stakeholders, while driving the main trends that define the present and future of serious games. Specifically, the future of serious games is influenced by the following technological advances:
During the last couple of years many enterprises have leveraged the benefits of serious gaming in their business. Managers all over the world are increasingly dispelling the hype and taking advantage of the business benefits of serious games. This is the reason why it is really time for more CIOs to get serious about business gaming.
Science is bearing out what we all know to be true from our own experience: Noise levels in the workplace are a notable deterrent to productivity. Independent studies of business environments that employ some method of “sound masking” reported productivity gains of 8% to 38%, stress reduction of up to 27%, and job satisfaction increases of 125% to 174%, according to an audio industry report. And according to Scientific American, even low-level background noise disrupts concentration and produces damaging effects on the brain, e.g.:
“Several studies have indicated that stress resulting from ongoing white noise can induce the release of cortisol, a hormone that helps to restore homeostasis in the body after a bad experience. Excess cortisol impairs function in the prefrontal cortex—an emotional learning center that helps to regulate ‘executive’ functions such as planning, reasoning and impulse control.”
As per findings by the National Institute for Occupational Safety and Health, ambient noise also affects people’s health by increasing general stress levels and aggravating sensitive conditions such as high blood pressure, coronary disease, peptic ulcers and migraine headaches. Continued exposure does not lead to habituation; in fact, the effects worsen.
Unfortunately, this environment is the definition of the prevailing, open workplace—the same design that accounts for billions of dollars’ worth of office furnishing purchases per year, as reported in Fortune magazine. It’s the most prevalent type of workplace layout in the marketplace. That leaves the typical employer with a problem: The modern office environment is inadvertently counterproductive, even unhealthy.
Without causing a revolution in office design, how should companies combat these deterrents, especially in long-established spaces? Audio headset technology is an often overlooked element of the workplace infrastructure, yet it offers tremendous control over the level of distraction and ambient noise that impedes the communication and collaboration that an open workspace was ironically conceived to deliver. Too many unified communication deployments feature low-end headset technology in order to contain costs, unaware that superior audio technologies exist at competitive price points. The decision to purchase low-quality audio solutions is one that many technology resellers—and their business owner customers—are making unnecessarily, and to their own detriment.
High-end headsets, especially those with superior noise-canceling capabilities and other productivity-enhancing features, are among the few technologies that can mitigate the office noise that causes this documented reduction in employee performance. Technologies exist that can modulate distractions not only from voices and general noise escalation, but also that reduce sounds right down to crinkling pages and keyboard noises. In addition, a superior audio headset will offer design elements that extend much longer-wearing comfort to users in a call center or office environment, reducing the physical duress that accelerates fatigue and impedes the wearer’s ability to focus.
When making strategic audio purchases, both business owners and the integrators who sell these technologies need to factor in the less obvious but vital costs of sub-par headsets, e.g., loss of performance, unsatisfactory customer service delivery, and lower job satisfaction. These are the unfortunate byproducts of the elevated stress, ongoing physical irritation and reduced concentration that occurs in open-space environments without the aid of premium audio.
Another factor of the modern office environment is the trend toward a mobile workforce. Employees on-the-go often navigate challenging surroundings varying from loud subways and other public transportation, to windy outdoor streets and moving vehicles. Superior wireless or Bluetooth® headsets have been engineered to address these problems, including technologies that utilize multiple directional microphones. Sophisticated, integrated applications can modulate between these multiple inputs and utilize whichever channel produces the most intelligible sound for the listener.
Even in open indoor spaces, premium headsets offer dongle combinations that empower workers to travel from place to place within their environments, sometimes up to 590 feet, depending upon the solution. This allows them to retrieve documentation, confer with colleagues in other areas, or simply vary their surroundings during a call. Such freedom not only accommodates more fluid collaboration—which an open-space environment was conceived to promote—but alleviates the health disadvantages of being a sedentary worker. We’ve all heard the recent media reports. Sitting for hours at a time has been likened to smoking as a detriment to one’s physical condition.
The adage “you get what you pay for” is therefore not limited to the consumer sector. The conclusion is the same, whether you are a technology integrator or a business owner who wants to maximize both the workspace and the workforce: Superior audio technology must be more seriously considered in telecommunications deployments. It puts control of the modern office environment back into the hands of the people who manage and work in these spaces, making the open-office environment function as the collaborative and versatile setting it was intended—and helping technology resellers deliver more effective overall solutions.
All businesses–from small, family-owned to multinational corporations–can be targets of ransomware attacks. According to the National Cybersecurity Alliance, ransomware attacks impacted 65,000 U.S. organizations in 2020.
Global costs of ransomware attacks totaled $20 billion in 2021. Companies are asking what they can do to protect themselves, and if ransomware protection as a service is the right solution for them.
What Is Ransomware and How Can It Be Prevented?
Ransomware attacks hold your company’s data hostage until you pay to have access returned or prevent the attacker from sharing sensitive information.
So how does a ransomware attack work? The attacker sends a link containing malware, usually through an email or social media message. If one of your employees opens the link, it gives the malware access to your file system, and it encrypts your data. Once encrypted, you lose access to your data, and the attacker can share it with whoever they want.
What’s the Impact of a Ransomware Attack?
The average cost of a ransomware attack is $3.8 million. Not only do companies have to pay the ransom to regain access to their data, but they can also suffer a loss in productivity during the attack. Furthermore, if an attacker exposes sensitive information like health records or financial information, it can shatter trust in that company.
With the cost of an attack being so high, do companies survive ransomware attacks? Sixty percent of small and mid-sized businesses don’t. While recovery is possible, it’s best to have a protection plan in place.
Can You Stop a Ransomware Attack?
Being targeted by a ransomware attack is not a matter of if, but when. While it is possible to reduce the risks of a cybersecurity breach on your own, you can get the best results by working with a company like InterVision that specializes in Ransomware Protection as a Service™ (RPaaS™).
Can My Antivirus Detect Ransomware?
Yes and no. The issue with most antivirus software is that it works to stop threats that it recognizes, but it is often unable to detect and stop newer threats.
Why Is Ransomware Not Detected by My Antivirus Software?
Antivirus software that comes with your operating system, such as Windows 10, has slower update times that have trouble keeping up with how quickly ransomware changes. Because of this, these programs don’t even know what they’re supposed to be preventing by the time ransomware takes hold of your files.
While antivirus software is an essential component of your overall cybersecurity plan, it works best when paired with malware protection software that is capable of real-time threat detection.
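As one concrete example of the kind of signal real-time protection tools can use, the sketch below flags files whose bytes look statistically close to random (high Shannon entropy), which is typical of encrypted output. The directory path is a placeholder, and this single heuristic also flags legitimately compressed formats such as ZIP or JPEG, so real products combine many behavioral signals rather than relying on one check.

```python
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: values near 8.0 mean the data looks random."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def flag_high_entropy_files(root: str, threshold: float = 7.5, sample_size: int = 65536):
    """Yield files whose sampled contents look near-random (possibly encrypted)."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            sample = path.read_bytes()[:sample_size]
        except OSError:
            continue  # unreadable file; skip it
        if shannon_entropy(sample) >= threshold:
            yield path

if __name__ == "__main__":
    for suspicious in flag_high_entropy_files("./documents"):
        print(f"High-entropy file (possible encryption): {suspicious}")
```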
Ransomware Incident Response Playbook
Cybersecurity threats may seem daunting if you’re not an IT specialist, but you can start strengthening your protection against ransomware. The first step is understanding your current risk by following this ransomware protection checklist:
- Does my operating system come with antivirus software? Is it activated?
- Are my employees trained to identify and avoid suspicious emails, social media messages, and websites that may contain harmful links?
Some companies can achieve the basics on their own. However, when it comes to the kind of prevention and recovery plans that bring peace of mind, working with professionals, like the team at InterVision, yields the best results.
- Do we use security information and event management (SIEM) technology to identify and analyze threats in real-time?
- Do we limit access to sensitive information so if malware gains access to some of our files it doesn’t gain access to all of them?
- Are our private messages and sensitive information encrypted to make it harder for ransomware to gain access?
- Do we use disaster recovery as a service (DRaaS) to back up our files to a third-party location so we can still function even during a ransomware attack?
If your ransomware prevention plan doesn’t check all of these boxes, then InterVision’s RPaaS could be what you need to keep your data protected.
What Do Ransomware Protection Companies Do?
A good ransomware protection company combines prevention and recovery methods to protect your business from costly ransomware attacks. At InterVision, we employ a holistic approach to mitigate risks from all angles.
InterVision takes a comprehensive approach to helping your business prevent, detect and recover from a ransomware attack. From Security Operations as a Service (SOaaS), which monitors and warns of threat activity to stop attacks before they happen, to ongoing advisory and assistance from a virtual CISO (vCISO) to support security processes and risk mitigation, to fully managed replication and recovery from service disruption, InterVision’s RPaaS solution covers everything. And then some.
Contact us to get started with full ransomware protection.
Whether physical or cyber, security is one of the most essential concerns for the chemical industry. In this article, we will take a closer look at the cybersecurity requirements. Keep reading to learn more!
With advancements in technology and the Internet of Things, most processes related to the production, shipment and storage of chemicals rely heavily on automation and cyber solutions. While making our lives easier and production faster, such a degree of automation also introduces some serious vulnerabilities. In this article, we will discuss how you can make sure that your chemical plant, factory, storage unit or shipment service is safe.
What is Cyber Security?
The term cybersecurity refers to the practices and processes that aim to protect computer networks, devices and other cyber assets from data leakages, cyber attacks and/or unauthorized access.
Cyber security practices can be applied to a wide range of areas including but not limited to the military, commerce, law enforcement, infrastructure, intelligence, judicial, interior and information systems. As the technology develops and internet technologies become more and more integrated into our business processes and daily lives, the application area of cyber security grows significantly.
In order to make sure that your organization, data and assets are safe from hackers, cyber attackers and various other malicious agents, cyber security employs numerous different methods borrowed from many disciplines like computer science, criminology, cryptology and information systems.
The aims of cyber security practices can be categorized under availability, confidentiality, integrity, authentication and nonrepudiation. Parallel to these goals and the vulnerabilities of your organization, unique cyber security measures can be designed and implemented.
What are cyber attacks?
A cyber attack can be defined as an attempt to gain unauthorized access in order to steal, expose, alter or destroy sensitive information. To attain their goal, cyber attackers can employ many different methods including social engineering, catfishing, denial of service attacks, malware and more.
Why does your organization need cyber security?
If your organization is part of the chemical industry, cyber attacks can seriously harm it in different ways. For instance, a cyber attack can cause:
What can be done to keep your organization safe?
If you want to keep your organization safe from cyber threats, you should consider employing tailor made solutions like SIEM or SOAR. Contact us to learn more about how SIEM and SOAR can help you enhance the security posture of your organization and how you can implement an automated security solution for the prevention of potential losses.
An Internet Exchange Point (IXP) is a crucial part of the technical infrastructure that connects networks together. These networks include ISPs, mobile operators and content delivery networks such as Google Cloud Services or Facebook, which operate their own global data centers; exchanging traffic at an IXP helps them deliver large files to users in different countries quickly and without unnecessary delay.
How do Internet Exchange Points work?
Internet Exchange Points are a cost-saving alternative to sending local Internet traffic abroad. They provide shorter, more direct routes for a network’s data, which lowers transit costs and improves customer satisfaction, because local traffic no longer has to cross expensive international links or pass through a chain of other networks on its way to a nearby destination.
An IXP functions as a shared interconnection fabric that many other networks rely on. Physically, it is made up of switches and routers that move traffic between member networks, along with the servers and IT staff that keep everything running. Its success or failure is not just a matter of technical know-how: an exchange also depends on strong relationships among its member organizations, because peering is ultimately an agreement between people as well as machines.
Internet Exchange Points
The Internet lets people and companies connect from practically any device, but that convenience can come at a cost in speed: traffic that has to travel long distances without a high-quality local interconnection slows down. IXPs address this by giving networks in a region a common place to exchange Internet traffic directly, which is far more efficient than every network hauling its traffic to a distant hub in another country. To connect to an Internet Exchange Point, a network needs a router with an Ethernet connection into the exchange’s switching fabric.
Because networking is constantly changing, it is important to understand not only how networks work today but also how they are likely to evolve. With that knowledge at hand, you can design solutions for your organization that meet today’s requirements as well as tomorrow’s.
Internet exchange points are the most efficient way to reduce the distance data travels between its source and destination. Establishing a point of presence at a nearby exchange shortens that path dramatically; without one, traffic between two local parties may still have to travel hundreds of miles to a distant interconnection point and back, adding latency and cost.
When you start diving into this subject, you’ll see there is a lot to consider to ensure you’re setting yourself up for the best possible print or digital representation of an image. DPI, PPI, Resolution, Optimized DPI…. Then you start throwing math and calculations in and we all start to go a little fuzzy-eyed. Without making the whole thing way too complicated, we’re going to talk about some of the terminology and general basics so you can be a more educated consumer. It might just affect the way you print or even share online.
DPI (Dots Per Inch)
This is the number of dots in a printed inch. Printers print by putting ink or toner onto paper; typically by either spraying tiny drops of ink, or melting dots of toner against the paper. Basically, the more dots you can squeeze into that inch, the sharper that image will look to you and the more detail you will see.
PPI (Pixels Per Inch)
Basically PPI is the counterpart to DPI, but think of this in terms of the digital world. When you’re looking at your an image on your computer monitor, you’re seeing pixels rather than printed dots on paper.
Optimized DPI
When looking at printers and printer specifications, sometimes you’ll see this term (or similar ones) used by the manufacturers. What it means is that the print-head has optimized the placement of those dots on the paper… essentially layering dots on the same part of the page multiple times. What results is a richer image, but the trade-off is in the ink or toner taken to print and the time spent by the printer to provide you with the end result. What I mean is that there’s a trade off somewhere in this process, so it’s something to consider when you’re looking at it as an option for yourself or your business.
Resolution
Often, this term is used interchangeably with DPI or PPI, but they are different. Resolution is the measure of pixels in the display (think about the dimensions of the image you’re seeing; width x height). Higher resolutions typically mean more detail in the image, and thus you’d see a higher resolution image correlate with a higher file size. Of course, this is in terms of the digital world… so let’s break this down a bit for print.
Consider for a moment that you have a digital photo you want to print. It was originally shot at 72dpi and 3008pxl by 2000pxl. The goal is to get this printed at a reasonable size, and keep it looking great. If we were to just print this as-is, the resulting image would be huge; 41.7” x 27.8”. But we’d much rather print this at photo-album-size; 6”x4”. What we see when we just scale down the size of the image but don’t adjust that dpi is a pretty significant loss to the image’s resolution; from the original 3008×2000 to 432×288. Plus, most printing companies require the digital photo you’re sending them for printing to be at least 300dpi for their machines. So, if we change that dpi and then set the size to 6″x4″, the resulting image resolution is 1800×1200. That is a big improvement: instead of keeping only about 14% of the original pixel dimensions (as the 72dpi version does), we keep roughly 60%. Print those side by side at your 6×4 inch size and you’ll see the difference quite easily.
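If you would rather let code do the arithmetic, the relationship is simply pixels = inches x DPI. Here is a small sketch of the calculations behind the example above:

```python
def print_size_inches(width_px: int, height_px: int, dpi: int) -> tuple[float, float]:
    """Physical print size for a given pixel resolution and DPI."""
    return width_px / dpi, height_px / dpi

def pixels_needed(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
    """Pixel dimensions required to print at a target size and DPI."""
    return round(width_in * dpi), round(height_in * dpi)

print(print_size_inches(3008, 2000, 72))  # about (41.78, 27.78): the huge print size mentioned above
print(pixels_needed(6, 4, 72))            # (432, 288): roughly 14% of the original width
print(pixels_needed(6, 4, 300))           # (1800, 1200): roughly 60% of the original width
```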
If you only take away one thing from this article, remember this… your resolution works in conjunction with the dpi. If you need to change either the dpi or the resolution on the source file for some reason, then know the other side will have to give a bit. You can always go down in resolution, but you can’t go up without a loss of some kind.
If you’re looking for more info on additional factors to print quality, you might want to check out these articles on inkjet printers or toner cartridges as well. And as always, if you have any questions please feel free to leave us a comment or drop us a line.
When considering artificial intelligence and machine learning systems, humans must always be “the final decision-maker” — especially when used in the workplace.
Copyright by technical.ly
In its annual report, the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban on technology designed to recognize people’s emotions in certain cases.
Specifically, the researchers said affect recognition technology, also called emotion recognition technology, should not be used in decisions that “impact people’s lives and access to opportunities,” such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions.
What is this technology, which is already being used and marketed, and why is it raising concerns?
Outgrowth of facial recognition
Researchers have been actively working on computer vision algorithms that can determine the emotions and intent of humans, along with making other inferences, for at least a decade. Facial expression analysis has been around since at least 2003. Computers have been able to understand emotion even longer. This latest technology relies on the data-centric techniques known as “machine learning,” algorithms that process data to “learn” how to make decisions, to accomplish even more accurate affect recognition.
The challenge of reading emotions
Researchers are always looking to do new things by building on what has been done before. Emotion recognition is enticing because, somehow, we as humans can accomplish this relatively well from even an early age, and yet capably replicating that human skill using computer vision is still challenging. While it’s possible to do some pretty remarkable things with images, such as stylize a photo to make it look as if it were drawn by a famous artist and even create photo-realistic faces — not to mention create so-called deepfakes — the ability to infer properties such as human emotions from a real image has always been of interest for researchers.
Emotions are difficult because they tend to depend on context. For instance, when someone is concentrating on something it might appear that they’re simply thinking. Facial recognition has come a long way using machine learning, but identifying a person’s emotional state based purely on looking at a person’s face is missing key information. Emotions are expressed not only through a person’s expression but also where they are and what they’re doing. These contextual cues are difficult to feed into even modern machine learning algorithms. To address this, there are active efforts to augment artificial intelligence techniques to consider context, not just for emotion recognition but all kinds of applications.
Reading employee emotions
The report released by AI Now sheds light on some ways in which AI is being applied to the workforce in order to evaluate worker productivity and even as early as at the interview stage. Analyzing footage from interviews, especially for remote job-seekers, is already underway. If managers can get a sense of their subordinates’ emotions from interview to evaluation, decision-making regarding other employment matters such as raises, promotions or assignments might end up being influenced by that information. But there are many other ways that this technology could be used. […]
Machine Learning Challenges in the implementation of Industrial Internet of Things
We are living in the era of the 4th Industrial Revolution – a revolution based on extreme automation of machine-to-machine communication, and on much more than just communication. Machines can understand each other, negotiate with each other, and sign and execute contracts with each other. They can predict each other’s behavior and effectively establish a social network of machines. This type of network is self-sustaining and can run without much human intervention. At the heart of this exciting revolution are several advanced technologies working together: Artificial Intelligence, Machine Learning, sensing, the Internet of Things, Big Data, analytics, cloud and edge computing, to name a few. It is amazing to see and experience how this complex mesh of technologies works together to produce incredible outcomes. The results promise to significantly cut the cost of production, improve overall productivity, improve product and process quality, reduce downtime, and more.
Machine learning is an important part of this whole technology implementation.
The core of a machine learning implementation is to maximize the value of available raw data. There is a lot of data available to us around the work that is currently going on; every action produces data. Machine learning is about analyzing this available data, whether it is historical data or a constant stream, and generating information in a form that is understandable to the outside world or to downstream interfaces.
Implementing a machine learning algorithm that works in an industrial environment and produces trustworthy results is not easy. It requires an unusual combination of domain knowledge, a problem-solving mindset, and people who are good at crunching numbers and love statistics.
Challenges of Machine Learning Implementations
Some of the challenges of Machine Learning implementations are listed here and should be kept in mind while designing the solution.
Selection of the right algorithm
There are tens of widely used algorithms available for ML implementation. Though most algorithms can work under generic conditions, there are specific guidelines about which algorithm works best under which circumstances. Improper algorithm selection can produce garbage output after months of effort – wasting the entire effort and pushing target timelines further out.
Selection of the right set of data
As they say, garbage in will produce garbage out, and this applies squarely to the data sets used for machine learning. The quality, amount, preparation, and selection of data are critical to the success of a machine learning solution. Data selection may also be impacted by bias; it is important to avoid selection bias and select data that is fully representative of the cases.
Historical data is very messy and often contains missing values, meaningless values, outliers and so on. Parsing, cleaning and preprocessing such data can be a tedious job. Feature properties and value ranges have to be studied, and techniques like feature scaling need to be applied to prevent certain features from dominating the entire model.
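As a brief illustration of that preprocessing step, the sketch below fills missing values and standardizes features with scikit-learn so that no single large-valued reading dominates a model. The sensor readings are made up for the example.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Made-up sensor readings: temperature (deg C), vibration (mm/s), pressure (kPa).
# np.nan marks the missing values typical of messy historical plant data.
X = np.array([
    [72.0, 0.31, 101.2],
    [75.5, np.nan, 99.8],
    [np.nan, 0.29, 100.5],
    [81.2, 0.45, np.nan],
])

# Fill gaps with column means, then scale each feature to zero mean and unit variance.
X_filled = SimpleImputer(strategy="mean").fit_transform(X)
X_scaled = StandardScaler().fit_transform(X_filled)

print(X_scaled.round(2))
```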
Supervised ML algorithms tend to be the easier and more appropriate choice. Selecting and implementing an unsupervised ML algorithm is a very tedious and lengthy process, sometimes requiring several unsuccessful iterations. Supervised ML algorithms, however, require data labeling. Data labeling is a manually intensive task – but at the same time it can’t simply be outsourced. The classic case is healthcare: for predictive diagnosis to work, the available medical data has to be labeled, and the labeling requires constant input from medical experts and doctors. However, those specialized experts often view labeling as a simple waste of their time.
There are many other challenges: managing model versions, managing data versions, reproducing models, and so on. Good ML skills are scarce, and frequent personnel changes in the teams working on a complex implementation can be a nightmare. Machine learning is a constantly evolving process – systems and their features change at regular intervals and need to be incorporated into the machine-learning setup. When it is time to tweak the systems, teams often find that the earlier models, features, and datasets were not documented appropriately and the team that implemented the system has moved on. In the absence of key personnel and documentation, it becomes very difficult for the current team to maintain the system.
Further reading: Machine Learning Algorithms in Autonomous Driving
The article was written for IIoT World by Anil Gupta, Co-founder of Magnos Technologies LLP. He has about 23 years of experience in Connected Cars, Connected Devices, Embedded software, Automotive Infotainment, Telematics, GIS, Energy, and Telecom domain.
When preparing for security exams such as Security+ or SSCP, you should know the differences between a CAC, PIV, and smart card. These are also known as a common access card (CAC), a personal identity verification (PIV) card, and a smart card. All three are used for authentication. More specifically, each of them are in the Something You Have factor of authentication.
Users prove their identity with authentication and there are three factors of authentication. They are commonly known as:
- Something you know, such as a password or PIN
- Something you have, such as a smart card, CAC, PIV, or RSA token
- Something you are, using biometrics
A smart card is a credit card sized card that has an embedded microchip and one or more certificates. The information on the card identifies the user and the certificate also includes the user’s private key used for asymmetric cryptography.
Users are often required to enter a personal identification number (PIN) along with the smart card. Using a smart card (something you have) and a PIN (something you know) provides multifactor authentication. Combining two or more factors of authentication is more secure than using only a single factor.
Both a CAC and a PIV provide the same benefits as a smart card, but they also include photo identification.
A common access card (CAC) is a smart card used by employees and other personnel in the United States Department of Defense (DoD). A CAC includes a picture of the user along with other information such as their name. DoD employees wear the CAC as a badge and can show it to guards to prove their identity. They can also use it as a smart card to log onto systems.
A personal identity verification (PIV) card is also a specialized type of smart card used by personnel in United States federal agencies. Just as a CAC does, the PIV card includes a picture of the user along with their name. A PIV can be used for visual verification of users, and then as a smart card when users log onto their computer.
Benefits of Smart Card, CAC, and PIV
Each of these provide some specific benefits worth emphasizing. They are:
- Authentication. A basic purpose is to allow users to prove their identity.
- Confidentiality. The certificate can be used with asymmetric cryptography to ensure confidentiality of data.
- Integrity. The certificate can also be used with digital signatures to provide integrity for the message.
- Non-repudiation. In addition to providing integrity, a digital signature also provides non-repudiation.
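As a minimal sketch of how a certificate's key pair supports integrity and non-repudiation, the example below signs and verifies a message with the Python cryptography package. On a real smart card, CAC, or PIV card, the private key is generated and kept on the card and never leaves it; here a key is generated in software purely for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In practice the private key would stay inside the smart card; this software key is only a stand-in.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Approve purchase order 4721"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Signing with the private key: only the card holder can produce this value (non-repudiation).
signature = private_key.sign(message, pss, hashes.SHA256())

# Anyone with the certificate's public key can verify it; any change to the message fails (integrity).
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```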
Security+ Practice Test Question
Q. Which of the following includes a photo and can be used for identification?
A. MAC
B. DAC
C. RBAC
D. CAC
Security+ Practice Question Answer: D
A common access card (CAC) includes a picture used for identification and can also be used as a smart card. While not included in the answers, a personal identity verification (PIV) card also includes a picture and can be used as a smart card. A media access control (MAC) address is assigned to a network interface card or wireless network adapter. Discretionary access control (DAC) is an access control model; Microsoft’s NTFS uses DAC. Role based access control (RBAC) is an access control model; RBAC uses roles or groups and users are placed into a role or group based on their assigned jobs.
Further use of emerging technologies marks the path forward for the intelligence community to uncover more compelling evidence of unidentified flying objects (UFO), and to learn more about their meaning and intent, former Federal government officials said at NextGov’s technology summit on August 17.
Lue Elizondo, former director of the Advanced Aerospace Threat Identification Program (AATIP) at the Department of Defense (DoD), explained that gathering hard data from a variety of sensors has provided a crucial bridge in the government’s approach to studying UFOs – also referred to as unidentified aerial phenomena (UAP).
In addition to eyewitness UFO/UAP testimony, Elizondo said, Federal government personnel can utilize ground-based sensors, sea-based sensors, airborne sensors, and now space-based sensors to provide compelling reports to Congress on the issue.
“On top of eyewitness testimonies from trained observers, we have data from sensors as another layer of information that is all corroborating the same information at the same time and place under the same circumstances. That is very compelling,” said Elizondo.
A recent preliminary assessment from the Office of the Director of National Intelligence on Unidentified Aerial Phenomena reported that 80 out of 144 sight reports involved observation with multiple sensors that captured enough real data to allow initial assessments.
While Federal agencies have compiled enough data to present compelling evidence of UAPs, former Senate Majority Leader Harry Reid, D-Nev., said at the August 17 event that the government must continue working on the most important questions – what are UAPs, and what do they represent.
According to Reid, it would be negligent – and even become legislative malpractice – if the Federal government does not go further to find out more about UAPs. That’s mainly because UAPs are a matter of national security concern.
“We have evidence of these multiple sightings. But I think it’s essential that we continue trying to find out what these things are,” said Reid.
For a subject once relegated mainly to the province of science fiction, the Federal government has come a long way in recent months in acknowledging the validity of UAP sighting data. And a June 2020 vote by the Senate Intelligence Committee requires U.S. intelligence agencies and the DoD to compile an unclassified report covering all data collected on UAPs.
Elizondo explained that transparency on the issue by the Federal government – not just within the classified space but also with the public – is vital because democracy cannot work for the interests of the American people if the government is not truthful, especially with national security concerns.
“There’s someone, somewhere with some technology that can come in unimpeded any time they want to at will and undetected over our sensitive facilities,” Elizondo said of UAPs. “That’s a problem.”
Disk-to-disk-to-tape backup is a very simple concept that, implemented correctly, can greatly affect a company’s ability to reliably back up and recover data. As explained previously, there are essentially three ways to implement a disk-to-disk-to-tape backup: from the host, from the SAN (in the form of an appliance) or within a tape device. In last month’s column I discussed the host-based solutions. This column will take a closer look at the appliance-based option.
Virtual Tape Library
Backup solutions usually consist of three pieces: backup server, backup client and a tape device. With the host-based solution, the backup server needs extra software (to write to disk instead of tape) and a large pool of disk-storage. With the appliance-based solution, a server is added and is either bundled with disk, or a disk must be purchased separately.
The backup software remains the same. An appliance-based solution is generically called a virtual tape library (VTL). The VTL can emulate many different tape libraries that the backup software recognizes and can be integrated into the existing infrastructure seamlessly.
Virtual tape is a concept that has been in the data center for many years. Originally introduced for IBM mainframes, it is now exploding in the open systems arena. VTLs are logically just like physical tape libraries: they logically appear and operate as physical tape devices including virtual tape drives, data cartridges, tape slots, barcode labels and robotic arms.
A VTL is physically a highly-intelligent optimized disk based storage appliance. Because a VTL completely emulates a standard library, the introduction of virtual tape is seamless and transparent to existing tape backup/recovery applications.
Traditional tape devices have a few problems: they are slow and the only way to solve that problem is to add more and more tape drives. Tape library robotics are prone to failure and the tape media itself is “delicate” and must be stored in a conditioned, secure environment.
To increase backup performance, backups can be multiplexed across multiple drives and tapes. This increases the odds of a failed backup due to a bad tape, a faulty drive or malfunctioning robotics.
* (See story correction below.)
Restores from tape are also time-consuming. Consider trying to recover a file that was part of a 5-tape multiplexed backup. Each of the five tapes must be located in the library and loaded into tape drives. If the drives have tapes in them already, the tapes must be removed from the drives and moved to free slots before another tape may be loaded. Once the tapes are loaded, they must be advanced to where the file is and then, finally, the file can be read from tape. It can take many minutes just to start the recovery. If the tapes are not in the library, it can take many hours to recover a single file.
On the plus side, tape-based solutions are usually considered to be relatively inexpensive. But when the tape media is considered, the cost can skyrocket.
Some studies show that users will buy thirty times the number of slots’ worth of tapes during the life of a library. For a medium-sized, 100-slot library, that’s 3,000 tapes. LTO-3 tapes are currently in the $100 price range. Add the fact that extensive human intervention is required to manage and maintain a tape solution, and tape is not inexpensive at all. It can cost a lot of money to be able to successfully back up your data 60% of the time.
Comparing the problems associated with tape solutions with a virtual tape solution shows how a VTL can change how a datacenter can be run.
A virtual tape library solution can perform backups at 10 times tape speed. Speeding up the backups greatly shrinks the backup window, which allows servers to be backed up faster. With existing backups finishing quicker, second- and third-tier servers that have not been backed up in the past may now fit into the backup schedule.
Recoveries are also significantly faster using a VTL (typically much faster than the backup). A single file can be recovered from a VTL faster than most tape libraries can find and load a tape into a drive. Full backups that span multiple tapes (as opposed to multiplexed backups) also recover very slowly compared to a VTL: after the data is read from one tape, the next tape must be located and loaded, whereas a VTL just keeps streaming data from disk.
All virtual tape loads are immediate so there is virtually no delay when a new “tape” is “loaded”. People who resort to multiplexing backups to increase the performance of the backup are usually shocked to discover that their recovery times will be about twice as long as the backup was.
Most virtual tape library solutions also contain RAID protected storage that has redundant, hot-swap components (drives, power, cooling). Backups that use a VTL rarely fail because of a VTL failure. Recoveries will never fail due to a bad or lost tape.
VTLs are just not prone to the types of failures that a traditional tape library has. For example, a backup to disk will never fail because of a bad tape, broken tape drive or broken robotics.
The initial cost of a tape library can be less than the cost of a VTL. But when a three or five year cost-of-ownership is considered (tape media, failed backups, lost data (due to failed recoveries), management costs, etc.) a VTL will be less expensive.
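To make that cost-of-ownership argument more tangible, here is a rough back-of-the-envelope comparison in Python. The hardware prices, labor rates and management hours are hypothetical placeholders (only the 30-tapes-per-slot and $100-per-tape figures come from the discussion above), so treat it as a sketch of the calculation rather than a real quote.

```python
# Rough multi-year cost-of-ownership sketch: physical tape library vs. VTL.
# All prices and labor figures below are hypothetical placeholders.

def tape_library_tco(years, slots=100, library_cost=50_000,
                     tapes_per_slot_lifetime=30, tape_price=100,
                     annual_mgmt_hours=500, hourly_rate=75):
    """Hardware plus media plus hands-on management, assuming a 5-year media life."""
    media_cost = slots * tapes_per_slot_lifetime * tape_price * (years / 5)
    mgmt_cost = annual_mgmt_hours * hourly_rate * years
    return library_cost + media_cost + mgmt_cost

def vtl_tco(years, vtl_cost=120_000, annual_mgmt_hours=100, hourly_rate=75):
    """Higher up-front hardware cost, far less manual media handling."""
    return vtl_cost + annual_mgmt_hours * hourly_rate * years

for years in (3, 5):
    print(f"{years}-year TCO: tape ${tape_library_tco(years):,.0f} "
          f"vs. VTL ${vtl_tco(years):,.0f}")
```

Plugging in real quotes for hardware, media and staff time changes the absolute numbers, but the structure of the comparison, and the way media and management costs accumulate against tape, is the point.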
Also consider the lower cost of backup software. Some backup software is tiered based on the number of tape slots. By configuring a virtual library to have few slots, but very large “tapes”, the software tier can be lowered. For backup software that is tiered on the number of tape drives, configure a virtual library with fewer “drives”. Some backup software solutions are now adding a virtual tape library option which is priced based on the capacity of the library.
Today’s virtual tape libraries range from a customer supplied server with VTL software and separate disk to a completely productized solution where the server, software and disk are all bundled.
There are pros and cons for both extremes.
With an “unbundled” solution, the user gets to purchase each piece separately. The “pieces” include the VTL software, server, disk and potentially the SAN infrastructure. Unfortunately, the user must also purchase separate support agreements for the VTL software, server, disk and SAN infrastructure and each piece must be managed and monitored by the IT staff.
With a bundled solution, all the “pieces” are included, tightly integrated and guaranteed to work together. The solution is managed and monitored as one entity and support is covered by one contract.
With current bundled VTL solutions expanding to a petabyte or more, scaling the solutions is not an issue. Adding an additional VTL for each petabyte of backup is acceptable for most environments. The only negative with a bundled solution is that it is bundled. Some people just do not like that. When it comes to backup, it is best to keep things very simple, so a bundled VTL solution is probably the best bet (there are not very many home-made tape libraries in production, so why make your own VTL solution?).
Adding a virtual tape library into an existing tape environment will always improve the reliability of backups and recoveries. Even if a VTL is configured to backup only as fast as an existing library, the increase in performance for day-to-day data recovery makes the investment a no-brainer.
Add the robust data processing capabilities not available in physical tape libraries (ex. replication, single-instance data storage) and a VTL can open the door for tremendous advances in tape backup methodology and revolutionize traditional operations.
If you are not considering a VTL today, you should tomorrow.
Jim McKinstry is senior systems engineer with Engenio Information Technologies, an OEM of storage solutions for IBM, TeraData, Sun and others.
* The following paragraph, which was orginially part of the article, was wrongly attributed to the analyst firm Gartner. Gartner said they never published these numbers: “The analyst firm, Gartner, has reported that almost 50% of all backups are not recoverable in full, and that approximately 60% of all backups fail in general. These failures are mostly associated with tape, drive or robotic failures.” | <urn:uuid:0ea6c4b4-7fc2-42b4-b684-53bbe8d5d6a9> | CC-MAIN-2022-40 | https://cioupdate.com/tips-on-disk-to-disk-backup-part-iii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00176.warc.gz | en | 0.950511 | 1,770 | 2.84375 | 3 |
We have been reading stories about countless IoT devices and services that are benefiting people in a wide variety of ways, such as helping deliver parcels via autonomous drones or robots, detecting and pinpointing the location of gunshots to help fight crime, using facial recognition as a credit card, automatically requesting maintenance for soon-to-be-broken city assets, sensing people’s health without visiting a doctor, and cars that drop you off and find parking spots themselves. Yet, when asked which smart city measures have actually been implemented and how today’s technology has helped us make our current urban life more efficient, most will not have a clue.
Probably the most noticeable Smart City trend is the implementation of security cameras in densely populated public spaces. As you have guessed, there is a negative correlation between cameras and crime, which saves cities a lot of money at the end of the day. For example, the crimes prevented in Humboldt Park, Chicago saved the city $4.30 for every dollar spent on the surveillance system. With current developments in artificial intelligence and face recognition systems, 1oT is expecting to see security cameras in public space get even smarter, so if a criminal offence occurs, police get notified automatically.
Another measure that local governments implement is the installation of intelligent street lights. These lights save money by dimming themselves automatically when no one is using the street and use more efficient LED lights, which emit more light while consuming less energy. Street lights can be made intelligent by placing sensors or cameras on them. What makes them even smarter is that these devices can sense what kind of objects are on the street and cast light accordingly (more for pedestrians, less for cars and even less for big trucks). These lights give reliable information about street usage for the cities and local governments.
Another environmentally hot topic is urban waste management. The average American generates 2 kg of trash per day. Traditionally, garbage trucks empty garbage bins without checking whether they are 20% or 100% full. This wastes time and fuel and adds more traffic to the streets. All in all, it increases the ecological footprint. In recent years, more and more cities have set up smart waste management solutions by adding fill-level sensors to their trash cans. These sensors send info to the central system, which automatically generates routes for waste collectors and tells them which bins need to be emptied and which do not.
While saving money and the environment, these systems also give city councils direct big-data analytics input on people’s behaviour, helping them plan their next developments, upgrades to existing infrastructure, and maintenance work.
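As a simplified illustration of how such a system decides which bins need attention, the Python sketch below filters fill-level readings against a threshold and produces a pickup list. The bin IDs, coordinates and 80% threshold are invented for the example and do not correspond to any real deployment.

```python
# Minimal sketch: choose bins for collection based on fill-level readings.
FILL_THRESHOLD = 0.8  # collect bins that are at least 80% full

readings = [
    {"bin_id": "A-101", "location": (59.4370, 24.7536), "fill_level": 0.92},
    {"bin_id": "A-102", "location": (59.4401, 24.7510), "fill_level": 0.35},
    {"bin_id": "B-207", "location": (59.4422, 24.7601), "fill_level": 0.81},
]

pickup_list = [r for r in readings if r["fill_level"] >= FILL_THRESHOLD]

for stop in pickup_list:
    print(f"Collect {stop['bin_id']} at {stop['location']} "
          f"({stop['fill_level']:.0%} full)")
```

A production system would feed this list into a route optimizer and push the resulting route to the collection trucks.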
Sensors are great tools for cities to learn from our urban behaviour without compromising on privacy. One of the most popular smart city projects is smart parking. More and more downtown streets get filled with sensors that help people find available parking spaces via smartphones, reducing the time looking for parking and possibility of initiating a traffic jam. Smart parking solutions also make paying for parking faster or even totally automate it. In turn, it gives valuable data input for the cities on parking demand and traffic flow. For example, the Westminster area in London is filled with parking sensors and an app directs drivers to empty spaces.
The previously listed measures and systems are implemented in many cities across the globe. However, there have also been many city-specific approaches to implementing smart city solutions. One of these solutions is Street Bump. Many cities struggle with bad road conditions, and even if there are means to fix them, it takes a long time for the authorities to locate all the potholes and bad parts of the road. Boston’s solution is an app that lets volunteers collect road data while they’re driving: every time a car hits a pothole or street bump, it gets detected by the phone’s sensors and is sent directly to the city government.
Augmented reality. Constantly talked about in recent years, it has made its way into our cities. Fort Lauderdale in Florida has implemented augmented reality into their city: once tourists download the GoLauderdale app and point their phone towards a landmark, it will automatically give them info about it. Google is also working on augmented reality for city space and is going to implement it via Google Lens soon, as they announced at Google I/O 2017.
Smart traffic lights. The city of Barcelona has installed smart traffic lights that transmit real-time data and regulate traffic flow accordingly. This data is also forwarded to city buses, which can reroute in order to stay on schedule. Furthermore, in case of emergencies, traffic lights will turn green for emergency services. This is one of the greatest deployments of Smart City solutions, as every second matters for emergency services.
Smart Street projects. As making a city “smart” takes a lot of time and finances, it’s logical to start with smaller projects. One example would be Kalaranna SmartStreet in Tallinn, Estonia. This street is filled with sensors that people can check in real time: how many cars and pedestrians used the street in the last hour, what’s the average noise level on the street, what’s the temperature on the street, how full are the trashcans, how much light falls on the street at given moment and how much do the new streetlights save money opposed to other streetlights in the city. Hopefully, pilot projects like this turn out successful and these new systems will be used on a broader scale.
When looking at a more technical level, we see that most of the innovative IoT Smart City systems rely on good old cellular connectivity. It is available anywhere in populated areas, which makes it a reliable option for outdoor use. As current 2G and 3G networks are gradually taken out of service to be replaced by their successor LTE, the 1oT team is advising companies to use LTE cellular chips in their devices. Read more about cellular connectivity for IoT here.
Looking ahead, we see that the future of connecting Smart City IoT devices will be NarrowBand IoT, which is designed for Internet of Things devices that use really small amounts of data. Current cellular networks are not that well optimised for applications that only transmit small amounts of infrequent data and the existing cellular standards do not support power saving capabilities. You can read more about NB IoT here.
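Because NB-IoT is built around very small, infrequent transmissions, device firmware typically packs readings into a few bytes rather than sending verbose text formats. The Python sketch below packs a temperature, fill level and battery reading into a six-byte payload; the field layout and scaling factors are assumptions for illustration, not a standard.

```python
import struct

# Compact 6-byte sensor payload suited to NB-IoT (illustrative layout):
#   device id   -> unsigned 16-bit
#   temperature -> signed 16-bit, in 0.1 degree C steps
#   fill level  -> unsigned 8-bit, percent
#   battery     -> unsigned 8-bit, percent

def encode_reading(device_id, temperature_c, fill_pct, battery_pct):
    return struct.pack(">HhBB", device_id, round(temperature_c * 10),
                       fill_pct, battery_pct)

def decode_reading(payload):
    device_id, temp_tenths, fill_pct, battery_pct = struct.unpack(">HhBB", payload)
    return {"device_id": device_id, "temperature_c": temp_tenths / 10,
            "fill_pct": fill_pct, "battery_pct": battery_pct}

payload = encode_reading(1042, 21.7, 83, 96)
print(len(payload), "bytes:", payload.hex())
print(decode_reading(payload))
```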
As the popularity of Smart City devices increases, the need for device management grows greater. Cities and IoT device makers do not solely wish to connect their sensors and devices to the internet; they need to automatically and continuously control and monitor them. This is where 1oT comes into play. With our hassle-free SIM management platform, called 1oT Terminal, you get to control your SIMs in real time, set up notifications and trigger actions when something goes south, and integrate all functionalities into your own systems and applications.
A sneak peek into our 1oT Terminal is found here. In case of any question, please feel free to contact us at hello[at]1oT.mobi. We would be happy to help! | <urn:uuid:dd0a2c87-b8cd-41ad-9461-0e62c16d06db> | CC-MAIN-2022-40 | https://1ot.com/resources/blog/how-iot-has-transformed-cities-around-us | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00376.warc.gz | en | 0.947134 | 1,422 | 3.296875 | 3 |
Course Summary
In many professional environments today, people work collaboratively in teams. Information technology and applications facilitate this by allowing people to easily share, access, edit, and save information. Microsoft SharePoint is a platform specifically designed to facilitate collaboration, allowing people to use familiar applications and web-based tools to create, access, store, and track documents and data in a central location. In this course, students will learn about and use SharePoint to access, store, share, and collaborate with information and documents. SharePoint is a complex platform with many features and capabilities. A strong understanding of those features and capabilities will allow users to work more efficiently and effectively with SharePoint, and with the documents and data stored in SharePoint. Furthermore, effective use of the Modern UI and Office 365 integrations will streamline tasks and facilitate collaboration with colleagues in other Office 365 and third-party apps.
Note: The skills covered in this course are appropriate both for Site Users who work in environments with SharePoint Online servers and for those using on-premises SharePoint servers in Modern Experience mode. This course covers the comprehensive suite of SharePoint online features and functions, which may go beyond what is available if the production environment is limited to SharePoint 2019 servers. How the environment is customized and configured will also affect how production sites compare to the sample sites shown in class.
Who This Course Is For
This course is designed for Microsoft Windows and Microsoft Office users who are transitioning to a SharePoint environment, and who need to access information from and collaborate with team members within Microsoft SharePoint (using either a Microsoft SharePoint Online or a Microsoft SharePoint 2019 server).
To ensure success in this course, students should have basic end-user skills with a current version of Microsoft Windows for the desktop and any current version of Microsoft Office desktop software, plus basic competence with Internet browsing.
In this course, students will effectively utilize resources on a typical SharePoint team and communication sites while performing normal business tasks. Upon completion, students will be able to:
- Interact with SharePoint sites.
- Work with documents, content, and lists.
- Share, follow, and collaborate on content.
- Interact with Office 365 files via SharePoint.
- Manage Office 365 apps with SharePoint.
Outline: Microsoft SharePoint Modern Experience: Site User (91095)
Module 1: Interacting with SharePoint Sites
- Access SharePoint Sites
- Navigate a SharePoint Site
- Access SharePoint from Your Mobile Device
Module 2: Working with Documents, Content, and Lists
- Store, Access, and Modify Documents and Files
- Add and Populate Lists
- Configure List Views, Filters, and Grouping
Module 3: Searching, Sharing, and Following Content
- Configure Your Delve Profile
- Share and Follow Content
- Search for Content
Module 4: Interacting with Office 365 Files
- Synchronize SharePoint Files with OneDrive
- Save and Share Office 365 Documents
- Manage File Versions and Document Recovery
Module 5: Managing Office 365 Apps with SharePoint
- Manage Microsoft Outlook with SharePoint
- Manage Microsoft Teams with SharePoint
- Manage Tasks with Planner and SharePoint | <urn:uuid:401edede-dfc0-4ec0-884e-6e4dc7135952> | CC-MAIN-2022-40 | https://www.fastlanetraining.ca/fr/course/microsoft-91095 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00376.warc.gz | en | 0.8452 | 688 | 2.796875 | 3 |
The Obama administration is unveiling a series of new efforts aimed at boosting computer science education at the K-12 level, including funding commitments from tech companies and philanthropic organizations and pledges from dozens of school districts to expand their curriculum offerings.
Monday marked the beginning of Computer Science Education Week, an occasion the White House observed by hosting an event that included 20 middle-school students from Newark, N.J., on hand to participate in an “Hour of Code” session, an instructional computer-science initiative of the nonprofit group Code.org.
President Obama, First Coder-in-Chief
“This drives home the point that anybody can learn,” joked Code.org co-founder Hadi Partovi. “President Barack Obama is our first coder-in-chief.”
White House Aims to Advance Computer Science Education
The White House touted Monday’s events and the announcements to advance computer science education as an important step forward for an increasingly critical discipline, but one that is still neglected in many K-12 schools.
According to a fact sheet provided by the White House, a significant majority of K-12 schools offers no programming classes, and exactly half of the states don’t credit computer science coursework toward the math and science requirements for a high school diploma.
At the same time, the administration stresses the importance of computer science education in an increasingly tech-oriented economy, citing an estimate that, by 2020, more than half of all jobs in the STEM fields of science, technology, engineering and math will be found in areas related to computer science. The administration also points to a projected shortfall in qualified workers in computer science, anticipating that if current trends hold, 1.4 million computer science jobs will be created in the coming 10 years, but only 400,000 students will graduate with a degree in that field and the requisite skills to fill those positions.
The spate of announcements the White House made on Monday aims to help bridge that gap, including word that more than 60 school districts (among them the seven largest in the country) have pledged to incorporate computer science classes into their course offerings.
Additionally, the administration announced philanthropic commitments in excess of $20 million from the likes of Google, Microsoft, Salesforce.com and several wealthy individuals to fund instruction programs to prepare elementary school teachers to incorporate computer science units in their classes. Code.org, which will host those workshops, aims to reach 1,000 elementary teachers each month, and is committing to provide instruction to an additional 1,000 middle-school and high-school teachers each year.
“A key part of this is the creation of scalable models for professional development for teachers,” says France Córdova, director of the National Science Foundation (NSF), which is to play a key role in the administration’s work to advance computer science education.
The NSF has been working under the long-term goals it set in the CS 10K Project, through which it is seeking to establish rigorous computer science courses in 10,000 schools around the country, taught by 10,000 well-qualified teachers, with the ultimate goal of making CS education available in every school.
Under the auspices of that program, the NSF has announced partnerships with a bevy of nonprofit organizations, including the College Board, which is piloting an advanced placement course called Computer Science Principles that seeks to marry the traditional programming fundamentals with the more creative aspects of the discipline.
In part, that approach supports another key pillar of Monday’s announcement — to broaden interest in computer science among women and other groups that are underrepresented in the discipline.
Classroom Design Matters
One of the new efforts to broaden diversity in computer science courses involves a contest that the USA Science and Engineering Festival is holding to solicit novel design ideas for computer science classrooms, building on research that has indicated a connection between classroom layout and the participation of young women.
“Turns out that the design of the classroom affects how welcoming it seems to girls and women who are thinking about whether or not they’re going to do computer science,” says John Holdren, director of the White House Office of Science and Technology Policy.
“What’s around you makes you feel like you belong or not,” adds U.S. CTO Megan Smith, who says the contest seeks to develop new models for “inclusive physical spaces” in schools. | <urn:uuid:c7c68bdf-63ae-4b48-bcef-c6c5c9c4c44e> | CC-MAIN-2022-40 | https://www.cio.com/article/250899/new-programs-aim-to-boost-computer-science-education.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00376.warc.gz | en | 0.950818 | 952 | 2.6875 | 3 |
The Kaseya ransomware attack that took place on July 2nd has sent shockwaves throughout the cybersecurity industry and throughout the American government. It is estimated that between 800 and 1,500 companies fell victim to the attack. The FBI has described the attack as a "supply chain ransomware attack leveraging a vulnerability in Kaseya VSA software against multiple MSPs and their customers."
In response to the ransomware attack, the White House recommended businesses adopt a zero-trust security model to prevent future attacks from occurring. The zero trust model helps businesses strengthen vulnerabilities in their security architecture to prevent bad actors from infiltrating a company.
The White House refers to zero trust as “Zero Trust Architecture”; it is a security model in which a company recognizes that threats to the organization exist both internally and externally and therefore places no implicit trust in any part of the security infrastructure. Zero trust is a “never trust, always verify” approach to security, and it uses real-time data to determine access and system responses.
With zero trust, endpoints and users are not automatically trusted. This helps directly combat internal threats and people with rogue credentials who want to gain access into a company. Adopting zero trust exponentially strengthens a company’s security because nearly 80% of all attacks involve credential misuse and abuse. The real-time visibility provided by zero trust helps companies recognize and mitigate security risks internally and externally.
Rigorous authentication: Many companies fall victim to attacks because they have weak or nonexistent authentication systems. With zero trust, all users will have to authenticate and verify their credentials when they access any company resource. When a user accesses a file, device, or application, they will be required to re-authenticate.
Limitation of user privileges: One of the reasons that cyberattacks are devastating is because a hacker can gain access to an entry-level employee’s credentials and use that to access sensitive company data. With zero trust, users’ privileges will only be limited to what they need to perform their job. Any additional access and privileges will be automatically removed to mitigate risks.
Logging: Zero trust has logging built into its security architecture. This means that all file accesses, network calls, emails, and other actions are monitored. When an attack or threat occurs, the company will have all the details needed to track down who is responsible and the methods they used to infiltrate systems. This makes it much easier for companies to detect compromised accounts and act quickly.
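The three pillars above come together in a single access-decision path: verify the credential on every request, check that the action falls within the user’s least-privilege role, and log the outcome either way. The Python sketch below is a deliberately simplified illustration of that flow; the roles, permissions and token check are hypothetical stand-ins, not a production policy engine.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zero-trust")

# Hypothetical least-privilege map: each role gets only what the job requires.
ROLE_PERMISSIONS = {
    "accounts-payable": {"invoices:read", "invoices:write"},
    "help-desk": {"tickets:read", "tickets:write", "invoices:read"},
}

def is_token_valid(token):
    # Placeholder for real verification (MFA, signature, expiry, device posture).
    return token == "valid-demo-token"

def authorize(user, role, token, action):
    """Re-authenticate and re-authorize on every request; log every decision."""
    now = datetime.now(timezone.utc).isoformat()
    if not is_token_valid(token):
        log.warning("%s DENY %s by %s: invalid credential", now, action, user)
        return False
    if action not in ROLE_PERMISSIONS.get(role, set()):
        log.warning("%s DENY %s by %s: outside role %s", now, action, user, role)
        return False
    log.info("%s ALLOW %s by %s", now, action, user)
    return True

authorize("jsmith", "help-desk", "valid-demo-token", "tickets:write")   # allowed
authorize("jsmith", "help-desk", "valid-demo-token", "invoices:write")  # denied and logged
```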
To learn more about how you can use zero trust to improve your cybersecurity, talk to an expert here.
Small businesses are some of the most targeted victims when it comes to cybersecurity attacks.... | <urn:uuid:1a7b5c44-1162-4e66-be42-cc2206f14169> | CC-MAIN-2022-40 | https://blog.christoit.com/zero-trust-a-response-to-the-kaseya-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00376.warc.gz | en | 0.952792 | 590 | 2.578125 | 3 |
Server Types Defined: 5 Different Types of Servers and Their Roles
If your small business doesn’t already have a server, you may wonder whether you really need one. The answer depends on a number of factors that are unique to your business. If your company has a growing remote workforce and needs to secure access and additional storage for email, that’s a good use case for getting a server. This concept is referred to as defining a server’s role: a single server’s dedicated job of handling certain large data sets or workloads. The process can be compared to hiring someone for a new role in a busy department. Before making that hefty salary investment, you evaluate the department’s bandwidth to see if resources can be reallocated elsewhere in your organization. If not, you determine what exact skillset is needed now and in the future for that department’s success and find it!
5 Different Types of Servers Explained
Nowadays, plenty of things can be done in the cloud, but a dedicated server allows you to have more control of your own data and workflows. Let’s take a look at the 5 most common roles small businesses assign to their servers.
In server virtualization, a software application is used to divide a single physical server into multiple unique, individualized virtual servers. Each of those virtual servers can run its own independent operating system. A non-virtualized dedicated server typically only uses around 15% of its resources during normal operation. So the key advantage of server virtualization is that you can fully utilize your physical resources without investing in more hardware.
Within the virtual server environment, there are three types of virtualization:
- Full virtualization: Through the use of a hypervisor (software that communicates directly with a physical server’s disk space and CPU), virtual servers are kept separate and the physical server’s resources are monitored and relayed to virtual servers as needed.
- Paravirtual machine model: Instead of virtual units operating individually, the entire network works together as a single cohesive unit.
- Virtualization at the OS level: No hypervisor is required, and all the servers run on the same operating system as the physical server.
Active Directory Server
Active Directory (AD) is a proprietary directory service developed by Microsoft that runs on Windows Server. The Active Directory Domain Services (AD DS) stores all the directories and manages the interaction between the user and the domain. It also verifies access when a user signs in and controls which users have access to specific resources. It even manages group policies to provide different positions within the company with different levels of access.
AD has many security benefits and provides a single point from which the network administrators can manage and secure their resources. The three main benefits of AD DS are:
- Resources and security administration are centralized.
- A single sign-on provides access to network resources anywhere on the server.
- Resource location is simplified and users can easily configure the scope of their searches.
In a multiple-server environment, AD can improve productivity by making it faster and easier to locate necessary resources.
A file server offers an accessible but secure storage place for files. As the central server in a computer network, it provides file access (or parts of them) to authorized, connected users. In this environment, the server administrator creates the rules regarding the specific access rights of users. This means determining which files can be seen or opened by certain users or user groups and defining which data can only be viewed, added, edited or deleted.
When connected to the internet, file servers allow users to:
- Access files both via the local network and through remote access.
- Store and manage files according to their access levels.
- Use the file server as a storehouse for programs that need to be accessed by multiple network participants.
- Backup important files and data to the server.
An exchange server is another Microsoft product that is both a mail server and a calendar server. It only runs on the Windows Operating System and is an effective way to improve productivity by sharing multiple calendars, scheduling meetings between users in different locations and allowing users to move onto the cloud when the time is right. With an exchange server, employees can securely access services like emails, voice mails, instant messaging, video calls and texts from the computing device of their choice, regardless of their location.
The web server is the most common type of server found in businesses today. A web server environment is composed of both software and hardware that leverages HTTP (hypertext transfer protocol) and other protocols to respond to client requests made over the world wide web. Its main job is to deliver web pages that are requested by users on the internet or via an intranet. When a user looks for content on the web server, the browser requests it over HTTP using the server’s unique domain name; the server software accepts that request, finds the content and sends it back to the browser to display for the user.
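That request/response loop is easy to see with Python’s built-in HTTP server. The minimal sketch below serves a single page to any browser that asks for it; real web servers such as Apache, Nginx or IIS add much more (TLS, virtual hosts, caching), but the underlying HTTP exchange is the same in principle.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Build and return the response for any GET request the browser sends.
        body = b"<html><body><h1>Hello from a tiny web server</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost port 8000 until interrupted.
    HTTPServer(("127.0.0.1", 8000), HelloHandler).serve_forever()
```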
Choosing the Right Server for Your Business
To choose the right server for your business needs, you need to consider which applications you plan to use then research the server specs to see which ones best fit your needs. Aventis Systems can help you choose the one that is best for your business needs and can even help you build your own server to provide the customization needed to suit your daily operations. With Aventis Systems’
Custom Server Builder, you can get the server configuration perfectly tailored to meet the needs of your business applications and storage requirements. If you have any questions about which server type is right for your business, or how to use the Custom Server Builder, you can give us a call at 1.855.AVENTIS to speak with an IT expert! | <urn:uuid:0419786d-0043-45de-ad95-3af0ca71615d> | CC-MAIN-2022-40 | https://www.aventissystems.com/blog-smb-5-different-types-of-servers-explained-s/610058.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00376.warc.gz | en | 0.920925 | 1,199 | 2.921875 | 3 |
When the data requirements of an organization change, the databases used to store the data must also change. If the data is not reliable and available, the system does not serve the business—rather, it threatens the health of the business. So, we need infallible techniques to manage database changes—and we need techniques that are not just fail-safe but also automated, efficient, and easy to use. Unfortunately, today’s database systems do not make managing database change particularly easy.
Relational databases are created using Data Definition Language (DDL) statements: CREATE, DROP, and ALTER. The CREATE statement is used to create a database object initially, and the DROP statement is used to remove a database object from the system. The ALTER statement is used to make changes to database objects. But many aspects of database objects cannot be changed by using the ALTER statement, instead requiring a complex sequence of DROP and CREATE statements to achieve.
Further exacerbating the problem, the exact specifications for what can and cannot be changed using ALTER differs from DBMS to DBMS. Some of the actions that are most likely to not be supported by ALTER include: moving a table or table space to another database; moving a table from one table space to another; rearranging the order of columns in a table; modifying the data type and length of a column; removing columns from a table; changing the definition of a primary key or a foreign key; adding a column to a table that cannot be null; modifying the columns in a view; changing the columns of an index; and modifying the contents of a trigger.
Database change management can be complicated and therefore requires knowledgeable DBA staff and robust system management software to be handled appropriately.
In some limited cases, it is possible to use ALTER to change the length of certain types of columns. For example, in DB2 and Oracle you can alter a character column to a larger size, but not to a smaller size. Additionally, it may be possible to change a column from one numeric data type to another. DB2 allows the modification of a column’s data type, as long as the change is within the same data type family (numeric to numeric, character to character, or datetime to datetime). For example, it is legal to change a column from SMALLINT to INTEGER using ALTER, but not from SMALLINT to DATE. In general, though, significant changes to the data type and length of a column usually require the table to be dropped and recreated with the new data type and length.
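To make the drop-and-recreate pattern concrete, here is a sketch of a column data-type change that ALTER typically cannot perform directly. It uses Python’s built-in SQLite driver purely for illustration; the exact DDL and the ALTER limitations differ from DBMS to DBMS, so treat the statements as a generic pattern rather than vendor-specific syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Original table: account_code was created as an integer.
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, account_code INTEGER)")
cur.executemany("INSERT INTO accounts (account_code) VALUES (?)", [(101,), (205,)])

# Generic drop-and-recreate sequence for a change ALTER cannot make directly:
# create the new structure, copy and convert the data, drop the old table,
# then rename the new table into place.
cur.execute("CREATE TABLE accounts_new (id INTEGER PRIMARY KEY, account_code TEXT)")
cur.execute("INSERT INTO accounts_new (id, account_code) "
            "SELECT id, CAST(account_code AS TEXT) FROM accounts")
cur.execute("DROP TABLE accounts")
cur.execute("ALTER TABLE accounts_new RENAME TO accounts")
conn.commit()

print(cur.execute("SELECT * FROM accounts").fetchall())
```

In production, the same sequence also has to carry over indexes, constraints, privileges, and dependent objects, which is exactly the bookkeeping that change management tools automate.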
When making changes to a database requires an object to be dropped and recreated, the DBA must cope with the cascading DROP effect. A cascading DROP refers to the effect that occurs when a higher-level database object is dropped: All lower-level database objects are also dropped. Thus, if you drop a database, all objects defined in that database are also dropped. The cascading DROP effect complicates the job of changing a database schema.
Making physical changes to actual database objects is merely one aspect of database change. Myriad tasks require the DBA to modify and migrate database structures. One daunting challenge is to keep test databases synchronized and available for application program testing. The DBA must develop robust procedures for creating new test environments by duplicating a master testing structure. Furthermore, the DBA may need to create scripts to set up the database in a specific way before each test run. Once the scripts are created, they can be turned over to the application developers to run as needed.
Another challenge is recovery from a database change that was improperly specified, or backing off a migration to a prior point in time. These tasks are much more complicated and require knowledge of the database environment both before and after the change or migration.
These challenges justify the existence of database change management tools that streamline and automate database change management. This type of tool manages the change process and enables the DBA to simply point and click to specify a change. The tool then handles all of the details of how to make the change. Such a tool removes from the shoulders of the DBA the burden of ensuring that a change to a database object does not cause other implicit changes. Database change management tools reduce the amount of time, effort, and human error involved in managing database change.
Of course, this discussion of database change management has been brief. A thorough investigation of the topic would reveal many additional nuances, such as metadata management, the integration of database and application changes, regulatory compliance requirements, managing database change requests, the impact of change on recovery management, database comparison, and so on. The bottom line is that database change management can be complicated and therefore requires knowledgeable DBA staff and robust system management software to be handled appropriately.
Craig S. Mullins is president of Mullins Consulting, Inc. He’s an IBM Gold Consultant and the author of two best-selling books, DB2 Developer’s Guide and Database Administration: The Complete Guide to DBA Practices & Procedures. Website: www.mullinsconsulting.com. | <urn:uuid:ca20bf15-4f5f-4b4a-9566-3fffa93a92a6> | CC-MAIN-2022-40 | https://www.dbta.com/Columns/DBA-Corner/The-Impact-of-Change-on-Database-Structures-101931.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00376.warc.gz | en | 0.877702 | 1,052 | 3.109375 | 3 |
Predictive analytics, the art and science of ferreting out actionable insights from data, has been in vogue for some time now. Predictive analytics combines multiple techniques along its process of data exploration, data wrangling, model development, and validation. Open source and proprietary tools are both available for the various tasks involved in predictive analytics. Predictive analytics provides insights from past data that can be used for future initiatives.
What are the components of predictive analytics?
Statistical techniques form an important and significant part of predictive analytics, especially in data exploration. A new or unknown dataset should ideally be explored first. In the process of exploration, we may find missing elements in a dataset or some likely anomalous entries. We may also find different types of data (ratio scale, categorical, images, etc.) present in raw form with different orders of magnitude. Thus data wrangling becomes an essential part of the exploration phase of predictive analytics. This could include imputation of missing values, normalization of ratio-scale data, and encoding of categorical data, among other steps. Data visualization techniques are very handy for visualizing a dataset after data wrangling. The data visualization phase not only helps us understand broad trends but also helps us develop a priori hypotheses.
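A typical wrangling pass of that kind, sketched with pandas and scikit-learn on a tiny made-up dataset, might look like the following (the column names and values are purely illustrative):

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Tiny illustrative dataset: a missing value, mixed scales, and a category.
df = pd.DataFrame({
    "age": [34, 51, None, 29],
    "income": [42_000, 125_000, 87_500, 39_000],
    "segment": ["retail", "corporate", "retail", "sme"],
})

# Impute missing ratio-scale values, then normalize their ranges.
num_cols = ["age", "income"]
df[num_cols] = SimpleImputer(strategy="median").fit_transform(df[num_cols])
df[num_cols] = StandardScaler().fit_transform(df[num_cols])

# Encode the categorical column as indicator variables.
df = pd.get_dummies(df, columns=["segment"])

print(df.round(2))
```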
The second important aspect of predictive analytics is to build a model or develop an algorithm after the data exploration phase. In this phase, both statistical techniques and computer science-based machine learning and deep learning algorithms are used, either individually or sometimes in combination. In many cases, predictive analytics is used to do one of three things: explore, classify, or predict. In classification approaches, achieving linear separation is ideal but not always straightforward. Techniques such as decision trees, random forests, logistic regression, and support vector machines are normally used, depending on the type of data and the problems being inquired into. The figure below indicates the broad classification of the types of approaches under machine learning.
Data exploration, as already discussed, is normally considered unsupervised, as we do not have a target class or target variable in mind while exploring the data. The model development phase involves either prediction or classification and is classified under supervised learning. In prediction approaches, the objective is normally to predict a target variable; regression techniques such as linear, polynomial, and logistic regression, among others, are used for the purpose.
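As a minimal supervised-learning sketch (synthetic data, default hyperparameters), the snippet below fits one of the classifiers mentioned above and reports hold-out accuracy; swapping in a decision tree, random forest or support vector machine is a one-line change.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic two-class dataset standing in for wrangled business data.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```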
The ratio scale and categorical data can be explored and modeled using the above-mentioned techniques, however, images would need different techniques to analyze. Deep Learning approaches such as Convolutional Neural Network (CNN) and Capsule Nets are used for developing insights from image data.
Applications of predictive analytics
So far we have seen the use of statistics and the use of computer science based algorithms in predictive analytics. However without including the business side predictive analytics would not be complete. What are some applications of predictive analytics that have become common now? Let us consider some of them here:
Traditional supply chain models seem to focus on distribution efficiencies and largely employ spreadsheet based dashboards for monitoring vendors and inventory amongst others. However, the interrelatedness of various aspects of supply chain is quite complex to be monitored by spreadsheets. Ideally organizations should be able to run ‘What If Analysis’, have a data based approach for managing trade-offs amongst customer service, inventory, and supply chain costs, and lastly be able to model profitability. As we can see all of these point towards an organization becoming competitive. The machine learning techniques mentioned before would allow for such analyses to be done.
What would interest a marketer? Very likely the following — knowing her customer’s purchases, types of products purchased, value and volume of purchases, the geographical distribution of these purchases, and which customers to target for her future campaigns. All of these form part of the customer segmentation process. Machine learning techniques are very appropriate for such a purpose.
What would a credit card company be interested in? Probably knowing its customers, as mentioned before; in addition, it would also like to know about likely misuse of its cards. Credit card misuse probably occurs a few times in a million transactions. How does one pick out this misuse, or these anomalies, and how does one alert the bank and its customers? Machine learning techniques could be put to productive use to pick out such anomalies.
Predictive analytics is unlikely to solve all the problems a company or an organization faces. However, if an organization bases its decisions on data and insights from data, it is only likely to become more competitive in its activities.
Researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have created a device that allows them to “see” what a person is doing and track his or her movement even if that person is located behind a wall, and does not hold or wear any other device that would enable tracking.
The device, called RF Capture, transmits specific radio frequency (RF) signals aimed at the wall. The signals pass through the wall, reflect off a person’s body, and return to the device, which then uses them to create a silhouette of the person and track its movements. The researchers have developed a series of algorithms that minimize the noise, so that the output is clearer.
But the device can also, with a high degree of certainty (nearly 90 percent), distinguish between different individuals.
Depending on the person’s movements, only some of their body parts reflect the wireless signals back to the device. The device is able to monitor how these reflections change as a person moves, and over time it creates a single image of the person’s silhouette – a “silhouette fingerprint”, if you will.
The researchers were thus able to “train” the device to distinguish between different subjects.
The researchers have thought out of different ways to use this technology – in gaming, movie production, smart homes.
“We’re working to turn this technology into an in-home device (Emerald) that can call 911 if it detects that a family member has fallen unconscious. You could also imagine it being used to operate your lights and TVs, or to adjust your heating by monitoring where you are in the house,” Dina Katabi, an MIT professor involved in the research and co-author of the paper explained. | <urn:uuid:6bc7d4d0-6086-4dac-a40e-bf9770625b28> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2015/11/02/researchers-can-identify-people-through-walls-by-using-wireless-signals/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00576.warc.gz | en | 0.960122 | 386 | 3.21875 | 3 |
Cambridge Researcher Proposes ‘S-Money’ Based on Quantum Theory
(Computing.uk) Researchers at the University of Cambridge have proposed a theoretical form of virtual money called “S-Money” that would be highly secure, fast to transfer, and could also enable financial transactions on a galactic scale. The idea of S-Money was inspired by quantum theory and the theory of relativity and is the brainchild of Professor Adrian Kent of Cambridge University’s Department of Applied Mathematics and Theoretical Physics, the lead author of the study.
S-money refers to virtual tokens designed for high-value fast transactions. It proposes the creation of a financial network in which ultra-secure virtual tokens would appear at just the right place and at right time. This new approach would also protect funds from potential attacks by quantum computers in the future. Moreover, it would enable funds to be transferred over essentially any distance, even across the solar system and beyond, Kent suggested, at some point in the far future. | <urn:uuid:37aeffdc-5e7e-42bf-91ba-825aa3ec5b6d> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/cambridge-researcher-proposes-s-money-based-quantum-theory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00576.warc.gz | en | 0.945925 | 208 | 2.75 | 3 |
The NBA recently opted to form a “bubble” concept in Disney’s facilities. Although it has been a relative success, it has also been a USD 200 million temporary solution. This begs the question: How long can football stadiums (and football) survive like this without spectators present?
History tells us that stadiums, venues and football recover from disasters, so what can stadiums do to speed up the process? This is quite possibly going to be the catalyst for AI to be integrated on mass level to football stadiums around the world.
Artificial Intelligence in the aftermath of Covid-19
The role of AI in getting fans and spectators back is huge, through capabilities such as:
- Social distance monitoring
- Crowd scanning/metrics
- Facial recognition
- Fever detection
- Track & trace
- Providing behavioural analytics
AI solutions are already being rolled out and are now utilised by national leagues, professional football clubs and governing bodies. AI is providing cost-effective security solutions in the aftermath of COVID-19: football clubs and stadiums alike are utilising their current surveillance cameras and simply implementing AI surveillance software.
This is now creating a more collaborative effort from the operations team in stadiums, rather than purely security. AI surveillance software when implemented into the surveillance cameras can be accessed by designated users on any device and on any browser platform, making stadium operations more fluid, streamlined and prepared.
Crowd metrics and track & trace
Equipping stadiums with AI-powered surveillance tools makes it possible to detect crowd metrics such as “people counting” and group statistics. This ensures stadium personnel can monitor social distancing precisely, accurately and immediately. Alerts can be set up throughout parts of the stadium to notify senior staff members when overcrowding appears, with real-time videos, analytics and photos sent to their hand-held devices, such as smartphones. Thermal cameras have been implemented throughout facilities, including stadiums, and are helping to spot people with elevated temperatures.
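At its simplest, the alerting side of those crowd metrics is a thresholding step layered on top of the per-zone people counts produced by the vision software. The Python sketch below shows the kind of check that would notify operations staff; the zone names, capacities and counts are invented for illustration.

```python
# Illustrative overcrowding check on per-zone people counts.
ZONE_CAPACITY = {"north-concourse": 400, "gate-c": 150, "club-lounge": 120}

def overcrowding_alerts(counts, max_occupancy_ratio=0.5):
    """Return zones whose current count exceeds the allowed share of capacity."""
    alerts = []
    for zone, count in counts.items():
        limit = ZONE_CAPACITY[zone] * max_occupancy_ratio
        if count > limit:
            alerts.append((zone, count, int(limit)))
    return alerts

latest_counts = {"north-concourse": 180, "gate-c": 95, "club-lounge": 40}
for zone, count, limit in overcrowding_alerts(latest_counts):
    print(f"ALERT: {zone} has {count} people (limit {limit}) - notify operations")
```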
With several games seen as “super-spreading events” last season, it is imperative stadiums now have the ability to track and trace. AI is now powering stadiums with the ability to track individuals in real-time and after the event has concluded through innovative practices like “searchveillance”.
Pandemic monitoring by facial recognition
Through facial recognition, staff members will be able to locate individuals through simply uploading a photo. It has never been easier to find a person of interest. With masks becoming an everyday part of society, facial recognition has come under scrutiny regarding the accuracy when a mask is worn. AI surveillance software organisations are still maintaining a 96% accuracy with individuals wearing masks and can set up alerts for any individuals not wearing a mask.
Another important aspect of facial recognition is finding persons of interest quickly through the previously mentioned practice of “searchveillance”. The future is here: designated staff can track a person from the moment they enter the stadium by simply uploading their photograph. One example of how this can assist stadium personnel is reuniting lost children with their guardians/parents when they are separated inside the stadium. Another is that any individual banned from entering the stadium would trigger an alert once they appear under surveillance, a fantastic collaborative tool to use with law enforcement.
Return on investment
One of the biggest issues with any security investment is the lack of a clear ROI. This is where AI security is breaking the mould. The ability to provide business analytics, consumer/fan behaviours, traffic patterns, etc., allows other departments within the organisation to gain vital information that can assist with strategies and practices.
Stadium operations will never be the same in a post-COVID world, so why will its practices stay the same? AI in stadiums is no longer the future: it’s already a reality in 2020. | <urn:uuid:d889c3bf-c979-4ac2-95cc-04203bb7f9bb> | CC-MAIN-2022-40 | https://irex.ai/blog/tpost/1tizamf341-stadiums-and-ai-in-the-aftermath-of-covi | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00576.warc.gz | en | 0.949512 | 779 | 2.546875 | 3 |
Booting servers from data held on a storage area network (SAN) is becoming an increasingly popular alternative to the traditional process of booting servers from their own internal disks. To understand why, it’s helpful to analyze the pros (and cons) of boot from SAN (BfSAN).
But first, a very simplified overview of how the boot process works: During a local boot, information in a server’s BIOS points to the local boot disk, from where the operating system is loaded. BfSAN is slightly different, because the boot disk, as the name suggests, is on the SAN. In this case, the server relies on the BIOS of its host bus adapter (HBA) (or iSCSI adapter in the case of an iSCSI SAN) to find the boot device on the SAN, from where the operating system can be loaded.
So what are the benefits of Boot from SAN?
Less power, less heat, less state – Removing internal hard drives from servers means they consume less power and generate less heat. That means they can be packed more densely, and the need for localized cooling is reduced. And without local storage the servers effectively become “stateless” compute resources which can be pulled and replaced without having to worry about the data stored locally.
Less server capex – Boot from SAN enables organizations to purchase less expensive diskless servers. Further savings can be made through reduced storage controller costs, although servers still need bootable HBAs.
More efficient use of storage – Whatever the footprint of a server’s operating system, it will always be over-provisioned in terms of internal storage to accommodate it. Using BfSAN the boot device can be configured to match the capacity it requires. That means a large number of servers running a range of operating systems can boot from a far smaller number of physical disks.
High availability – Spinning hard drives with moving internal components are disasters waiting to happen in terms of reliability, so removing reliance on internal hard drives guarantees higher server availability. The servers still rely on hard drives, but SAN storage arrays are much more robust and reliable, with far more redundancy built in to ensure that servers can boot.
Rapid disaster recovery – Data, including boot information, can easily be replicated from one SAN at a primary site to another SAN at a remote disaster recovery site. That means that in the event of a failure, servers should be up and running at the remote site very rapidly indeed.
Lower opex through more centralized server management – BfSAN provides the opportunity for greatly simplified management of operating system patching and upgrades. For example, upgraded operating system images can be prepared and cloned on the SAN, and then individual servers can be stopped, pointed to their new boot images, and rebooted, with very little downtime (a sketch of this workflow follows this list of benefits). New hardware can also be brought up from SAN-based images without the need for any Ethernet networking requirements, and LUNs can be cloned and used to test upgrades, service packs and other patches or to troubleshoot applications.
Better performance – In some circumstances the rapidly spinning, high performance disks in a SAN may provide better operating performance than is available on a lower performance local disk.
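To make the centralized-management benefit above more concrete, the sketch below lays out the clone-repoint-reboot sequence as orchestration logic in Python. Every helper it calls (clone_lun, set_boot_lun, and so on) is a hypothetical placeholder for vendor-specific array, HBA and server-management tooling; the point is the order of operations, not a real API.

```python
# Hypothetical orchestration of an OS upgrade rollout with Boot from SAN.
# None of these helpers are real vendor APIs; they mark where storage-array,
# HBA and out-of-band management tools would actually be called.

def clone_lun(source_lun, target_lun):   # array-side snapshot/clone
    print(f"clone {source_lun} -> {target_lun}")

def shutdown_server(server):             # out-of-band power control
    print(f"shut down {server}")

def set_boot_lun(server, lun):           # HBA BIOS / boot-target update
    print(f"point {server} at boot LUN {lun}")

def power_on(server):
    print(f"power on {server}")

GOLDEN_IMAGE_LUN = "lun-golden-os-patched"

def roll_out_upgrade(servers):
    for server in servers:
        new_boot_lun = f"lun-boot-{server}"
        clone_lun(GOLDEN_IMAGE_LUN, new_boot_lun)  # per-server copy of the image
        shutdown_server(server)
        set_boot_lun(server, new_boot_lun)
        power_on(server)                           # server boots the new image

roll_out_upgrade(["app01", "app02"])
```

Staggering the power-on calls, rather than booting every server at once, also sidesteps the boot-overload concern discussed in the drawbacks below.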
As always, there are some drawbacks to Boot from SAN which have to be weighed against the benefits just described. These include:
Compatibility problems – Some operating systems, systems BIOSes and especially HBAs may not support Boot from SAN. Upgrading these components may change the economics in favor of local boot.
Single point of failure – If a server hard drive fails then the system will be unable to boot, but if a SAN or its fabric experience major problems then no servers may be able to boot. Although the likelihood of this happening is relatively small because of the built-in redundancy in most SAN systems, it is nevertheless worth considering.
Boot overload potential – If a large number of servers try to boot at the same time — after a power failure, for example — this may overwhelm the fabric connection. In these circumstances, booting may be delayed or, if timeouts occur, some servers may fail to boot completely. This can be prevented by ensuring that boot LUNs are distributed across as many storage controllers as possible and that individual fabric connections are never loaded beyond vendor limits.
Boot dependencies – In some server environments, systems may depend on Active Directory (AD) servers, which may not be available after a power failure. To mitigate this risk it may be necessary to allow one of more AD servers to boot from local disks before the rest of the environment is booted from the SAN.
Configuration issues – Diskless servers can easily be pulled and replaced, but their HBAs have to be configured to point to their SAN-based boot devices before they boot. Unexpected problems can occur if a hot-swappable HBA is replaced in a running server: unless the HBA is configured for Boot from SAN the server will continue to run but fail to boot the next time it is restarted.
LUN presentation problems – Depending on your hardware, you may find that some servers can only BfSAN from a specific LUN number. If that’s the case, then you will need to have some mechanism in place to present the unique LUN that you use to boot a given server as the LUN it (and other similar servers) expects to see.
Additional complexity – There is no doubt that BfSAN is more complex than conventional local booting, and that can add an element of operational risk. As IT staff get accustomed to the procedure, however, this risk should diminish. But the potential for problems in the early stages of Boot from SAN adoption should not be discounted.
For detailed technical instructions on booting from Fibre Channel SAN in a Microsoft Windows environment, see this document http://www.microsoft.com/download/en/details.aspx?id=2815
Paul Rubens has been covering IT security for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch. | <urn:uuid:ae1e3fc3-0515-4562-9fcb-d0d0de0fb46c> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/guides/the-pros-cons-of-booting-from-san/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00576.warc.gz | en | 0.931443 | 1,252 | 2.8125 | 3 |
Protecting Data Requires a New Level of Resolution
At Cyberhaven, we enable organizations to directly protect the information that is the most valuable to them regardless of what the content is or how it is used. In short, organizations can now greatly extend their visibility into data risk and enforce real-time policies to mitigate that risk and prevent loss.
This is made possible by new advancements in data flow tracing and graph analysis that follow the full history and interaction of all data in the enterprise. Building a complete risk-based context for every piece of data in an enterprise is a profound undertaking, and in this series, we are taking a closer look at how this is done.
In our first installment of the series, we covered the basics of what graph analysis is and how it can be used to solve security problems in completely new ways. In Part 2, we will start to dig deeper into Cyberhaven technology and how new types of analysis are being fueled by a new ultra-high-definition graph.
Building a New Source of Truth
There is a well-worn trope in movies and TV shows like NCIS, in which an investigator squints at a blurry surveillance image or video and utters the magic word…enhance. The view zooms multiple times, somehow becoming clearer in the process, and the all-important clue is revealed. This, of course, is not at all the way things work in the real world. While there are always inventive ways that we can reinterpret data or see it from another angle, eventually what we can know, and know conclusively, is constrained by the source data we have. It’s why scientists build evermore sensitive instruments and spend billions of dollars on things like the James Webb Telescope. If we are going to get better answers, we don’t just need more data, we need better data.
This is very true when it comes to security as well. While graph analysis of data flow can be a powerful tool, the analysis will always be constrained by the data that we have. So in order to protect data, we need an incredibly detailed and complete graph that captures and connects every movement or action performed on a piece of data. This can include every time a user edits a file, data is copy-pasted from one file to another, shared across applications, transformed from one format to another, and so on. In each case, we need to know exactly what data is being affected and to follow it no matter where it goes or how it is transformed.
For example, a user may copy/paste data about a particular customer account from a Salesforce.com browser tab to a Word doc. We need to be able to recognize that that particular document now has information about that particular customer account. This requires us to analyze and maintain deep context across virtually any application where data can be transmitted or modified. This could include everything from an end-user chat function on a social media application to backend API integrations between enterprise SaaS applications in the cloud.
Ultimately, what we are talking about is shifting from an event-based perspective to a flow-based perspective that encompasses all the complex interactions, movements, and relationships that actually define what data means to a business. There are two very important improvements that are needed in order to achieve this goal:
- Capture High-Resolution Details – First, we collect far more detail than the traditional “events” collected by say an EDR tool and far beyond that of a SIEM or UBA. For example, a user’s browser may have hundreds of network connections and process hundreds of files. It isn’t enough just to see that a browser interacted with a specific file. We need to know if it was actually uploaded and if so, which of the hundreds of network connections it was sent to. We would need to see “inside” the browser to know that a file was specifically sent to Box, and just as importantly, if it was sent to a user’s personal Box account or the corporate Box account. Likewise, we need to see and track virtually any user action on a device such as copying data into another file, renaming a file, encrypting a file, etc. And again, we need to track the flow of data for that specific action and not just see that a file was saved.
- Support Any Application Natively – Next, we need to be able to capture information and automatically trace the flow of data in virtually any off-the-shelf application without the need to modify the app itself. Previously, data lineage tools required making changes to each application, typically requiring work from the developers who built the application in the first place. However, if we want to truly protect enterprise data, we need to support all applications, such as browsers, office apps, email clients, collaboration tools, source code tools, as well as custom applications built by enterprises. Just as importantly, we need to support these applications in a lightweight way that doesn’t interfere with the application itself or require work from developers.
Combining Data Flows With Graph Analysis
These requirements were the driving factors behind a variety of Cyberhaven breakthroughs in the way that endpoint agents can track data lineage. And ultimately, all of this background work is designed to help us build a data set that can understand data flows and then perform graph-based analysis on those data flows.
Let’s look at how these two concepts fit together. Consider a user who receives a sensitive email, copies data from the email, pastes it into a Word document, and then uploads that document to Google Drive. In order to track this narrative, we need to retain the context of what window on the user’s machine the data was copied from. We need to be able to see that it was an email and know who that email was from. We need to record the copy action and connect it to the window where that content was pasted, which document was open in that window, and then track that document. We then need to track to see if the user then tries to share that file in any unapproved way such as uploading the file to a personal Google Drive.
This is a relatively straightforward data flow. And understanding this flow is crucial to understanding what is really happening (or has happened) to a piece of data. Graph analysis allows us to “walk” or connect the dots across large numbers of these flows in order to get a true enterprise-wide context of the data and the data risk. Every day, a single user will perform dozens or even hundreds of similar actions. An enterprise will have hundreds of users, machines, and applications with data flows moving between (or within) them. Data will exist over long periods of time, so we will need to be able to connect all of these data flows across all of these entities. The graph provides the superset of all this data so that we can pull the thread in any direction to get to the answers we need.
For instance, if we revisit the example of a user uploading a file to a personal Google Drive account, we may need to trace back the full lineage of that file to its origin to understand the risk. Maybe the content was downloaded from an HR app, then pasted into another document, and shared between multiple users. Graph analysis lets us connect all these flows to know in real-time that the data in question is sensitive and should be blocked from being uploaded to a personal account. On the other hand, let’s say we want to run the example in the other direction, and we want to know where all the copies of employee salary data are located within the company. In this case, we would walk all the many divergent branches of the graph in order to find every file, user, machine, or app that contains copies of that data.
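To make the idea of “walking” these flows concrete, here is a minimal sketch of lineage tracing over a toy data-flow graph. The node names, event types, and structure are hypothetical illustrations, not Cyberhaven's actual data model:

```python
from collections import defaultdict

# Each edge is one data-flow event: (source, destination, action).
# All names here are made up for illustration.
flows = [
    ("email:acme-contract", "doc:notes.docx", "copy-paste"),
    ("doc:notes.docx", "doc:summary.docx", "copy-paste"),
    ("doc:summary.docx", "gdrive:personal/summary.docx", "upload"),
]

children, parents = defaultdict(list), defaultdict(list)
for src, dst, _action in flows:
    children[src].append(dst)
    parents[dst].append(src)

def walk(node, neighbors):
    """Collect every node reachable from `node` in the given direction."""
    seen, stack = set(), [node]
    while stack:
        for nxt in neighbors[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Trace back: where did the uploaded file's content originate?
print(walk("gdrive:personal/summary.docx", parents))
# Trace forward: everywhere the sensitive email's content has spread.
print(walk("email:acme-contract", children))
```

In a real deployment the same traversal has to run over millions of events per day, which is exactly why the questions of scale come next.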
Naturally, this requires graph analysis on a much larger scale than we have covered so far. In the next installment in the series, we will take a closer look at the challenges of analyzing this new type of graph at enterprise scale and at very high speeds, and ultimately, all the important things you can do with it in your practice.
A Living off the Land (LotL) attack describes a cyberattack in which intruders use legitimate software and functions available in the system to perform malicious actions on it.
Living off the land means surviving on what you can forage, hunt, or grow in nature. LotL cyberattack operators forage on target systems for tools, such as operating system components or installed software, that they can use to achieve their goals. LotL attacks are often classified as fileless because they do not drop malicious files on disk.
Most LotL attacks employ the following legitimate tools:
- PowerShell, a script-launching framework that offers broad functionality for Windows device administration. Attackers use PowerShell to launch malicious scripts, escalate privileges, install backdoors, and so on.
- WMI (Windows Management Instrumentation), an interface for access to various Windows components. For adversaries, WMI is a convenient tool for accessing credentials, bypassing security instruments (such as user account control (UAC) and antivirus tools), stealing files, and enabling lateral movement across the network.
Risks associated with LotL attacks
Attackers do not leave traces in the form of malicious files on device hard drives, so Living off the Land attacks cannot be detected by comparing signatures.
Additionally, operating system tools, such as PowerShell and WMI, may appear in the security software’s allowlist, which also impedes detection of their anomalous activity.
Finally, adversaries’ use of legitimate tools also complicates the investigation and attribution of cyberattacks.
Protection against LotL
To counter LotL attacks, cybersecurity professionals use solutions based on behavioral analysis. The technology detects anomalous program and user activity – actions that could signify an attack in progress.
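As a rough illustration of what behavioral analysis can look for, the sketch below flags process command lines that match patterns commonly associated with PowerShell and WMI abuse. The patterns and event source are simplified assumptions, not a complete or authoritative detection rule set:

```python
import re

# Illustrative heuristics only; real products combine many more signals.
SUSPICIOUS_PATTERNS = [
    r"powershell(\.exe)?\s.*-enc(odedcommand)?\s",  # encoded PowerShell commands
    r"downloadstring\(",                            # classic download cradle
    r"wmic\s.*process\s+call\s+create",             # process creation via WMI
    r"-nop\s+-w\s+hidden",                          # hidden, no-profile session
]

def looks_suspicious(cmdline: str) -> bool:
    lowered = cmdline.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

# Hypothetical process-creation events, e.g. exported from an endpoint sensor.
events = [
    "powershell.exe -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0A...",
    "notepad.exe quarterly-report.txt",
]
for cmd in events:
    if looks_suspicious(cmd):
        print("ALERT:", cmd)
```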
Back in the 18th century, the world was witnessing the industrial revolution and we all know how it transformed almost every aspect of human life. Times have changed now and presently it’s something entirely different that is making the world go round.
Yes, you guessed it right. It’s the age of the smartphone revolution. And when it comes to smartphones, it’s all about the apps.
The fascinating world of mobile apps gets bigger day by day.
According to Techjury, smartphone consumers will download more than 258 billion mobile apps by 2022. Moreover, on average, every user goes through 8-10 apps per day.
The stats are amazing, aren’t they? They seem promising to hackers also.
As the trend of mobile apps continues to grow at an exponential rate, so does the frequency of app-based attacks. According to studies, one in every four mobile apps has a high-risk vulnerability. And if left unexamined, they might lead to serious outcomes.
That is why it becomes essential for app developers to not only look out for adding advanced features in their apps but also concentrate on the security front. Since most of the security issues can be mitigated during the development stage itself, a great responsibility lies on the shoulders of coders to bolster their app’s security.
Here, we are going to dive deeper into the world of secure coding practices and find out how developers can work smartly and maintain the security of their mobile apps while coding. But before moving ahead, let’s first understand the types of mobile apps and how they can be attacked.
Threat Landscape of Mobile Apps
In order to understand the threat landscape of mobile apps, it is crucial to understand their different types.
Types of Mobile Application
Basically, there are three kinds of mobile apps:
- Native Apps: These apps are native to a particular platform like Android or iOS. And as they are optimized for a particular platform, they work efficiently and offer a better user experience. However, due to their structure, it is somewhat difficult to maintain them.
- Web Apps: These apps are not installed on the device. They run on web servers and are accessed through the device’s browser, which makes them easy to maintain but limits their access to device features.
- Hybrid Apps: As the name suggests, these apps contain a mix of the features of native and web apps. They can be downloaded from app stores. They can also enjoy device features and, at the same time, they rely on HTML and web servers as well.
How Are Mobile Apps Vulnerable?
Most of the time, mobile apps have privacy and security gaps in code functionality. Apps are allowed access to more data points than required and this opens more avenues for threat actors to sniff into the system.
Using unauthorized APIs and untrusted libraries poses similar problems as well. In several instances, a lack of proper encryption also exposes sensitive user data to attackers.
Another common phenomenon that makes mobile apps more vulnerable to attacks is the possibility of reverse engineering. In the later part of the blog, we will see how it deeply impacts the security of mobile apps. But before that, let’s examine the types of attacks that are common in the case of mobile apps.
Classification of Attacks Based on their Occurrences
Based on their occurrence, attacks on mobile apps can be classified into four categories:
- Browser-Based Attacks: Ranging from phishing, clickjacking, and data-caching to man-in-the-middle attacks, browser-based attacks happen over the web servers. In the case of these attacks, hackers inject malicious scripts into app components that are served via web browsers.
- Phone or SMS-Based Attacks: In the case of phone/SMS attacks, hackers target mobile devices by distributing malware via unauthorized messages. Moreover, other phone-based attacks like baseband attacks are also common. In these attacks, threat actors may gain control over the device’s digital baseband processor and manipulate cellular activities.
- OS-Based Attacks: OS-based attacks use the mobile device’s operating system as the attack surface. Generally, manipulations like Android rooting and iOS jailbreaks result in these types of attacks. Weak passcodes and encryption may also lead to these attacks.
- Application-Based Attacks: In these attacks, hackers utilize the vulnerabilities in the mobile app itself to gain access to sensitive user data. Escalated privileges, improper SSL injection, weak encryption, and unwanted permissions may result in application-based attacks.
How to Build a Completely Secure Mobile App? The Checklist
We examined how various types of mobile apps are vulnerable to different types of attacks. Now, we must find out what must be done in order to prevent such attacks from happening.
According to the OWASP secure coding practices checklist, there are numerous ways of achieving this. Basically, a developer can mitigate most of the security vulnerabilities and efficiently maintain mobile app security while coding.
The below infographic highlights some of the secure coding practices which developers must follow in order to build a completely secure mobile app.
However, despite all these measures, there still remains some margin or error. Luckily, there are some techniques that efficiently eliminate the possibility of any remaining mistake. Let us see how.
How Code Obfuscation and Remediation Assist in Mobile App Security?
As soon as any app goes public, so does its source code. Being a developer, you would never want any threat actor to review your code and start tampering with your application. They might even repackage the app with some malicious code.
In order to avoid such problems, the techniques of code obfuscation and remediation come into the picture. These two methods prevent hackers from reverse engineering your app and understanding the business logic and code. They also erase the potential loopholes in the code to ensure your app’s security at all times.
What is Code Obfuscation?
Code obfuscation is simply the method of modifying the source or machine code so as to make it difficult for attackers to read or comprehend it. While the functionality of the code remains the same, obfuscation helps coders in concealing the logic and purpose of the code effectively.
Generally, coders use a tool called an Obfuscator to carry out the obfuscation process. It simply converts the original code into some program that carries out the same functions but makes it nearly impossible for hackers to read or understand the logic of the code. Code obfuscation may also be carried out manually. Some basic steps in code obfuscation may include:
- Encrypting some part of, or the entire code.
- Changing the class or variable names to some vague labels.
- Inserting some meaningless or unutilized code to the application binary.
- Hiding or removing potentially sensitive metadata.
How does Code Obfuscation Work?
The process of code obfuscation consists of some simple but reliable techniques. Together they can build a strong layer of defense and protect your code from attackers. We have listed some basic obfuscation techniques and also explained how they work:
1) Renaming Obfuscation: Renaming obfuscation, as the name suggests, changes the names of important methods and variables. Without altering the program execution, this technique makes it harder for any human to understand the modified code. Even if attackers attempt to decipher the logic of the source code, they may have to be super attentive while looking out for elements and variable names.
The modified names may have certain different naming schemes like a combination of letters, numbers or even unprintable or invisible characters. This code obfuscation technique is widely used by the majority of Java, Android, iOS and .NET developers to protect their mobile applications.
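A toy sketch of the idea is shown below. Real renaming obfuscators such as ProGuard operate on compiled bytecode and preserve entry points; this example only renames a list of identifiers to show how much readability is lost:

```python
import random

CONFUSABLE = "Il1O0"  # characters that are hard to tell apart

def make_alias(length: int = 8) -> str:
    return "".join(random.choice(CONFUSABLE) for _ in range(length))

def build_rename_map(identifiers):
    mapping, used = {}, set()
    for name in identifiers:
        alias = make_alias()
        while alias in used:          # keep aliases unique
            alias = make_alias()
        used.add(alias)
        mapping[name] = alias
    return mapping

print(build_rename_map(["calculateDiscount", "customerBalance", "applyCoupon"]))
# e.g. {'calculateDiscount': 'I1lO0Il1', ...} -- behavior unchanged, intent hidden
```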
2) Control Flow Obfuscation: This form of code obfuscation simply entangles the control flow of the application code. In control flow obfuscation, a valid executable logic is produced using the traditional branching, conditional and iterative constructs. But, upon decompilation, the code would yield non-deterministic semantic results.
This method makes it rather difficult for the hacker to break the logic of the decompiled code. However, employing control flow obfuscation may also result in degraded runtime performance.
3) Instruction Pattern Transformation: In this method, the basic compiler instruction patterns are converted into certain different or vague constructs. Generally, these instructions are suitable for machine languages and map less cleanly with high-level languages like C# or Java.
This technique is very commonly used during transient variable caching. Using this, overhead transient variables are removed from Java or .NET runtimes by exploiting their stack-based nature.
4) Dummy Code Insertion: Another way of code obfuscation is dummy code insertion where some dummy code is inserted into the executable. This insertion does not have any effect on the logic or the execution of the program and makes the reverse engineering of the code really difficult.
5) Binary Linking/Merging: In this obfuscation technique, multiple libraries and other input executables are merged into one or more output binaries. When employed with renaming or pruning, linking can also make the size of the application smaller and ease up the deployment process. All of this reduces the information available to the attackers.
6) Opaque Predicate Insertion: Often counted among the most secure coding practices, this method introduces irrelevant or incorrect code to confuse hackers. The inserted code is never executed, but it makes it difficult for attackers to comprehend the decompiled output.
This technique works by inserting conditional branches into the code. These branches always refer to some already known results that can’t be determined by code analysis methods.
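A minimal example of the idea: the branch condition below is always true, because x*(x+1) is a product of consecutive integers and is therefore always even, but that is not obvious to someone reading the decompiled output. The surrounding function is purely hypothetical:

```python
def process(order_total: int) -> int:
    x = order_total * 31 + 7            # arbitrary derived value
    if (x * x + x) % 2 == 0:            # opaque predicate: always true
        return order_total              # the real logic lives here
    # Dead branch: never executes, but pads and confuses the decompiled code.
    return order_total ^ 0xDEADBEEF

print(process(120))                     # always prints 120
```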
7) Anti-Tamper: Anti-tamper obfuscation is another primary code protection methodology. Here, application self-protection is injected into the source code to prevent tampering. Whenever tampering is detected, the app performs certain custom actions like limiting its functionality, shutting itself down or invoking random crashes to signal the developers.
Whenever a debugger is used, the anti-debug Obfuscator corrupts data or invokes random crashes to protect user data and prevent debug checks. It may also signal the developers by sending a warning message.
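At its simplest, tamper detection boils down to an integrity check, as in the sketch below. Real anti-tamper protection is injected into the application binary itself and reacts at runtime; this only illustrates the core comparison, and the file name and expected digest are placeholders:

```python
import hashlib

EXPECTED_SHA256 = "digest-recorded-at-build-time"  # placeholder value

def is_tampered(path: str) -> bool:
    # Hash the shipped artifact and compare it with the known-good digest.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest != EXPECTED_SHA256

if is_tampered("app_module.bin"):
    # Custom response: limit functionality, shut down, or warn the developers.
    raise SystemExit("Integrity check failed")
```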
Open-Source Obfuscation Tools
When it comes to selecting the best tool for carrying out code obfuscation, there are a plethora of options out there. There are numerous open-source obfuscation tools available as well. You can select any one of them depending on the use case you have at hand. Some of the well-known open-source code obfuscators include:
- Obfuscar: Released under MIT license, this open-source .NET obfuscator provides a bunch of facilities to secure code in .NET assembly.
- Modifly: Modifly is a Java-based obfuscation tool capable of run-time transformations. It efficiently removes program semantics and performs string encryption as well.
- ProGuard: This free to use Java class obfuscator optimizes bytecode and removes unused attributes, fields and classes to make the code compact and efficient.
What is Remediation?
As the name suggests, remediation is simply the act of damage control. In the case of mobile app security, remediation basically consists of some reliable techniques which developers can implement to prevent attackers from infiltrating into their apps.
Generally, hackers gain insights into a mobile app through reverse engineering. Applying proper remediation techniques will not only make your app more complex but also pose hardships for hackers to crack its code.
How does Remediation Work?
There are several remediation techniques that can enhance the complexity of your code and make reverse engineering difficult. Generally, if your app deals with high volumes of user data, you should consider applying anti-debug methods.
Similarly, if you want to limit runtime manipulation by hackers, you should prefer using C or C++. C and C++ offer several libraries that can be easily integrated with Objective C and protect critical portions of the code. Tools like Class-Dump, Class-Dump-Z or Frida may be used by attackers to expose, manipulate or reverse engineer your app’s code.
Generally, Android and iOS apps become more susceptible to such attacks because of their basic design. In the case of Android apps, you can avoid this by using JNI (Java Native Interface). And when it comes to iOS apps, you should prefer writing code portions in low-level C so as to avoid manipulation or reverse engineering.
Here we have mentioned some other common secure coding practices as well:
1) Restricting Debuggers: An attacker’s ability to interact or interfere with an application’s runtime may be reduced by preventing its attachment to any debugger. If this is ensured, an attacker has to first decipher the debugging restrictions before breaching into the app on a low level.
In order to add this complexity to your mobile app, you may implement several methods. In the case of Android apps, developers should set ‘android:debuggable="false"’ in the app manifest. This would prevent runtime manipulations and also the injection of malware. ‘PT_DENY_ATTACH’ may be used in the case of iOS apps.
2) Trace Checking: If you are concerned about mobile application security while coding, you must consider the trace checking strategy. It is possible for a mobile app to check whether or not it is being tracked by a debugger or any other debugging tool.
There are several ways of determining this like evaluating the parent process, checking the return value of ‘ptrace attach’, comparing timestamps on various instances of the program, checking process status flags or blacklisting debuggers.
After checking for debuggers, the app may perform several defense actions in response like warning the server admins or discard encryption keys in order to secure sensitive user information.
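The sketch below shows two of these checks in simplified form. It is Python-level and purely illustrative; a mobile app would implement the equivalent natively (for example by inspecting ptrace status on Android), and the timing threshold is an arbitrary assumption:

```python
import sys
import time

def debugger_attached_via_trace() -> bool:
    # A tracing hook (installed by debuggers and profilers) is present.
    return sys.gettrace() is not None

def debugger_suspected_via_timing(threshold_secs: float = 0.5) -> bool:
    # Single-stepping makes trivial code take abnormally long to run.
    start = time.perf_counter()
    _ = sum(range(10_000))
    return (time.perf_counter() - start) > threshold_secs

if debugger_attached_via_trace() or debugger_suspected_via_timing():
    # Defensive response: warn the server admins or discard encryption keys.
    raise SystemExit("Debugging environment detected")
```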
3) Using Compiler Optimizations: On several occasions, your code may consist of advanced mathematical calculations and other complex logic. It is possible to hide them by employing compiler optimizations and obfuscating your object code. This would make matters rather difficult for hackers as they won’t be able to disassemble or understand the code.
This can be easily achieved in Android apps by using the specialized libraries from the Native Development Kit (NDK). Moreover, using tools like LLVM Obfuscator may also assist in providing smoother machine code obfuscation.
4) Stripping Native Binaries: Stripping binaries is another effective remediation technique that strongly tests the nerves of an attacker. This technique can effectively enhance the time required and test the skill of the attackers while they try to grasp the structure and makeup of your mobile app’s low-level functions.
Stripping binaries removes the symbol table of the concerned binary, and therefore the attacker faces difficulty in debugging or reverse-engineering the app. However, stripping binaries does not remove object-mapping data in iOS or the classes in Objective-C. In the case of Android apps, tools used on GNU/Linux systems, such as sstrip or UPX, may be used.
In the upcoming years, security will definitely play a major role in differentiating one mobile app from the other. And that is why following secure coding practices with a focus on the elements of security becomes a necessity. Before any mobile app goes public, it must be ensured that it follows the basic security requirements.
Strategic coding practices like obfuscation and remediation testing also have their own significance, as they protect the core logic and purpose of the application’s code. When combined, all of these coding practices form a strong arsenal that makes it much harder for attackers to defeat your mobile app.
Before getting into the weeds of HSTS, let’s talk a little about a protocol called HTTPS. Today, everyone and their mother knows that, unless you see that padlock icon next to your browser’s location bar, your communication is not secure and should not be trusted. But very few people understand the meaning of the word “secure” here, or how these security features work.
What is HTTPS, and Why Is It Important?
Think about a very simple scenario. You need to check your web-based email account, and this service provider doesn’t support HTTPS, only HTTP. You need to log in to get your mail. As you’re presented with a sign-in form, you type in your username as johnsmith1 as well as your top secret password, which is fluffy2009. Upon submitting your info, the same exact characters go out of your computer and hit the internet as:
username=johnsmith1&password=fluffy2009
You know that the internet is full of bad people, right? And you know that they are waiting for opportunities like this to steal people’s passwords, right? With the simplest of software, they can grab your credentials and pretend to be you to snoop into your email messages when you are not around. You can say, “so what? I have nothing to hide in my email.” Consider that in this hypothetical scenario, your bank account is tied to the same email address. Does it scare you now? It should. The crooks can request a password change and get it in your email. Two seconds later, you can kiss your money in the bank goodbye.
Using HTTPS changes the cleartext string above into an unreadable jumble along these lines, thanks to encryption:
k2jX9qP0vT4sLr8wZB1mH7cyAeU5nGdQ3fRoSx6tYiKb0VjN
It’s all thanks to that letter “s” at the end of “http.” Good luck getting a username or password out of this string now, Mr. Crook.
What Is HSTS Then, and Why Is It Important?
HSTS is an acronym that stands for HTTP Strict Transport Security. Technically speaking, it is a response header sent by the website to the browser, telling the browser that this site can only be contacted via HTTPS protocol, not by plain text HTTP.
Let’s dig a little deeper into the subject matter with an example. Imagine you want to go visit google.com to run a search. You go to the address bar of your browser and usually type google.com, hit enter and voila, you are there. You do not think about http:// or https:// protocol specification when you are typing the URL you want to go to.
So far, so good, right? Websites that prefer HTTPS communication generally use a 301 redirect that changes the URL’s protocol to https and sends your browser back an encrypted response along with the SSL certificate information. This allows your browser to understand how to communicate with the website. Basically, the HTTP communication is switched to HTTPS automatically, without the end user doing anything.
Well then, no harm, no foul, right? Not necessarily. In the time it takes for the website to switch to HTTPS by redirecting via a 301 status, there is a delay of a few milliseconds. Then there’s the time required for the site to send the SSL certificate header back to the browser. If the website is a valuable target, those few milliseconds are almost a lifetime for a malicious actor with an extremely powerful computer system to intercept the communication and insert itself between the user and the server. This is the basic definition of the man-in-the-middle attack vector.
On the other hand, if the browser knows that the site accepts HTTPS communication only, by way of a stored HSTS entry, it starts the communication directly over the https:// protocol, avoiding the HTTP-to-HTTPS transfer time gap. This leaves malicious listeners on the network with nothing to listen to.
It is an added level of security in layman’s terms.
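For the web developers in the audience, here is roughly what sending the header looks like. The sketch uses Flask purely as an example framework, and the one-year max-age with includeSubDomains is just a common policy choice, not a requirement:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Tell browsers: for the next year, only ever talk to this site over HTTPS.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "Hello over HTTPS"
```

Note that browsers only honor the header when it arrives over an HTTPS connection they already trust; sending it over plain HTTP has no effect.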
How Does Using HSTS Headers Benefit Your Website?
First and foremost, HSTS makes your website virtually impenetrable to man-in-the-middle attacks. This should give your site’s visitors an extra level of reassurance, especially if your site is an e-commerce site where monetary transactions take place.
Other than the security aspect, by skipping those few milliseconds of delay at the beginning, you shave a considerable amount off your page load time. In an age when a website’s presence in search engines is greatly influenced by page load times, this increase in page load speed is as valuable as gold. Did you know that the average website load time today is 15.3 seconds? Did you also know that the average internet user hits the back button or leaves the site if they cannot see the page in 3 seconds or less?
Do you really want to lose customers because your site exposes their information to hackers and loads slowly? Probably not.
Between better page load times and improved security, what is there not to like about implementing HSTS? Of course, your site can benefit from HSTS headers.
Four multinational corporations, five global banks, sixteen cities and former President Bill Clinton have formed a consortium dedicated to retrofitting buildings in order to reduce energy consumption and greenhouse gas emissions, according to an announcement from the Clinton Foundation Wednesday during the C40 Large Cities Climate Summit in New York.
ABN AMRO, Citi, Deutsche Bank, JPMorgan Chase and UBS banking institutions have pledged US$1 billion each for the Energy Efficiency Building Retrofit Program, which will provide cities and private building owners with the capital to retrofit existing structures with more energy-efficient technologies.
“Climate change is a global problem that requires local action,” President Clinton said. “The businesses, banks and cities partnering with my foundation are addressing the issue of global warming because it’s the right thing to do, but also because it’s good for their bottom line.”
“They’re going to save money, make money, create jobs and have a tremendous collective impact on climate change all at once. I’m proud of them for showing leadership on the critical issue of climate change and I thank them for their commitment to this new initiative,” he continued.
Tools for Measurement
In addition to its retrofitting initiative, the Clinton Foundation announced a partnership with Microsoft Thursday to develop new technology, including software and Web applications, to help cities track and share strategies to reduce their carbon emissions.
The software tools would create a standardized method for cities around the globe to measure their greenhouse gas emissions. Using a common standard, cities would then be better positioned to gauge the effectiveness of their carbon-reduction initiatives.
Microsoft said the free software should be available by the end of 2007.
Cities use about 75 percent of all energy and are responsible for the emission of 75 percent of greenhouse gases into the atmosphere, according to the Clinton Foundation. Buildings in these urban areas are responsible for roughly 40 percent of global greenhouse gas emissions, with structures in older, larger cities such as New York and London accounting for nearly 70 percent. Retrofitting older buildings would provide energy savings of 20 to 50 percent.
Sixteen cities, including New York; Chicago; Houston; Johannesburg, South Africa; Melbourne, Australia; Mumbai, India; Sao Paulo, Brazil; Toronto; Mexico City; London; Berlin; Tokyo; and Rome, will be the first to benefit from the $5 billion worth of financial aid that will pay for projects like replacing heating, cooling and lighting systems with newer, energy-efficient technologies, installing energy-efficient windows, and updating roofs with reflective or white materials to reduce absorption of the sun’s heat.
“It is a potentially interesting idea, though [I’m] not sure of [its] odds of success at this stage,” Jeff Lipton, managing director of investment banking at Jefferies & Company, told TechNewsWorld. “Much focus has been on generating alternative energy at a lower cost, [competitive with traditional fossil fuels]. This need is clear, but much less focus has been on reduction in usage, and at the end of the day, that is an important part of the overall equation.
“No one technology or approach will solve or address the current situation,” he added.
Green for Green
Honeywell, Johnson Controls, Siemens and Trane will conduct energy audits, perform building retrofits and guarantee the project’s energy savings. Meanwhile, energy efficiency finance specialist Hannon Armstrong and the Clinton Climate Initiative, in partnership with the banks, will create mechanisms to deploy the capital globally. City governments and building owners plan to repay the loans with money generated from the energy savings achieved as a result of the retrofit.
However, the savings may not be sufficient for less wealthy municipalities and building owners to repay their loans, according to Sanjiv Bhaskar, director for the Environmental & Building Technologies Practice at Frost & Sullivan.
“It will depend on the savings potential in each case,” he told TechNewsWorld. “In some cases it will have a higher return on investment; in some cases it will have a lower return on investment. Energy audits will help in narrowing down on the cases with highest return on investments.”
The selected cities have agreed to develop a program to make their municipal buildings more energy efficient and provide incentives for private building owners to also undergo the retrofit to make them more environmentally friendly, the foundation said. The program will be consistent with city procurement and tendering rules.
Local banks and other companies will also be invited to contribute to the project’s funding pool and to increase the list of green products used in the retrofits.
Old Versus New
The top ten carbon dioxide emitting countries are the U.S., China, Russia, Japan, India, Germany, Canada, the UK, South Korea and Italy, according to Frost & Sullivan. The U.S. and China rank No. 1 and No. 2, respectively, by a significant margin.
With developing countries such as China and India undergoing a building boom, some environmental experts believe it would be more prudent to ensure that those buildings are energy efficient and benefit from the latest green technologies.
“The funds need to be applied where it makes sense to do so,” James Wilson, author of The Alternative Energy Blog, told TechNewsWorld. “But I heard a statistic the other day that by the time a child in the West is two-and-a-half years old, it has produced more carbon emissions than someone in Tanzania will do over the course of their entire lifetime.
“The West is where the bulk of carbon emissions are being generated at the moment,” he continued. “There are obviously nations like China and India that are rising powers and are due to overtake the U.S. and West in terms of emissions in a few years. I would like to see more transference of efficient technology to increase efficiency especially as China builds one new power station per week.”
Cost savings from retrofitting buildings is not proven, warned Sterling Burnett, a senior fellow at the National Center for Policy Analysis. In some cases, cities that have gone green have seen a decrease in their energy reliability.
“My first response is that it seems good, especially if it saves taxpayers money,” he said. “But the devil is in the details. If all they do is provide the seed money for these retrofits that are quite costly and may or may not deliver the cost savings and reliability, then this becomes one big boondoggle and we are taking symbolic action without delivering any real benefit.”
Business Layer Behavior Concepts
Based on service orientation, a crucial design decision for the behavioral part of the ArchiMate metamodel is the distinction between “external” and “internal” behavior of an organization. A behavioral concept can have both an external and internal connotation based on how it is used in a specific instance.
External behavior is modeled by the business service concept by representing an internal function providing value to an external environment (e.g., the customer).
Internal behavior is modeled by the business service concept by offering supporting functionality to processes or functions within the organization.
See the Appendix for examples of using Business Behavior Concepts
Business Behavior Concepts
Business Process – the identification of a flow of a system
A business process describes the internal behavior performed by a business role that is required to produce a set of products and services.
A business process represents a workflow or value stream consisting of smaller processes/functions, with clear starting points and leading to some result.
It is sometimes described as “customer to customer” where the customer may also be an internal customer.
A business function offers functionality that may be useful for one or more business processes.
It groups behavior based on, for example, required skills, resources, application support, etc.
Where business processes of an organization may be defined based on the products and services that an organization offers, the business functions are the basis for things like assignment of resources to tasks and application support.
A business interaction is similar to a business process/function, but while a process/function may be performed by a single role, an interaction is performed by a collaboration of multiple roles. The roles in the collaboration share the responsibility for performing the interaction.
A business interaction may be triggered by, or trigger, any other business behavior element. A business interaction may access business objects. A business interaction may realize one or more business services and may use (internal) business services or application services. A business collaboration or an application collaboration may be assigned to a business interaction.
Business processes and other business behavior may be triggered or interrupted by a business event. Also, business processes may raise events that trigger other business processes, functions, or interactions. A business event is most commonly used to model something that triggers behavior, but other types of events are also conceivable; e.g., an event that interrupts a process.
Unlike business processes, functions, and interactions, a business event is instantaneous: it does not have duration. Events may originate from the environment of the organization (e.g., from a customer), but also internal events may occur generated by, for example, other processes within the organization.
A business event may trigger or be triggered (raised) by a business process, business function, or business interaction. A business event may access a business object and may be composed of other business events.
A business service exposes the functionality of business roles or collaborations to their environment. This functionality is accessed through one or more business interfaces. One or more business processes, business functions, or business interactions that are performed by the business roles or business collaborations, respectively, realize a business service. It may access business objects.
A business service should provide a unit of functionality that is meaningful from the point of view of the environment. It has a purpose, which states this utility. The environment includes the (behavior of) users from outside as well as inside the organization. Business services can be external, customer-facing services (e.g., a travel insurance service) or internal support services (e.g., a resource management service).
A business service is associated with a value. A business service may be used by a business process, business function, or business interaction. A business process, business function, or business interaction may realize a business service. A business interface or application interface may be assigned to a business service. A business service may access business objects.
The momentum to take compute and data closer to the edge is increasing. However, today’s data explosion and evolution of end devices raise the need for network infrastructure that can support massive data volumes and increasingly sophisticated edge devices. A combination of 5G and edge computing promises to satisfy these needs.
5G and Edge Computing
5G and edge computing are technologies that can capitalize on a symbiotic relationship to empower a new generation of smart devices and applications. Through its increased performance, 5G can enhance edge computing applications by reducing latency, bettering application response times, and improving the ability of enterprises to collect and process data.
The number of edge devices increases every day, with their capabilities continuously evolving. Internet of Things (IoT) devices are also becoming more sophisticated, as they can collect more types of data. The data generated by these devices fuels the need for actionable insights to help enterprises stay atop trends, forecast new products and services, and create a competitive advantage.
Human beings generate more than 2.5 exabytes of data daily. Imagine remotely sending approximately 1.7 megabytes per second for each person on earth to be processed centrally.
This would result in strained network resources, which yields performance degradation due to latency, roundtrip delays, and poor use of bandwidth. This data deluge, the struggles of moving it, and the inefficiencies of remote data processing reinforce the need for 5G and edge computing to be leveraged together.
Additionally, more responsibility is being placed on edge devices as the COVID-19 pandemic brought about a shift in traditional workforce patterns. And with the ever-increasing quality of edge computing use cases and the data requirements these implementations have, a shorter control loop is necessary to satisfy the need for near real-time responsiveness.
As such, 5G is a network infrastructure that can support and enable the increasing complexity and specialization of edge computing.
Benefits of the Relationship Between 5G and Edge Computing
Ultra-low latency use cases
Combining 5G and edge computing is critical in attaining ultra-low latency in various edge devices and use cases.
Considering the increasing need for high reliability and ultra-low latency communications for use cases in smart factories, healthcare, intelligent transportation, smart grids, and entertainment and media among others, pairing 5G and edge computing enables such ultra-low latency applications to reach their full effectiveness.
Near real-time performance
Leveraging the combination of 5G and edge computing helps enterprises collect and process massive volumes of real-time data to optimize various operational systems and improve productivity and customer experiences. Enterprises can process and analyze data in the environments that yield the most value.
Carrying out processing and analysis close to where data was created brings enterprises close to near real-time performance for mission-critical applications.
Improved bandwidth usage
The relationship between 5G and edge computing impacts the success of 5G network technology. Edge computing helps ensure 5G is feasible when dealing with millions of devices connected to a 5G network.
In the absence of edge computing, all these devices would be transmitting data directly to the cloud. This would, in turn, push the bandwidth requirement for transmission to the cloud to an overwhelming level and counter the effectiveness of a 5G network.
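A back-of-the-envelope calculation makes the point. All of the numbers below are assumptions chosen purely for illustration; they are not figures from this article:

```python
devices = 1_000_000          # assumed connected devices in one deployment
per_device_mbps = 2.0        # assumed raw data rate per device (Mbit/s)
edge_reduction = 0.95        # assumed share filtered or summarized at the edge

cloud_only_tbps = devices * per_device_mbps / 1e6          # convert Mbit/s to Tbit/s
with_edge_tbps = cloud_only_tbps * (1 - edge_reduction)    # only distilled data goes upstream

print(f"Uplink needed, cloud-only:    {cloud_only_tbps:.2f} Tbit/s")
print(f"Uplink needed, edge-assisted: {with_edge_tbps:.2f} Tbit/s")
```

Under these assumptions, processing at the edge cuts the upstream requirement by a factor of twenty, which is the difference between a feasible 5G rollout and a congested one.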
Advancement of emerging technologies
High-speed connectivity coupled with data processing at the edge is critical for the advancement of technologies, such as artificial intelligence (AI), machine learning (ML), augmented reality (AR), and virtual reality (VR). The advancement of these technologies is important, as they have the potential to revolutionize entire industries and enable boundless innovation as entirely new applications are made feasible.
Specifically, by moving compute closer to data, 5G and edge computing improve the ability to innovate: AI and machine learning can be infused into edge solutions, which opens up new possibilities for use cases and business models, as well as for IoT solutions.
The pairing of 5G and edge computing and their impact on AI, ML, and IoT makes smart cities more feasible and provides a foundation to innovate further as compute and network challenges are minimized.
Another example of an area where limited deployment has the potential to be erased by 5G and edge computing is telemedicine. Industrial automation also stands to benefit from much more effective and creative solutions. Manufacturing could finally realize a truly intelligent and integrated supply chain to improve efficiency.
Challenges
- Greater attack surface: As 5G edge use cases become more ubiquitous, the attack surface becomes larger, which makes these deployments more attractive to threat actors as the likelihood of a successful attack increases.
- Complexity: Enterprises may be drawn to the use cases of the technologies but fail to grasp the regulatory requirements; financial implications; and potential technical issues, such as massive scale, rate of change, and variability.
- Modular Ecosystem: The connectivity ecosystem proves to be challenging to navigate due to its increasingly modular nature. As such, there are many solutions to consider with different costs and varying levels of performance and control.
Enterprise Use Cases
Augmented reality and virtual reality
Enterprise consumers can enjoy more immersive real-time collaboration, as employees in different locations can collaborate on and manipulate the same virtual objects. Smart glasses can also help revolutionize maintenance, repairs, and operations as well as relay instructions to employees using AR to help them correctly carry out tasks.
AR and VR headsets can be used to train new employees. They can learn how to carry out various roles and tasks with minimal errors. 5G edge enables AR and VR in sales and marketing, allowing prospective clients to enjoy immersive virtual previews of products and services. For example, users can enjoy virtual tours of real-world properties or locations as well as virtually try out fashion and cosmetic products.
Manufacturing
Edge computing and 5G combine to improve oil and gas, food and beverage, and consumer goods manufacturing. Edge computing can be implemented at distribution and remote pumping sites. These sites can be connected to a main autonomous system using 5G. Infrastructure can also be upgraded to ensure these sites can handle 5G data requirements.
Another edge computing and 5G deployment involves monitoring environmental controls of food and beverage items in transit to maintain the quality of perishable products. Centralized production analytics can be replaced with distributed edge systems in consumer goods manufacturing. These edge systems can use a private network to connect to supply partners.
Moving Forward with 5G and Edge Computing
Enterprises can start by understanding the value and implications of 5G and edge computing from a technological as well as business perspective. They can then identify challenges or opportunities that 5G and edge computing can help them overcome or capitalize on.
At this point, developing a 5G and edge computing strategy will help ensure the intended use cases are aligned not only with the enterprise but also with the technologies. This also helps enterprises to effectively implement use cases and make sure the technology naturally evolves with the implementation.
The DHCP failover protocol provides a method for two DHCP servers to communicate with each other. Depending on the configuration, DHCP failover can provide both redundancy and load balancing by sharing one or more pools between two or more DHCP servers. The servers are known as failover peers.
After placing two DHCP servers in a failover relationship, each DHCP pool’s addresses are divided between primary and secondary servers. Peers don’t need to be located in the same subnet, either. You get both DHCP server redundancy and placement flexibility.
When you configure failover DHCP, keep in mind that:
- DHCP failover is only supported on BlueCat DNS/DHCP Servers (known as BDDSes).
- Enabling or disabling DHCP failover will restart the DHCP service, resulting in a service interruption.
This short video provides a step-by-step guide for how to configure the DHCP failover protocol for BlueCat Address Manager (BAM) and BDDS. A complete transcript of the video also follows below.
Transcript of how to configure DHCP failover
The DHCP failover protocol provides a method for two DHCP servers to communicate with each other. Depending on the configuration, DHCP failover can provide both redundancy and load balancing. DHCP failover works by sharing one or more pools between two DHCP servers.
All right. So, let’s get started. Just a couple of prerequisites to make note of here. I am running Address Manager and BDDS 9.2. I’m logged in as the admin. I’ve created a configuration in Address Manager. As well, I have an IP block and network, and I have a DNS view.
Adding DHCP servers
So, here I have an XHA pair running, providing DNS, but let’s add some DHCP servers and set them into a failover relationship.
All right. So, I’m going to, let’s add some GEN4-5000s, BDDS75. I’m going to call this one dhcp1. I’m going to set my IPv4 address on eth0. I’m going to select .91. I have to enter a hostname. Let’s call this dhcp1.example.com. Now, these are the defaults from the Add Server page. You can leave this one selected to connect to the server. That’s necessary. We’re not going to upgrade now. You have to enter your default password, which is bluecat. That should be changed when you’re first deploying the servers. And let’s click Detect Server Settings.
So, what this does is this verifies in Address Manager the software version of the BDDS, the number of interfaces, and if dedicated management on eth2 is enabled or disabled. For the purposes of this demo video, I have dedicated management disabled.
All right. So there we see the IP on eth0, IPv4. I don’t need the XHA backbone. I could add an IPv6 address. That’s optional. I will leave that out for this demo. And let’s click Add Next. All right. So, let’s add our next DHCP server. I’ll select the same server profile. Let’s call this one dhcp2. I’m going to set my IPv4 address on eth0, and let’s set a host name. And once again, we will set our default server password and click Detect Server Settings.
All right. So let’s add this server. All right. Excellent. So there we see dhcp1 and dhcp2.
Setting DHCP deployment options at the server level
Now, a really important part of configuring DHCP failover is setting DHCP deployment options at the server level. So, this is where I am. So, I’m going to do this right now.
Okay. So, I’m going to click through to a server. In the top here, you see associated roles, diagnostics, deployment options. So, I’m going to click there. Now, you want to select a DHCP Service Option. I’m not using DHCPv6. So the v4 is simply called DHCP Service Option. Now, with DHCP failover, there are five DHCP deployment options that need to be configured. By clicking on this dropdown, there are many options. Thankfully, all the DHCP deployment options are clustered together.
So, the first one we’re going to start with is Load Balance Override. Right here. Let’s set that to three seconds. Add Next. Okay. Let’s set our second DHCP deployment option, which is Load Balance Split, and it’s right below the first option that we set. And our default value is 128, which splits the load roughly 50/50 between the two peers. Add Next. All right. Our third option that we need is Maximum Client Lead Time. Again, clustered here. And that recommended default is 1,800 seconds. Okay. And Add Next.
Next option that we need is Max Response Delay. And this one is 60 seconds. And we have one more, Maximum Unacknowledged Updates. And for this one, the recommended default is 10. So, I’ll click Add there. Okay. So, here we see our five deployment options on dhcp1. We need to do the same to dhcp2. Okay. So, now we have our deployment options on dhcp2. Before we can deploy, we need to set our DHCP deployment roles. So, let’s click over to IP Space in Address Manager. I’m going to click into this block. Okay.
Selecting the primary and secondary server and deployment
So. here we see Deployment Options, Deployment Roles. I’m going to click Deployment Roles. And here you want to select DHCP Role. Okay. From the Add DHCP Role page, you have this dropdown here. The default is Master, also known as primary. Let’s select the Server Interface. So, I’m going to select dhcp1 as the primary DHCP server. I’ll click Add. And now we select the secondary for DHCP failover. So I’ll click Select Secondary Server Interface. I’m going to select dhcp2. And I will click Add. All right.
And lastly, to ensure that these changes go through, I’m going to select both of these servers, and I’m going to Deploy. And as you can see from the Confirm Server Deploy page, I’m deploying DHCP only to these two servers. The BAM UI shows the progress. Again, deployment length can vary depending on the size of your configuration. And that’s it.
For more detailed information on DHCP failover, please refer to the Address Manager Administration Guide. There’s a lot of good stuff in there, including details on DHCP failover states.
A new set of vulnerabilities with an aggressive name and their own website almost always bodes ill. The name FragAttack is a contraction of fragmentation and aggregation attacks, which immediately indicates the main area where the vulnerabilities were found.
The vulnerabilities are mostly in how Wi-Fi and connected devices handle data packets, and more particularly in how they handle fragments and frames of data packets. As far as the researcher is aware, every Wi-Fi product is affected by at least one vulnerability.
The researcher that uncovered the Wi-Fi vulnerabilities, some of which have existed since 1997, is Mathy Vanhoef. The vulnerabilities he discovered affect all modern Wi-Fi security protocols, including the latest WPA3 specification. You may remember Vanhoef as one of the researchers behind the KrackAttacks weaknesses in the WPA2 protocol. As Vanhoef puts it:
“it stays important to analyze even the most well-known security. Additionally, it shows that it's essential to regularly test Wi-Fi products for security vulnerabilities, which can for instance be done when certifying them.”
In each network, there is a maximum size to the chunks of data that can be transmitted on a network layer, called the MTU (Maximum Transmission Unit). Packets can often be larger than this maximum size, so to fit inside the MTU limit each packet can be divided into smaller pieces of data, called fragments. These fragments are later re-assembled to reconstruct the original message.
Wi-Fi networks can use this packet fragmentation to improve throughput. By fragmenting data packets and sending more, but shorter frames, each transmission will have a lower probability of collision with another packet. So, if the content of a message is too large to fit inside a single packet, the content is spread across several fragments, each with its own header.
Just like packets, frames are small parts of a message in the network. A frame helps to identify data and determine the way it should be decoded and interpreted. The main difference between a packet and a frame is the association with the OSI layers. While a packet is the unit of data used at the network layer, a frame is the unit of data used one layer below, at the OSI model’s data link layer. A frame contains more information about the transmitted message than a packet.
The researcher found several implementation flaws that can be abused to easily inject frames into a protected Wi-Fi network. These vulnerabilities can be grouped as follows:
- Some Wi-Fi devices accept any unencrypted frame even when connected to a protected Wi-Fi network.
- Certain devices accept plaintext aggregated frames that look like handshake messages.
- Worse than those, some devices accept broadcast fragments even when sent unencrypted.
Design flaws in the Wi-Fi features that handle frames
- The frame aggregation feature of Wi-Fi uses an "is aggregated" flag that is not authenticated and can be modified by an adversary.
- Another design flaw is in the frame fragmentation feature of Wi-Fi. Receivers are not required to check whether every fragment that belongs to the same frame is encrypted with the same key and will reassemble fragments that were decrypted using different keys.
- The third design flaw is also in Wi-Fi's frame fragmentation feature. When a client disconnects from the network, the Wi-Fi device is not required to remove non-reassembled fragments from memory.
The researcher also found a few other implementation vulnerabilities that can be used to escalate the flaws mentioned above.
Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). Although each affected codebase normally receives a unique CVE, the agreement between affected vendors was that, in this specific case, using the same CVE across different codebases would make communication easier.
The design flaws were assigned the following CVEs:
- CVE-2020-24588: Aggregation attack (accepting non-SPP A-MSDU frames).
- CVE-2020-24587: Mixed key attack (reassembling fragments encrypted under different keys).
- CVE-2020-24586: Fragment cache attack (not clearing fragments from memory when (re)connecting to a network).
Implementation vulnerabilities that allow the trivial injection of plaintext frames in a protected Wi-Fi network were assigned these CVEs:
- CVE-2020-26145: Samsung Galaxy S3 accepting plaintext broadcast fragments as full frames (in an encrypted network).
- CVE-2020-26144: Samsung Galaxy S3 accepting plaintext A-MSDU frames that start with an RFC1042 header with EtherType EAPOL (in an encrypted network).
- CVE-2020-26140: Alfa Windows 10 driver for AWUS036H accepting plaintext data frames in a protected network.
- CVE-2020-26143: Alfa Windows 10 driver 1030.36.604 for AWUS036ACH accepting fragmented plaintext data frames in a protected network.
Other implementation flaws are assigned the following CVEs:
- CVE-2020-26139: NetBSD forwarding EAPOL frames even though the sender is not yet authenticated.
- CVE-2020-26146: Samsung Galaxy S3 reassembling encrypted fragments with non-consecutive packet numbers.
- CVE-2020-26147: Linux kernel 5.8.9 reassembling mixed encrypted/plaintext fragments.
- CVE-2020-26142: OpenBSD 6.6 kernel processing fragmented frames as full frames.
- CVE-2020-26141: ALFA Windows 10 driver for AWUS036H not verifying the TKIP MIC of fragmented frames.
On the dedicated site the researcher states that
“experiments indicate that every Wi-Fi product is affected by at least one vulnerability and that most products are affected by several vulnerabilities.”
The statement is based on testing more than 75 devices, which showed they were all vulnerable to one or more of the discovered attacks.
To mitigate attacks where your router's NAT/firewall is bypassed and devices are directly attacked, you must ensure that all your devices are updated. Unfortunately, not all products get regular updates.
Using a VPN can prevent attacks where an adversary is trying to exfiltrate data. It will not prevent an adversary from bypassing your router's NAT/firewall to directly attack devices.
The impact of attacks can also be reduced by manually configuring your DNS server so that it cannot be poisoned.
Severity of the vulnerabilities
We have been here before. When the KRACK vulnerabilities were revealed a few years ago some people treated it as if it was the end of Wi-Fi. You'll have noticed it wasn't. That doesn't mean it was nothing, either, but a little perspective goes a long way.
The CVEs registered to the FragAttacks have been given a medium severity rating, with CVSS scores sitting between 4.8 and 6.5. This indicates that anything resembling remote control is probably too difficult to achieve to make it attractive to attackers. The data-stealing options, however, are more imminent and could well be used in targeted attacks.
Proof is in the pudding
If you are interested, you can find a demo and a link to a testing tool on the dedicated website. You can also find some FAQs and a pre-recorded presentation made for USENIX Security about these vulnerabilities.
Stay safe, everyone!
If you are browsing online, you will likely encounter a wide range of threats, some of which could lead to your bank account being emptied or sensitive information being exposed and your accounts being compromised. Then there is ransomware, which could be used to prevent you from accessing your files should you not have backups. DNS filtering offers an easy way to protect against these threats.
The majority of websites now being created are malicious websites, so how can you stay safe online? One solution deployed by businesses and ISPs is the use of a web filter, with DNS filters one of the best choices for filtering the Internet. A DNS-based web filter can be set up to restrict access to certain categories of Internet content and block most malicious websites.
While it is possible for companies or ISPs to purchase appliances that sit between end users and the Internet, DNS filters allow the Internet to be filtered without having to buy any hardware or install any software. So how does DNS filtering actually work?
How Does DNS Filtering Work?
DNS filtering – or Domain Name System filtering, to give it its full name – is a technique for preventing access to certain websites, webpages, or IP addresses. DNS is what permits easy-to-remember domain names to be used – such as Wikipedia.com – rather than typing in IP addresses – such as 188.8.131.52. DNS maps domain names to IP addresses.
When a domain is bought from a domain registrar and that domain is hosted, it is given a unique IP address that allows the site to be found. When you try to access a website, a DNS query will be carried out. Your DNS server will look up the IP address of the domain/webpage, which will permit a connection to be made between the browser and the server where the website is hosted. The webpage will then be opened.
So where does the filtering come in? With DNS filtering set up, rather than the DNS server simply returning the IP address of the requested website, the request is first subjected to certain security checks. If a particular webpage or IP address is recognized as malicious, the request to access the site will be denied. Instead of connecting to the website, the user will be sent to a local IP address that displays a block page explaining that the site cannot be opened.
This control could be implemented at the router level, via your ISP, or a third party – a web filtering service provider. In the case of the latter, the user – a business for example – would point their DNS to the service provider. That service provider keeps a blacklist of malicious webpages/IP addresses. If a site is known to be malicious, access to it will be prevented.
Since the service provider will also group webpages into categories, the DNS filter can also be implemented to block access to certain categories of webpages – pornography, child pornography, file sharing websites, gambling, and gaming sites for example. Provided a business defines an acceptable usage policy (AUP) and registers that policy with the service provider, the AUP will be enforced. Since DNS filtering is low-latency, there will be next to no delay in logging onto safe websites that do not breach an organization’s acceptable Internet usage policies.
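Conceptually, the filtering decision described above boils down to something like the following sketch (illustrative only – the domains, category map, and block-page address are hypothetical, and real services use large, constantly updated blocklists behind a full DNS resolver):

```python
# Hypothetical blocklist and category data for illustration only.
BLOCKLIST = {"malicious-example.com", "phishing-example.net"}
CATEGORY = {"casino-example.org": "gambling", "games-example.net": "gaming"}
AUP_BLOCKED_CATEGORIES = {"gambling"}          # categories the organization chose to block
BLOCK_PAGE_IP = "10.0.0.5"                     # local address serving the block page

def filtered_resolve(domain: str, real_lookup) -> str:
    """Return the IP address to hand back to the client for a DNS query."""
    if domain in BLOCKLIST:
        return BLOCK_PAGE_IP                   # known-malicious site: show the block page
    if CATEGORY.get(domain) in AUP_BLOCKED_CATEGORIES:
        return BLOCK_PAGE_IP                   # category banned by the acceptable usage policy
    return real_lookup(domain)                 # safe site: resolve normally

# Example usage with a stand-in lookup function:
print(filtered_resolve("casino-example.org", real_lookup=lambda d: "203.0.113.7"))
```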
Can a DNS Filter Prevent Access to All Malicious Websites?
Sadly, no DNS filtering solution will stop access to all malicious websites, as in order for this to be accomplished, a webpage must first be identified as malicious. If a cybercriminal creates a brand-new phishing webpage, there will be a delay between the page being set up and it being reviewed and added to a blocklist. However, a DNS web filter will prevent access to the majority of malicious websites.
Can DNS Filtering be Avoided?
Proxy servers and anonymizer sites could be deployed to mask traffic and bypass the DNS filter unless the chosen solution also prevents access to these anonymizer sites. An end user could also manually amend their DNS settings locally unless they have been locked down. Determined persons may be able to find a way to bypass DNS filtering, but for the majority of end users, a DNS filter will block any effort to access forbidden or harmful website material.
No single cybersecurity solution will block 100% of malicious websites, but DNS filtering should definitely form part of your cybersecurity operations, as it will allow most malicious sites and malware to be blocked.
One of the most frequently asked questions for security experts is, “How many lumens do I need for outdoor security lighting?” It is, in fact, a pertinent question that has complex answers depending on your home security setup. This article will answer this question by addressing what lumens are and how they work. Next, we’ll discuss how lumens alone don’t offer a complete picture for light effectiveness.
What Are Lumens, Anyway?
A lumen is a standard unit of light emitted per second. Lighting manufacturers often rate the brightness of their lights in lumens. However, some manufacturers rate their lights in watts, which might cause some confusion. It’s important to note that watts are a measure of energy consumed per second; an incandescent bulb might consume 100 watts and give off 1600 lumens, while LED lights are much more efficient. For example, the Iluminar WL643-2 LED Illuminator outputs 3712 lumens while only consuming 48W of power.
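To make the watts-versus-lumens distinction concrete, here is a quick luminous-efficacy comparison using the figures mentioned above (a rough illustration; real bulbs vary):

```python
# Rough luminous-efficacy (lumens per watt) comparison using the figures cited above.
incandescent_lm, incandescent_w = 1600, 100   # typical 100 W incandescent bulb
led_lm, led_w = 3712, 48                      # Iluminar WL643-2 figures mentioned above

print(f"Incandescent: {incandescent_lm / incandescent_w:.0f} lm/W")   # ~16 lm/W
print(f"LED illuminator: {led_lm / led_w:.0f} lm/W")                  # ~77 lm/W
```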
How Many Lumens Do I Need For Outdoor Security Lighting?
So, we can see that light output isn’t measured in watts, as that is the amount of power the light consumes while outputting a certain number of lumens. That said, how many lumens is the right amount?
To determine this, you must consider the amount of light you need for your application. If you want to know how many lumens are necessary for security lighting, consider the size and shape of the area you want to illuminate. A good rule of thumb is that security lights should be much brighter than general-purpose lighting: look for lights that offer at least 700 lumens.
At the same time, you don’t want your lights to be too bright! Having thousands of lumens projecting right into your neighbor’s windows is not a great idea. Extremely bright lights can also cause problems for birds and wildlife, interfering with their sleep behavior and consuming more power than necessary.
How To Choose Security Lighting
A good security lighting manufacturer will test their lights thoroughly. This means they can offer you data that helps you make a better purchasing decision. Here are some data points every reputable manufacturer should mention:
- Power consumption (in watts, W) along with voltage (in volts, V) and current (in amperes, A)
- Light output in lumens, commonly shortened to lm.
- Color temperature, measured in kelvin (K). Warmer, yellower lights have a lower temperature, while brighter lights are higher. Halogen lights commonly measure around 3000K, while 6500K is a bright white.
- Color rendering index, or CRI. Only high-quality manufacturers are likely to mention their CRI, which measures how accurately the light allows you to view colors. Low CRI lights can make colors appear dull or washed out, while high CRI lights are most accurate to how you would see an object during the day.
Iluminar specializes in the manufacture of superior, professional-quality LED lighting solutions for license plate capture systems, as well as video surveillance. We also offer unmatched customer service for all of our clients.
Our invisible and visible series of products all come in various form factors, with many angle choices, wavelengths, range, and power ratings available to choose from. Our illuminators come with covert and semi-covert options capable of running off AC or DC power. Best of all, we offer weatherproof systems with exceptional build quality to survive even the most brutal deployment application.
Our product experts will personally guide you towards choosing the right product for your application. Don’t hesitate to get in touch with us at (281) 438-3500 today!
It was Isaac Asimov who introduced 'The Three Laws of Robotics' for a short story in 1942 (although, fictionally they were published in the ‘Handbook of Robotics, 2058 AD’). The Laws were developed to show that robots would work in harmony with humans, and stated:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws are powerful and have moved from science fiction toward science fact, as well as being referred to in many books and films. However, the laws are based on an assumption: Robots are either artificially intelligent, or sentient.
This is why Asimov's 'Three Laws' are not suitable for autonomous technologies, and in this case the 'driverless car', which has built-in intelligence but is not in itself sentient. The laws for cars (or rules, as I’ll refer to them moving forward) need to be different – because a car can only make decisions based on configured, albeit dynamic and adaptive, rules – and so the onus falls to manufacturers to ensure that autonomous cars are as safe on the road as they are exciting to drive.
The connected car is coming. In fact, some may argue that it's already here since we have GPS and 3G/4G providing a flow of information, and next-generation voice-activated controls or updates. Recently I even saw that someone has worked out how to install an Amazon Echo Dot into their cup holder! With the advent of 5G this level of integration will only become more prevalent, transforming the driving experience for everyone, whether you are a casual Sunday driver or a mile-munching long-haul trucker.
Much of the security focus so far has been on protecting the car, and its occupants, from would-be hackers or car-thieves who want to control the technology with malicious intent. For autonomous cars to gain global safety and acceptance means a new set of rules must be developed, rules that ensure accountability for development of smart and autonomous vehicles is placed with manufacturers and their supply chain.
The UK government is seeking to take a leadership role in the development of these rules by introducing the Automated and Electric Vehicles Bill, which will create a new insurance framework for self-driving cars. In tandem, the UK Department for Transport and Centre for the Protection of National Infrastructure have released a series of documents outlining principles of cyber security for connected and automated vehicles. These documents form a modern version of Asimov’s Robotic Laws, but with the focus being on the automotive manufacturers to ensure that these vehicles are developed with a defense-in-depth approach so that they remain resilient to threat at all times – even in situations where sensors are unable to respond due to attack or failure.
This legislation will put the United Kingdom at the centre of these new and exciting technological developments, while ensuring that safety and consumer protection remain at the heart of an emerging industry.
Consistent innovation of network technologies – throughput, latency, coverage, and cost – will be necessary to underpin the self-driving and autonomous cars of the future, but key to all this is that they are also protected against potential cyber attacks. Fair access to these technologies with open and transparent licensing is also important to ensure that all manufacturers can apply the same levels of safety and security; driving both innovation and competition for the future. To make this a reality Juniper Networks is working as a member of the Fair Standards Alliance, which also includes leading European and multinationals such as BMW, Daimler, Hyundai, MINI, VW and Tesla.
Personally, I am excited about the advent of the fully automated car and what it will bring. As a committed Gearhead (Petrolhead, UK), these developments will provide choice about when I want to drive, and when I would prefer to let the car take control. At the same time, it will make the roads a safer place since connected cars will react faster than humans ever can. Knowing that Juniper Networks is supporting these initiatives, and contributing to standards and legislation, is both important and exciting to me – we are making a difference, making science fiction into science reality!
Stephen Hawking’s ‘A Brief History of Time’ starts with the following excerpt:
“A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, an old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”
While the orbital nature of the solar system is a scientifically established fact, I can’t help but think that the lady in the back of the room was onto something, perhaps in an entirely different context - that of a ‘Brief history of the Cloud’. In this context, an application rests on top of a Virtual Machine that rests on top of a Hypervisor that rests on top of a Server and a fabric of Switches and Storage. If application developers were to consider these layers as abstractions (or… turtles) that are someone else’s problem, could the cloud be just a collection of turtles all the way down that they don’t need to worry about?
While that would have been all nice and dandy, it is far from reality. Carrying this train of thought forward in the real world - If applications rest on top of infrastructure turtles, troubleshooting how apps behave often involves understanding and analyzing the underlying turtles’ behavior (in thorough ways that would make Sir David Attenborough proud). In fact, configuring infrastructure turtles’ behavior remains key to how applications perform, how much they cost, how secure they are, how resilient they are, and how scalable they are.
In the first phase of cloud adoption, most applications were either lifted and shifted to the cloud as is, or were architected as a monolithic application. [ To be clear, this first phase of cloud adoption remains a present reality for many organizations. ]
From an application vs infrastructure perspective, the boundaries in this monolith architecture are more clearly defined - Applications are the processes running inside VMs, while everything else - namely the VMs, VPCs, network subnets, storage volumes, databases etc are infrastructure (“turtles” from the perspective of application development teams).
In this phase of cloud adoption, the definition of what is an application and what is infrastructure is more or less based on which team ends up being responsible for provisioning and operating each layer.
Developers code and test the applications, while infrastructure engineers provision and configure the underlying infrastructure including VMs, networks, storage, databases and their likes, and operate them in ways that would achieve desired application availability, resilience, security and performance goals in line with business goals. If there is a problem, central SRE/Devops teams are responsible for troubleshooting the infrastructure layers, while application developers, fairly unaware of the underlying cloud architecture, troubleshoot problems within their application code. As the cloud and application architecture is relatively static, teams have the internal tribal knowledge about patterns and behaviors manifested in the operational logs and metrics which may be good enough for manually troubleshooting scenarios.
However businesses born today have an ever growing need to accelerate their pace of innovation, enabled by building on top of the cloud since their inception. Enter the second phase of cloud adoption - with microservices, containers, and Kubernetes. In this phase of cloud adoption, teams are either starting to build their apps as cloud-native domain-based micro-services, or breaking apart their monoliths into a bunch of microservices in the hope that this architecture shift will help teams innovate while staying agile and scale the frequency of feature releases to meet the ever evolving needs of their business. Kubernetes has become the de facto operating system or platform of choice for such containerized microservices with its adoption ever-increasing.
While Kubernetes presents many benefits, one of which is adding a layer of abstraction for application developers from underlying cloud infrastructure details, it remains complex to adopt and operate. One of the significant reasons that helps explain this complexity is that the boundaries of defining what is an application and what is infrastructure start to get blurry when applications run on top of Kubernetes.
Kubernetes adds a whole slew of fine-grained abstractions that are more application-centric compared to coarser abstractions like VMs. An application is not only just application code anymore, but also, at the very least, 6 different application resources, namely - Containers, Deployments, Pods, Replicasets, Services, and Ingress. Besides these 6 resources, there are additional resources that define application secrets and environment variables as ConfigMaps and Secrets. Service accounts, Roles and Role bindings define permissions for what an application pod can and cannot do in a shared cluster environment, Sidecars are used for initializing application environments and all other sorts of initialization tasks.
If an application is stateful as opposed to being stateless, there are additional resources like Statefulsets, PersistentVolumeClaims and PersistentVolumes that define the storage attributes for the stateful application. It does not end there. Enter CustomResourceDefinitions - which are extensible resources within Kubernetes that may also influence behavior of an application. As an example, if an organization adopts a service mesh for application traffic visibility and traffic management capabilities, additional resources in the form of Sidecars and CustomResourceDefinitions are added by the service mesh deployment that help tune application traffic behavior.
These resource configurations are more “turtles” that cannot be ignored by application development teams, even though they are not, in a sense, application code. In the Kubernetes world, YAML and JSON raise up their hands saying they are code too and will not be ignored!
Troubleshooting 5xx error codes in Kubernetes applications, for application development teams, not only involves looking into recent application code changes but also involves digging through the Kubernetes resources mapped to the application - recent changes to Deployments and ConfigMaps, Pods that are crash-looping or starved of CPU and memory, and the Services and Ingress rules that route traffic to them.
This troubleshooting scenario is very different from the previous monolith scenario where there was an assumed separation of concerns between application and infrastructure teams. When it comes to Kubernetes apps, both Developers and Devops/Platform teams need to have a shared understanding of all the moving parts within Kubernetes. Devops/Platform teams need to create the right support hooks and controls within Kubernetes, thus enabling separation of concerns between teams, so that developers can own the troubleshooting of their own apps which includes dealing with Kubernetes resources mapped to their applications.
Keen observers will note that I haven’t even mentioned the cloud infrastructure layers in the above troubleshooting scenario - in the end, it could be that a peering network connection between the Kubernetes and database VPCs went down - in an entirely separate turtle world.
To me, the challenges seen by Kubernetes teams when troubleshooting and operating their applications can be distilled into three classes:
Kubernetes is inherently a distributed desired-state driven system. Kubernetes resources like containers, deployments, pods, services and so on are created and mapped to an application, however it is often hard to understand which resources map to what application. Kubernetes uses a label-based mechanism to maintain this mapping, however existing tools today do not carry this context over in easy to understand visualizations that can help discover this distributed mapping. Applications are also not monoliths anymore - they are a bunch of services, 3rd party APIs, and data connectors that together encapsulate an application. Imagine how the number of containers, deployments, pods and other resource configurations to be tracked starts to scale as the number of services and APIs in an application grows. In order to find where the problem is, the first hurdle to cross is - which Kubernetes resource is the one that a developer should be looking into from amongst hundreds of distributed resources? This exercise alone often takes many hours of engineering time.
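As a small illustration of that label-based mapping, the hypothetical manifests below show that the only thing tying a Service to the Pods created by a Deployment is a shared label such as app: checkout; discovering everything that belongs to one application therefore means chasing that label across many objects, for example with kubectl get deployments,replicasets,pods,services -l app=checkout.

```yaml
# Hypothetical application named "checkout"; image and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  labels:
    app: checkout
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout          # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:1.4.2
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  selector:
    app: checkout            # the Service finds those same Pods only via this label
  ports:
    - port: 80
      targetPort: 8080
```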
Some resources that Kubernetes creates when an application is deployed may not live as long as certain other more permanent resources. Pods are recreated with different names when the containers or configurations within them are updated. Kubernetes Events that are valuable sources of information for things such as a pod’s health expire after an hour at which point that data is no longer available to analyze for engineering teams. If this important context of why pods were restarted and recreated at 2 am in the night is somehow not preserved in an external place, the data disappears and is unavailable for analysis. The pace of change in Kubernetes applications far exceeds that of monoliths which makes the system-level and application-level operational data and context change rates hard to cope with manually.
Kubernetes makes the process of letting application developers define desired application behavior more explicit and programmable. This means that all aspects of an application behavior - such as it’s security, lifecycle management, CPU and memory resourcing, scaling, health-checks, load balancing, traffic behavior are programmable within a Kubernetes application definition via its many resources like pods, services, and so on. This is a very important distinction from previous VM based platforms, where developers did not have a say in things like - sizing VMs, or how VM traffic was routed, or how DB connections were load-balanced.
While the Kubernetes model of making all system configuration application-centric and desired state driven offers many benefits, it also means that someone has to define those requirements and resources in ways that are correct, consistent across application versions, and optimal. The process of configuring these resources as YAML and JSON remains extremely manual and prone to sprawl, lack of clear ownership between Developer and Devops teams, misconfigurations and even, unknown sub-optimal configurations.
Despite these challenges, I am excited about this new layer of configurable Kubernetes abstractions that helps bridge the gap between applications and infrastructure and that help bring Developer and Devops teams closer by introducing a common, new, even if complex, vocabulary based on Kubernetes resources.
At Operant, we are working on fundamentally new solutions that simplify troubleshooting and commanding Kubernetes native applications. If the cloud and Kubernetes are indeed turtles all the way down, wouldn’t it be nice to get to know them a bit better and understand how to train them in simplified ways so they can help us be fast and innovative, while being resilient, scalable, and secure in a cloud-native world?
SOA, or service-oriented architecture, offers a promising vision—rather than massively complex, monolithic business applications, an SOA consists of a series of small applications that each perform a limited function or “service.” For example, rather than a single application providing online banking services, an SOA version of the same might have an application that manages customer logins, another that pulls account balances, and yet another application that creates funds transfers and so on.
If you have ever taken an introductory programming course, this concept probably sounds familiar. Students are admonished to make their code “modular”, with each module performing a single, discrete function so that the module may be used again. Even to the non-programmer, this seems like simply good thinking. Why recreate the same functionality over and over again?
Superficially, SOA seems like modular programming on a larger scale; the big difference is that SOA places a strong emphasis on modular business-process functionality, rather than just isolating repeated technical functions.
At its core, a process is about doing something with data. Forget the technical notion of data as bits and bytes, but think of it as something like an invoice. To create an invoice we need to gather various other pieces of information, like a customer name or account number. We also need to manipulate the data a bit, perhaps calculating a late fee or discount on the invoice. We take the data, manipulate it, and spit out new data, in this case our completed invoice.
SOA, done well, sees these data gathering and manipulation events as a service. You need not worry about the underlying technical aspects of each service, as long as you can reliably send data to a service and get back the data you were expecting. An SOA-based application strings together these services in novel ways, and in theory allows you to create new applications by changing how the different services interact.
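As a rough illustration of that idea, the sketch below strings together a few hypothetical services (the function names and data are invented for this example) into a small invoice-creation application:

```python
# Hypothetical, simplified services composed into an "invoice" process.
def get_customer(account_number):          # imagined customer-lookup service
    return {"account": account_number, "name": "Acme Corp"}

def get_open_charges(account_number):      # imagined billing service
    return [120.00, 75.50]

def apply_late_fee(total):                 # imagined pricing-rules service
    return round(total * 1.05, 2)

def create_invoice(account_number):
    """A new 'application' built purely by composing the services above."""
    customer = get_customer(account_number)
    amount_due = apply_late_fee(sum(get_open_charges(account_number)))
    return {"customer": customer["name"], "amount_due": amount_due}

print(create_invoice("ACCT-1001"))
```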
Imagine your company acquires another. In theory, if you rearrange each company’s services in the right way, you can easily integrate your accounting and invoicing systems.
Getting On Board
So, why isn’t everyone doing SOA? The simple answer is it’s incredibly hard. Think of every employee in your company. Does one single person or group understand exactly what information and materials that person needs, what tasks they perform, and what precisely inputs and outputs are required to do their job? SOA requires similar knowledge. Tens of thousands of services must be thoroughly understood, and furthermore there must be a prevalent understanding of how each of these services interact with and depend on other services.
While wholesale implementation of SOA is likely cost prohibitive, getting into an “SOA mindset” allows for gaining some benefit from SOA without breaking the bank. When implementing new business applications, think of the data manipulation that is required and the associated business processes. Seek to break those processes down into component services that can be reused across applications. Eventually you will have a pool of services that can be rearranged or combined in novel ways, creating entirely new applications without the cost of implementing the associated functionality.
Infrastructure in IT refers to the computers and servers that run code and store data, as well as the networking resources. For example, servers, hard drives, and routers are all part of the infrastructure. Before cloud computing, most companies organized infrastructure in their offices and ran all of the applications on-premise.
With Infrastructure-as-a-Service, companies no longer need to set up their own servers. Instead, they rent the necessary capacity from a cloud provider. The cloud provider is responsible for housing, operating, and maintaining the infrastructure equipment in its data centers. Users get access to it over the Internet using the pay-as-you-go model. Customer organizations can use cloud infrastructure to create and host web applications, store data, and do anything else that they would usually do with traditional on-premises infrastructure, but often more flexibly.
That is what is meant by IaaS in cloud computing. Let’s explore what benefits this model brings to businesses.
Characteristics of IaaS
IaaS is a flexible and dynamic solution for organizations that are looking to address their IT infrastructure needs with a cloud model. Below are the most important characteristics of IaaS and its pricing model:
Resources as a Service. Instead of the costs associated with building and managing a server room or data center, the cloud model allows organizations to access and deploy IT infrastructure with a more affordable subscription-based payment model.
Automation of administrative tasks. Organizations that manage in-house data centers are responsible for all routine tasks – updates, patches, and maintenance – which affect the availability of hardware resources and applications. IaaS providers handle updates and maintenance of their servers without affecting the availability of infrastructure for customers.
Pay-as-you-go billing model – IaaS services are provided on-demand. This makes it a cost-effective option for organizations because they have to pay only for the computing resources they use. An IaaS provider may bill based on the number of virtualization instances created or the amount of data stored. Some cloud providers may charge additional fees for managed services.
Scalability. The ability to scale quickly and easily is the main advantage of IaaS. Cloud providers own data centers with pools of servers and storage that can be allocated to a customer on demand. This makes it much cheaper and easier for subscribers to scale IT infrastructure compared to deploying in-house infrastructure.
Why is IaaS important?
IaaS provides access to advanced equipment and services such as processors, storage systems, and networking equipment which many businesses cannot afford on-site or will not be able to access that easily.
With the cloud, companies can scale up or down their infrastructure as needed, paying only for what they use either on an hourly, daily, or monthly basis.
Cloud infrastructure reduces the time and cost of building testing and development environments, giving IT teams more freedom to experiment.
Adopting IaaS enables IT departments to build efficient workflows. Instead of spending a lot of time managing and maintaining local infrastructure, IT staff can devote more time to tasks aimed at business development through technology.
Who uses IaaS?
IaaS is a useful solution in situations when a business needs various additional IT infrastructure components (servers, data storage, software, etc.), but it is expensive, inefficient, or simply impossible to ensure the physical availability of these components.
For example, a company needs different amounts of IT resources at different times. As a rule, the growing need for additional services does not last long and is not regular.
When a company is rapidly growing, introducing new technologies and services, expanding its business lines, it inevitably faces continuous infrastructure scaling. In this case, IaaS can simplify and accelerate this task.
IaaS is becoming extremely popular across all industries, and the spectrum of its use is expanding. DevOps teams, system and database administrators, full-stack developers are among the main users.
Initially, IaaS was used for temporary or experimental workloads. Today, large enterprises adopt this model to support their mission-critical workloads.
In addition, future-oriented organizations are moving their data centers to the cloud to be able to innovate faster and stay competitive in the market. By taking advantage of IaaS, they free up their resources to innovate and grow their business.
While it goes by many names like mainboard, system board, and sometimes lovingly, mobo, motherboard is the most common term for the printed circuit board that holds all your computer components in one place like a Lego baseplate. In addition to holding them, your motherboard allows your components to communicate and gives them life by routing power from the Power Supply Unit (PSU). Some motherboards also add value with onboard audio, video, WiFi, LAN, and other features.
What is the main function of the motherboard?
Your motherboard’s primary function is to support all the components that form your computer. If you’re comparing it to the human body, the motherboard is like the backbone, nervous system, and circulatory system all-in-one. It physically supports different components like a backbone, acts as a control center like a nervous system, and moves voltage like a circulatory system. For a geekier analogy, the motherboard is like the black lion from Voltron, serving as the torso that brings all the other parts together.
Why is it called a motherboard?
It’s called a motherboard because it’s the main circuit board. Much like the term “mothership," the word motherboard signifies its essential nature. Additional circuit boards can be plugged into a motherboard, and these are known as “daughterboards.”
What components connect to a motherboard?
- CPU (Central Processing Unit): The CPU processes instructions from software and communicates with other parts like the GPU. It's the star component on the motherboard.
- RAM (Random Access Memory): A computer’s memory is a temporary space that holds data for faster access.
- Sound cards: While most modern motherboards have onboard sound, some audiophiles prefer to install sound cards for better quality and more sound channels.
- Graphics card: A modern graphics card is a complex piece of hardware that hosts the GPU. It’s responsible for rendering graphics from games and videos from films.
- Storage drives: A computer’s hard disk drive (HDD) and solid-state drive (SSD) fit inside the case and connect to the motherboard. You can read up on the HDD vs SSD debate to see which one is right for you.
- Optical drives: While optical drives have gone out of fashion in the age of high-speed Internet, many computer users still use CD, DVD, or Blu-Ray drives.
How many ports does a motherboard have?
The number of ports, connectors, and slots on a motherboard depends on its make and model. Although there is no standard number, most basic motherboards have at least two connectors, four USB ports, and a couple of expansion slots. You can check your motherboard’s manual for more information.
How does a motherboard hold components?
A motherboard has sockets for components like processors, memory sticks, and expansion cards. Other devices like hard drives connect to a motherboard but usually fit in the computer case. Only compatible components will fit on a motherboard. For example, an Intel processor won't fit on a motherboard designed for AMD CPUs. Additionally, a motherboard will only support a specific range of processors.
What is the motherboard on a laptop?
Laptop motherboards are just like their desktop counterparts but typically smaller and thinner. Besides notebooks, mobile devices like phones and tablets also have motherboards that hold their processors and memory. These motherboards are prone to damage from falls, so it's critical to handle them carefully.
How does a motherboard affect computer performance?
People who use computers for basic applications like browsing the Internet or writing emails won't notice a significant difference between low-end and high-end motherboards that support the same components. But premium motherboards offer significant features for enthusiasts:
- Sophisticated overclocking tools that help CPUs gain higher clock speeds
- Better build quality that results in greater longevity for the computer
- Multiple expansion slots for advanced hardware
- Cutting edge onboard WiFi and LAN support for fast Internet connectivity
- Advanced audio ports for top sound reproduction
- Video ports with the latest DisplayPort and HDMI options
- Memory support for the fastest RAM
- Numerous rear and front USBs for convenient access
- 5G capabilities
What motherboard do I have?
- Press the Windows + R on your keyboard to open the Run window.
- Type msinfo32.
- Press Enter.
- You’ll see your motherboard manufacturer’s name next to BaseBoard Manufacturer.
- You’ll see your motherboard’s model name next to BaseBoard Product.
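If you prefer the command line, the motherboard details can usually be read directly from Windows Management Instrumentation as well (the output fields can vary by vendor):

```powershell
# PowerShell: query the baseboard (motherboard) manufacturer and model.
Get-CimInstance Win32_BaseBoard | Select-Object Manufacturer, Product, Version, SerialNumber

# Older Command Prompt alternative (wmic is deprecated but still present on many systems):
wmic baseboard get Manufacturer,Product,Version,SerialNumber
```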
What to look for in a motherboard
- Size: Motherboards come in different form factors like ATX, Micro-ATX, and mini-ITX. Pick the right size for your computer case.
- Chipset: Whether Intel or AMD makes your CPU, you'll need a motherboard compatible with your processor’s chipset and socket.
- Overclocking: You don’t need to spend extra money on a motherboard with overclocking capabilities if you're happy with stock clock performance. But find a good motherboard for tweaking if you want to squeeze more speed out of your processor.
- Expansion slots: Look for a motherboard with multiple expansion slots if you plan to install graphics cards, sound cards, LAN cards, and SSD cards.
What is TPM 2.0 on motherboards?
TPM (Trusted Platform Module) is a tamper-resistant technology that creates and stores cryptographic keys for encryption. Microsoft requires computers to have TPM 2.0 for Windows 11 security to fight against ransomware attacks and firmware hacks. Thankfully, most modern motherboards have TPM 2.0 and can support Windows 11.
What is Malicious Text? What should I watch out for?
Be wary of simple random text messages that have unknown links attached in the body—especially from people you don’t know. According to some reports, “10 percent of all malicious emails sent today contain viruses, worms, ransomware, trojans, spyware, or adware.” While text messages tend to be safer than emails, you don’t want to completely let your guard down.
Unique Cases of Malicious Text
Even if you don’t open any links, it’s still possible for your phone to glitch up in response to a text. Let’s see how this has happened in the past. In 2018, a bug with Apple devices was discovered where devices would shut down if they received texts containing the Telugu character “జ్ఞా.” Phones would continue to shut down and behave sporadically as long as the symbol remained in the chat history of the phone. At the time, there were even malicious individuals who took advantage of the bug and sent the symbol to as many people as possible. It was noted in the article that “back in 2015, a string of text was unearthed that could disable iMessage, while a year later, a five-second video was sent around by users to crash iPhones.” So, errors like this aren’t a one-time thing. It’s 2019 now and these specific bugs no longer exist, but these problems could still resurface in another form. In the case of similar events happening again, look for help from friends and search the internet on a computer to find answers, if possible. In most cases, a solution can be found.
Tips on How to Protect Your iPhone From Malicious Text Messages & Hackers
- Don’t Jailbreak your iPhone—it makes it easier for hackers to breach your phone because you will lose several security layers, as well as any new security updates. In addition, Apple can’t help you with problems regarding a Jailbroken phone, so stay clear of it.
- iOS updates & Backing up your Data: When you see that you have an update on your phone, always download and install it. iPhone updates can include fixes that improve the security of the device and prevent malicious programs/scripts from being run.
- Don’t click that link: If you are feeling suspicious about a link, then don’t click on it! Be attentive to the context of the message containing the link to ensure that you aren’t clicking on something that will leave you with issues. Ignore all unknown text messages and immediately delete them. And if the suspicious message came from a contact, find them in person and question them about the text.
This article is contributed. See the original author and article here.
Hello developers :waving_hand:! In this article, we introduce our project “Plant AI :shamrock:” and walk you through our motivation behind building this project, how it could be helpful to the community, the process of building this project, and finally our future plans with this project.
Plant AI :shamrock: is a web application :globe_with_meridians: that helps to easily diagnose diseases in plants from plant images using Machine Learning available on the web. We provide an interface on the website where you can upload images of your plant leaves. Since we focus on plant leaf diseases we can detect the plant’s diseases by seeing an image of the leaves. We also provide users easy ways to treat the diagnosed disease.
As of now, our model supports 38 categories of healthy and unhealthy plant images across species and diseases. See the complete list of supported diseases and species can be found here. If you are want to test out Plant AI, you can use one of these images.
Guess, what? This project is also completely open-sourced:star:, here is the GitHub repo for this project: https://github.com/Rishit-dagli/Greenathon-Plant-AI
The motivation behind building this
Human society needs to increase food production by an estimated 70% by 2050 to feed an expected population of over 9 billion people. Currently, infectious diseases reduce the potential yield by an average of 40%, with many farmers in the developing world experiencing yield losses as high as 100%.
The widespread distribution of smartphones among farmers around the world offers the potential of turning smartphones into a valuable tool for diverse communities growing food.
Our motivation with Plant AI is to aid crop growers by turning their smartphones into a diagnosis tool that could substantially increase crop yield and reduce crop failure. We also aim to make this rather easy for crop growers so the tool can be used on a daily basis.
How does this work?
As we highlighted in the previous section, our main target audience with this project is crop growers. We intend for them to use this on a daily basis to diagnose disease from their plant images.
Our application relies on the Machine Learning Model we built to identify plant diseases from images. We first built this Machine Learning model using TensorFlow and Azure Machine Learning to keep track, orchestrate, and perform our experiments in a well-defined manner. A subset of our experiments used to build the current model have also been open-sourced and can be found on the project’s GitHub repo.
We were quite interested in running this Machine Learning model on mobile devices and smartphones to further amplify its use. Using TensorFlow JS to optimize our model allows it to work on the web for devices that are less compute-intensive.
We also optimized this model to work on embedded devices with TensorFlow Lite further expanding the usability of this project and also providing a hosted model API built using TensorFlow Serving and hosted with Azure Container Registry and Azure Container Instances.
We talk about the Machine Learning aspect and our experiments in greater detail in the upcoming sections.
To allow plant growers to easily use this Plant AI, we provide a fully functional web app built with React and hosted on Azure Static Web Apps. This web app allows farmers to use the Machine Learning model and identify diseases from plant images all on the web. You can try out this web app at https://www.plant-ai.tech/ and upload a plant image to our model. In case you want to test out the web app we also provide real-life plant images you can use.
We expect most of the traffic and usage of Plant AI from mobile devices, consequently, the Machine Learning model we run through the web app is optimized to run on the client-side.
This also enables us to have blazing fast performance with our ML model. We use this model on the client-side with TensorFlow JS APIs which also allows us to boost performance with a WebGL backend.
Building the Machine Learning Model
Building the Machine Learning Model is a core part of our project. Consequently, we spent quite some time experimenting with and building the model. We had to build a machine learning model that offers acceptable performance and is not too heavy, since we want to run it on low-end devices.
Training the model
We trained our model on the Plant Village dataset of about 87,000 (plus augmented) healthy and unhealthy leaf images. These images were classified into 38 categories based on species and diseases.
We experimented with quite a few architectures and even tried building our own architectures from scratch using Azure Machine Learning to keep track, orchestrate, and perform our experiments in a well-defined manner.
It turned out that transfer learning on top of MobileNet was indeed quite promising for our use case. The model we built gave us acceptable performance and was close to 12 megabytes in size, not a heavy one. Consequently, we built a model on top of MobileNet using initial weights from MobileNet trained on ImageNet.
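A minimal sketch of what such a transfer-learning setup can look like in Keras is shown below; the image size, layer choices, and hyperparameters here are illustrative assumptions rather than the exact published model:

```python
import tensorflow as tf

NUM_CLASSES = 38  # healthy/unhealthy categories across species and diseases

# MobileNet backbone initialized with ImageNet weights, without its classification head.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # start by training only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds built from the dataset
```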
We also made a subset of our experiments used to train the final model for public use through this project’s GitHub repository.
Running the model on a browser
We applied TensorFlow JS (TFJS) to perform Machine Learning on the client-side in the browser. First, we converted our model to the TFJS format with the TensorFlow JS converter, which allowed us to easily convert our TensorFlow SavedModel to TFJS format. The TensorFlow JS Converter also optimized the model for the web by sharding the weights into 4MB files so that they can be cached by browsers. It also attempts to simplify the model graph itself using Grappler such that the model outputs remain the same. Graph simplifications often include folding together adjacent operations, eliminating common subgraphs, etc.
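The conversion itself is a one-line call to the converter CLI; the paths below are placeholders for wherever the SavedModel lives and where the web assets should be written:

```
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    ./saved_model \
    ./web_model
```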
After the conversion, our TFJS format model has the following files, which are loaded on the web app:
- model.json (the dataflow graph and weight manifest)
- group1-shard*of* (collection of binary weight files)
Once our TFJS model was ready, we wanted to run it in browsers. To do so, we again made use of the TensorFlow JS Converter, which includes an API for loading and executing the model in the browser with TensorFlow JS :rocket:. We were excited to run our model on the client side, since the ability to run deep networks on personal mobile devices improves user experience, offering anytime, anywhere access, with additional benefits for security, privacy, and energy consumption.
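As a rough illustration of what this looks like in code, here is a minimal TypeScript sketch of loading the converted model in the browser with TensorFlow JS. The model URL and function name are placeholders rather than the actual Plant AI source.

```typescript
import * as tf from '@tensorflow/tfjs';

// Placeholder path -- point this at wherever model.json and its weight shards are hosted.
const MODEL_URL = '/model/model.json';

export async function loadPlantModel(): Promise<tf.GraphModel> {
  // Prefer the WebGL backend for speed; TensorFlow JS falls back to CPU if unavailable.
  await tf.setBackend('webgl');
  await tf.ready();
  // loadGraphModel fetches model.json first, then the sharded weight files,
  // all of which the browser can cache for faster subsequent loads.
  return tf.loadGraphModel(MODEL_URL);
}
```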
Designing the web app
One of our major aims while building Plant AI was to make high-quality disease detection accessible to most crop growers. Thus, we decided to build Plant AI in the form of a web app to make it easily accessible and usable by crop growers.
As mentioned earlier, the design and UX of our project are focused on ease of use and simplicity. The basic frontend of Plant AI contains just a minimal landing page and two other subpages. All pages were designed using custom reusable components, improving the overall performance of the web app and helping to keep the design consistent across the web app.
Building and hosting the web app
Once the UI/UX wireframe was ready and a frontend structure was available for further development, we worked to transform the Static React Application into a Dynamic web app. The idea was to provide an easy and quick navigation experience throughout the web app. For this, we linked the different parts of the website in such a manner that all of them were accessible right from the home page.
Once we can access the models, we load them using the TFJS converter model-loading APIs, which make individual HTTP(S) requests for the model.json file (the dataflow graph and weight manifest) and then the sharded weight files, in that order. This approach allows all of these files to be cached by the browser (and perhaps by additional caching servers on the internet) because the model.json and the weight shards are each smaller than the typical cache file size limit. Thus a model is likely to load more quickly on subsequent occasions.
We first normalize our images, converting pixel values from the 0–255 range to the 0–1 range, since our model has a MobileNet backbone. We then resize each image to 244 by 244 pixels using nearest-neighbor interpolation, though our model works quite well on other dimensions too. Finally, we use the TensorFlow JS APIs and the loaded model to get predictions on plant images.
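A minimal sketch of this preprocessing and prediction flow, assuming an HTMLImageElement as input and a hypothetical list of class labels (the real app maps model outputs to its 38 categories):

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical label list -- the real app maps output indices to its 38 categories.
const CLASS_NAMES: string[] = [/* 38 species/disease labels */];

export function predictDisease(model: tf.GraphModel, image: HTMLImageElement): string {
  const bestClass = tf.tidy(() => {
    const input = tf.browser
      .fromPixels(image)                   // read RGB pixels from the <img> element
      .resizeNearestNeighbor([244, 244])   // resize with nearest-neighbor interpolation
      .toFloat()
      .div(255)                            // normalize 0-255 pixel values to 0-1
      .expandDims(0);                      // add a batch dimension
    const scores = model.predict(input) as tf.Tensor;
    return scores.argMax(-1);              // index of the highest-scoring class
  });
  const classIndex = bestClass.dataSync()[0];
  bestClass.dispose();
  return CLASS_NAMES[classIndex];
}
```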
Hosting the web app we built was made quite easy for us by Azure Static Web Apps. This allowed us to easily set up a CI/CD pipeline and staging slots with GitHub Actions (Azure’s Static Web App Deploy action) to deploy the app to Azure. With Azure Static Web Apps, static assets are separated from a traditional web server and are instead served from points geographically distributed around the world, right out of the box for us. This distribution makes serving files much faster, as files are physically closer to end users.
We are always looking for new ideas and addressing bug reports from the community. Our project is completely open-sourced, and we would be excited to hear any feedback, feature requests, or bug reports apart from the ones we mention here. Please consider contributing to this project by creating an issue or a Pull Request on our GitHub repo!
One of the top ideas we are currently working on is transforming our web app into a progressive web app, taking advantage of features supported by modern browsers like service workers and web app manifests. This would allow us to support:
- Offline mode
- Improved performance, using service workers
- Platform-specific features, which would allow us to send push notifications and use location data to better help crop growers
- Considerably less bandwidth usage
We are also quite interested in pairing this with existing on-field cameras to make it more useful for crop growers. We are exploring adding accounts and keeping track of the images users have run through the model. Currently, we do not store any information about the uploaded images. It would be quite useful to track images added by farmers and store disease statistics for a designated piece of land, on which we could base our suggestions for treating the diseases.
Thank you for reading!
If you find our project useful and want to support us; consider giving a star :star: on the project’s GitHub repo.
Alexandratos, Nikos, and Jelle Bruinsma. “World Agriculture towards 2030/2050: The 2012 Revision.” AgEcon Search, 11 June 2012, doi:10.22004/ag.econ.288998.
Hughes, David P., and Marcel Salathe. “An Open Access Repository of Images on Plant Health to Enable the Development of Mobile Disease Diagnostics.” ArXiv:1511.08060 [Cs], Apr. 2016. arXiv.org, http://arxiv.org/abs/1511.08060.
Howard, Andrew G., et al. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv:1704.04861 [Cs], Apr. 2017. arXiv.org, http://arxiv.org/abs/1704.04861.
Russakovsky, Olga, et al. “ImageNet Large Scale Visual Recognition Challenge.” ArXiv:1409.0575 [Cs], Jan. 2015. arXiv.org, http://arxiv.org/abs/1409.0575.
5G and private networks will have a major impact on the future of the healthcare industry. Because of this, it is important to become familiar with 5G and how it will help save lives by enabling faster response times, better sharing of patient information, and increased data security.
Today, connected healthcare solutions face several limitations.
The ability to transmit data at much higher speeds with greater security will fuel countless possibilities within the healthcare ecosystem and underpins the leading benefits of 5G and private networks for healthcare.
There are five primary areas in which 5G is making strides.
The use of wearables in healthcare has rapidly increased in recent years. These devices help patients and healthcare providers monitor important biometric data while also providing faster response times in emergencies.
Connected ambulances are the future of emergency response. 5G can enable doctors and paramedics to collaborate in real-time even when they are miles apart. These connected ambulances provide more information about patients and their health history faster than ever before, playing a critical role in transforming how emergency services are delivered.
During the COVID-19 pandemic, drones were used for remote virus testing and to safely deliver medical supplies. These drones were mainly used to help underserved communities around the world, but 5G in healthcare will play a critical role in ensuring these kinds of use cases remain connected in the future, especially within cities.
Hospitals around the country are arming nurses with employee safety devices or “panic buttons” in response to nationwide reports of violence in hospitals. Three in ten nurses who took part in a survey reported an increase in violence where they work, stemming from factors including staff shortages and more visitor restrictions. Panic buttons must always stay connected, and both 5G and private networks within hospitals will ensure these devices function as the lifeline they are intended to be.
Demand for private networks and 5G in healthcare is increasing. Private LTE/5G Networks have enhanced security features, and many hospitals and medical campuses will utilize these private networks to ensure data security and HIPAA compliance. Private LTE/5G networks are typically deployed as a replacement for WiFi, which lacks the level of enhanced security that is necessary for transmitting protected health and personal data over the Internet.
The COVID-19 pandemic gave us a glimpse into how the healthcare industry will continue to evolve by leveraging innovative technologies. It also brought to light the importance of monitoring and treating patients through remote or virtual care. 5G and private networks will play a critical role in the future of healthcare by connecting everything from wearables to emergency services and panic buttons to drones, hospitals, and medical campuses. By reducing latency, improving reliability, and increasing security, new healthcare use cases will be unlocked and benefit from the availability of 5G and private networks.
NASA said Wednesday that the Northrop-built SharkSat payload is intended to test and demonstrate multiprocessor systems, digital receivers, integrated circuits and other electronic components to facilitate the development of a Ka-band software-defined radio that could have potential applications in 5G, space-to-ground and space-to-space communications.
David Schiller, who served as a principal investigator for SharkSat, said the payload will gather and transmit telemetry data back to Earth for analysis.
“In this case, the telemetry data will provide insight into the health and functioning of the electronic components of SharkSat,“ Schiller added.
Cygnus took off in October as part of the 14th ISS resupply mission. The spacecraft will deorbit once it completes the SharkSat demonstration and dispose of cargo from the space station prior to making its re-entry into Earth's atmosphere.
As the cloud continues to expand, it's taken a lot of forms. From the virtual partitions on mainframes to virtualization, cloud services and mobile technologies, the cloud is actually composed of diverse platforms and approaches. One of the new applications that will be closely related to cloud technologies is the Internet of Things.
The Internet of Things builds on the concept of the original Internet, which was (and still is) composed of networking nodes (servers and computers) all linked together via global networks, and which is used for everything from data transfers and stock quotes to web surfing and media streaming. But the IoT isn't about computer communications in the traditional sense; rather, IoT is about data collection from a much larger range of simple devices or sensors that communicate specific data to a centralized or semi-centralized collection point, and can receive simple commands back from that central source.
Compared to the more traditional networking nodes, sensors have less compute and storage power, but there are many of them -- orders of magnitude more. This raises the age-old question of having the compute and storage resources centralized vs. distributed. What will be needed to support IoT?
The applications for IoT are enormous. In healthcare, a range of sensors attached to any number of patients can transmit data about that patient to a central management console, alerting doctors and nurses when certain conditions are detected. In the home, IoT applications are essentially already in use, where smart appliances like a refrigerator can signal the need for new water filters, and thermostats can both report temperature data and receive commands to adjust the temperature. And in agriculture, things like sensors on cows can report everything from the cow’s body temperature to signs of stress or disease.
With so many potential applications possible, the question arises as to what kind of architecture will be required to support such diverse uses with such a high volume of sensors. In the early days of computing, terminals connected to a mainframe and were dependent on that mainframe to be of any use. Later, the development of the personal computer negated the need for a mainframe. But the advent of the Internet re-introduced the client-server model and the centralized vs. distributed discussion again. And now, cloud services have essentially taken us back to reconsider our architectures.
But for IoT applications, a client-server approach may not be the best model. For example, in many areas, the sensors may be required to make some decisions locally, or communicate with more than one external source based on the data collected. And once self-driving cars come online in a big way, any IoT-type devices onboard those cars will likely need to make some decisions instantly, without waiting to be told what to do by the centralized compute source.
At the same time, a fully distributed model will also fail, as the cost and technical constraints that the sensors have will not allow heavy computing and storage resources to be integrated into the sensors.
Creating the tight network
Also in question is the underlying network architecture that will support IoT applications. To listen to certain wireless carriers, IoT is custom-built for wireless technologies. And, to a certain degree, they may be right. But even with the lower rates these carriers are charging for IoT traffic, that approach may not work. There are many situations where the sensors and devices in question may not be able to receive a consistent wireless signal, or security concerns may compel a given company to keep their IoT traffic segmented and away from wireless networks.
The key is then to intelligently slide between the two extremes, as it fits to the specific IoT application. As an example, a hybrid approach, where a localized IoT wide area network (WAN) gateway has more communication, compute and storage resources may work better as opposed to a pure centralized or a pure distributed approach. Imagine an IoT WAN gateway that bridges the thousands of sensors in a farm to the cloud, as opposed to each sensor trying to connect to the Internet directly. A key criteria is to leverage modern WAN orchestration technologies to support the networking needs of the IoT application.
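To make the hybrid idea concrete, here is a hypothetical TypeScript sketch of a gateway that batches sensor readings locally and forwards them to a cloud endpoint. The interfaces, endpoint, and batch size are illustrative assumptions, not a reference design.

```typescript
// Hypothetical gateway sketch: aggregate readings from many local sensors
// and forward them to the cloud in batches, instead of each sensor
// opening its own connection to the Internet.
interface SensorReading {
  sensorId: string;
  metric: string;      // e.g. "soilMoisture"
  value: number;
  timestamp: number;
}

class IotWanGateway {
  private buffer: SensorReading[] = [];

  constructor(
    private readonly cloudEndpoint: string,  // assumed cloud ingestion URL
    private readonly batchSize = 500,
  ) {}

  // Called by the local sensor network (e.g. over LoRa, Zigbee, or wired links).
  ingest(reading: SensorReading): void {
    this.buffer.push(reading);
    if (this.buffer.length >= this.batchSize) {
      void this.flush();
    }
  }

  // Forward one batch to the cloud; on failure, keep the readings for a retry.
  private async flush(): Promise<void> {
    const batch = this.buffer.splice(0, this.batchSize);
    try {
      await fetch(this.cloudEndpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch),
      });
    } catch {
      this.buffer.unshift(...batch);
    }
  }
}
```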
So, as this new IoT tide begins to build, we as an industry will need to look at the different scenarios, challenges and opportunities that develop over time, and decide how best to provide the network architectures that will best support the Internet of Things. Clearly, cloud-based services are here to stay, and IoT adoption will only increase over time. All of this begs the question: Is the client-server vs. distributed debate settled or just beginning?
Multiple Sclerosis Diagnosis - Signs and Symptoms of Multiple Sclerosis
Multiple sclerosis is often referred to as MS and it’s a very serious disease. If you have a family member with this condition, then you know how difficult and painful it can be. This medical condition causes the immune system to eat away the covering that protects the nerves in the body. This makes it difficult for the brain to communicate with the rest of the body and eventually the nerves can completely deteriorate.
Multiple sclerosis is classified as an autoimmune disease because the immune system destroys myelin, which is the fatty substance protecting the nerves in the brain and spinal cord. It’s not known what causes a person to have Multiple sclerosis but genetics and infections that a person has as a child could be contributing factors.
Women are more susceptible to multiple sclerosis than men, and it usually occurs between the ages of twenty and forty, although anyone at any age can get multiple sclerosis. It’s a hard disease to diagnose because the symptoms can be intermittent. It also affects different nerves, making it more difficult to trace the source of the pain.
Multiple Sclerosis Symptoms
Multiple sclerosis (MS) is a chronic and frequently disabling disease that attacks the central nervous system. According to the National Multiple Sclerosis Society, the condition affects about 400,000 Americans and is the most frequent cause of neurological disability other than trauma in people from ages 20 to 50. Symptoms vary among patients and can range from mild to severe and may cause numbness of the limbs, paralysis and even loss of vision. The progression and severity of the disease are unpredictable. Symptoms of Multiple sclerosis include but are not limited to the following:
- Lack of coordination
- Double vision or pain in the eye
- Blurred or loss of vision in one eye
- Numbness in limbs
- Tingling pain in different parts of body
- Electric-shock sensation with head movements
Multiple Sclerosis Diagnosis
Multiple sclerosis cannot be cured, but there are medications that can help to reduce the symptoms. Therapy is usually given to help a person learn how to perform strength exercises and to correctly stretch the muscles.
Since it’s not known what causes multiple sclerosis, there is no way to prevent this disease. If you have signs and symptoms of multiple sclerosis, it’s suggested you see your doctor for an examination. It’s vital that you give them complete details of the problems you are having in order for them to determine what may be causing your symptoms.
[Editor's Note: This podcast was produced by Guy Clinch for No Jitter. The text transcript and images are below for readers to follow along with the audio.]
In 2001 education expert Marc Prensky coined the term, "digital native." He wrote that K-12 students, “Have spent their entire lives surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age.” He identifies these children as a subculture “radically” distinct from previous generations; even from those of us who may consider ourselves as “Digital Immigrants.”
Prensky shows that the differences are more profound than simply digital fluency. He says digital natives, “Think and process information fundamentally differently.” He backs this up referring to the work of Dr. Bruce D. Perry of Baylor College of Medicine who wrote, “Different kinds of experiences lead to different brain structures.”
In this post I will contend that within the digital native subculture there is a further distinction that of the Mobile Natives. I will further conjecture that the mobile native subculture will bring dramatic and lasting changes for enterprise organizations.
SFGate.com quotes a study from the 2010 Pew Internet and American Life Project that gives us some demarcation of the beginning of the mobile-native generations. The survey showed that 75% of those ages 12 to 17 have cell phones. Of these individuals, 60% reported that they had their first phone before age 14, and 30% of the 12-year-olds surveyed owned a cell phone by age 10.
HuffPost blog contributor Rebecca Jackson tells of how her teenage son is so often, “transported someplace else,” in response to the ping of the iPhone in his pocket. Researcher Sarah Collins has shown that this is more than just a Pavlovian response. She draws from an American Academy of Pediatrics study to write that younger generation’s social and emotional development has been shaped by the fact that they, “have everything at their fingertips and disposal in a matter of seconds."
Defining where this is all leading will be up to the social sciences to eventually determine. The research seems to be underway: Google the term “cell phones and youth culture” and you get about two and a half million results. I’m far from qualified to know what the results of this research will be. It’s clear to see, though, that something big is up, and that this kind of seismic cultural and social change will have multifaceted impacts. Enterprise organizations will be swept up in that change.
Source: The Always-On Consumer - Experian Marketing Services
For companies wishing to do business with these individuals, some of the statistics are startling. Compared to the 3% of Talkers or the 12% of Occasionals who expressed that they want to use their mobile phones for in-store purchases, a full 77% of the Prodigy category is all in.
This shift is causing great challenges for retailers. The nature of the experience of interacting with the product on the shelf has changed. The retailer could once consider the consumer a captive once they had them within the walls of the brick-and-mortar store. Shopping around involved costs to the consumer, and the physical store put some parameters around pricing when the consumer needed to weigh the costs incurred, such as leaving the store and traveling down the street to a competitor. There have been whole categories of retailers who, having mastered the art of ambiance, could demand higher prices because of the perceived qualitative difference of the shopping experience.
Today the price tag in the store has to be the best price available. The skyrocketing vacancy rates in shopping malls are in part due to the reality of the mobile native. The ability to shop for a product online while in a physical store removes many of the advantages a physical store once provided to the retailer. If the product can be found cheaper, barring the impact of some other attribute of the shopping experience, such as a distinctive service level unique to the brick-and-mortar vendor, the mobile native will “showroom.”
Retail is but one example of the impacts on business of the mobile native generations. The mobility trend in general is presenting enterprise organizations with great opportunities and challenges. From selling to, employing and servicing, mobility is changing all types of relationships.
As I struggle to find an interesting way to end this post, the trite expression “May you live in interesting times” keeps popping to mind. As I often do, I became curious about the origin of this term. I Googled it, only to learn that the etymology of the phrase is enigmatic.
Many erroneously attribute the quote to Confucius. In his travels in China, Nicholas Kristof found few who had ever heard the expression. Others explain that the original meaning may be the opposite of a gesture of optimism or blessing; it was, they contend, a curse “heaped upon an enemy.” The more I think about this, the more I find this saying to be a fitting analogy for the challenges that enterprise organizations face in coming to terms with the mobile-native generations.
As the origin of the “interesting times” phrase seems lost to history, so too does the solution to the mobile native culture for the enterprise organization remain obscured by the future.
What are the most popular types of cyber attacks? The answer changes constantly.
By identifying the most popular attacks in Q2 of 2016 in the chart below, the Security Engineering Research Team (SERT) has taken the guess-work out of knowing what you’re up against.
Popular variations of a web application attack for this quarter include SQL injection, aka SQLi (45% of all web app attacks) and cross-site scripting (XSS).
SQLi occurs when an attacker inserts (or ‘injects’) a malicious SQL statement into a form on the targeted website. By doing this, the cyber attacker can get information from the company’s database, including customer information and credit card data.
Cross-site scripting is a web application vulnerability that hackers can exploit to push scripts and other information onto the pages of a victim’s website.
In order to prevent cyber attacks such as these, app developers must know how to secure and maintain their code. Adding a captcha or a web application firewall can also help ward off hackers.
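To illustrate the point about securing code, here is a hedged TypeScript sketch contrasting a vulnerable string-concatenated query with a parameterized one. The `db` client, table, and column names are hypothetical stand-ins rather than a specific library's API.

```typescript
// Illustrative only: a parameterized query keeps user input out of the SQL text,
// so a value like "x'; DROP TABLE customers;--" is treated as data, not as SQL.
interface Db {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}

// Vulnerable: user input is concatenated directly into the statement.
async function findCustomerUnsafe(db: Db, email: string) {
  return db.query(`SELECT * FROM customers WHERE email = '${email}'`, []);
}

// Safer: the driver binds the parameter separately from the SQL text.
async function findCustomer(db: Db, email: string) {
  return db.query('SELECT * FROM customers WHERE email = $1', [email]);
}
```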
Malware attacks come in all shapes and sizes, from viruses and worms to spyware and ransomware. No company is completely safe from malware.
Your best bet on minimizing your risk of a malware infection is to educate your users on being network-security savvy. Don’t open suspicious links or emails, and if you do, report it to the right people ASAP.
Application-specific attacks are exactly like their namesake. These cyber attacks target specific applications depending on the results of packet sniffing, which captures all of the data packets traveling through an application.
By using a packet sniffer, hackers can get information about a potential victim such as what operating systems they use, typical network traffic, and other applications and programs in use.
Attackers are then able to up their success rates by tailoring their approach to a specific vulnerability in a specific application.
DoS (Denial of Service) attacks happen when a hacker overloads and/or crashes a server by overpowering it with a multitude of requests.
DDoS (Distributed Denial of Service) attacks are similar, but are conducted with a larger network, typically known as a botnet.
Simple ways to combat DDoS/DoS cyber attacks are to filter traffic by region and protocol, detect flow anomalies, deploy dedicated DDoS mitigation, and update your disaster recovery plan with your client.
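One small building block behind such traffic filtering is per-source rate limiting. The sketch below is illustrative only; the window size and request threshold are arbitrary assumptions, and real mitigation typically happens upstream in dedicated appliances or services.

```typescript
// Illustrative sliding-window rate limiter keyed by source IP.
const WINDOW_MS = 10_000;              // look at the last 10 seconds
const MAX_REQUESTS_PER_WINDOW = 100;   // arbitrary threshold for the sketch

const recentRequests = new Map<string, number[]>();

function allowRequest(sourceIp: string, now = Date.now()): boolean {
  // Keep only timestamps inside the current window, then record this request.
  const history = (recentRequests.get(sourceIp) ?? []).filter(t => now - t < WINDOW_MS);
  history.push(now);
  recentRequests.set(sourceIp, history);
  return history.length <= MAX_REQUESTS_PER_WINDOW;
}
```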
If you store and manage your own data, there are a number of ways to protect yourself from an attack.
There are two types of reconnaissance attacks: passive and active.
Passive reconnaissance attacks are when an attacker looks for private information without engaging with the victim’s systems. Active reconnaissance happens when the hacker does engage with the victim’s system.
Sometimes both active and passive reconnaissance are called passive, since neither are actually exploiting a victim, but are instead are collecting data in preparation for a larger attack.
Preventing these attacks can be as easy as having a strong firewall and IPS in place.
If you’re an ISV that doesn’t develop Ed Tech applications related to curriculum, you may think the education vertical market doesn’t hold any opportunity for you. If you look at a school system or university more like businesses in other vertical markets you serve, however, you may see many opportunities.
Take a look at the following examples of how emerging technologies are making their mark on education, but also how they can make an impact on the business of running a school.
Virtual Reality and Augmented Reality
VR puts users inside a virtual world. AR overlays virtual objects in the user’s world. It’s easy to see how these applications can aid learning. The possibilities are only limited by the places students would want to go or things they’d like to see — to experience life in another country or back in history, to travel inside a volcano or into space, or examine organs inside a human body.
But VR and AR also have practical applications for schools. Teachers’ time is limited and often, unfortunately, leaves little room for addressing multiple learning styles within a classroom. AR or VR can enhance learning for students who learn better by seeing, interacting, or (virtually) doing so they can excel along with students who learn from more traditional classes where lessons are spoken or read. This can improve outcomes without hiring additional staff. In universities, medical schools or trade schools, VR and AR can be a cost-effective, practical way to provide “hands-on” training.
You may immediately picture robot teachers, but AI has more practical applications in education. Educators can use AI to analyze data for insights into student progress or the effectiveness of teaching tools. In addition, virtual assistants powered by AI can handle some of the more mundane tasks during the day to allow educators to focus on students and outcomes. It may also equip schools with better assistive tools for students with disabilities.
A blockchain is a distributed database that’s hosted by multiple computers and that’s continuously shared and synced. All users’ identities are validated and a user can’t modify a record without the other participants knowing it. Blockchain may have value in the classroom for instruction and test-taking, but blockchain is a prime choice for managing diplomas, certificates, and grades. It has the potential to save hours of time answering requests from prospective employers and other authorized parties to confirm credentials —and it can also eliminate the problem of false credentials.
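As a rough illustration of the credential-verification idea, the sketch below assumes a school publishes a hash of each issued credential to a shared ledger, and a verifier recomputes and checks that hash. The record fields and the in-memory `ledgerHashes` set are stand-ins, not a real blockchain implementation.

```typescript
import { createHash } from 'crypto';

// Sketch under stated assumptions: the issuer records credentialHash(record)
// on a tamper-evident shared ledger; a verifier recomputes the hash of the
// record an applicant presents and checks that it appears on the ledger.
interface CredentialRecord {
  studentId: string;
  degree: string;
  issuedOn: string;   // ISO date
}

function credentialHash(record: CredentialRecord): string {
  return createHash('sha256').update(JSON.stringify(record)).digest('hex');
}

// `ledgerHashes` stands in for the distributed store shared by participants.
function verifyCredential(record: CredentialRecord, ledgerHashes: Set<string>): boolean {
  return ledgerHashes.has(credentialHash(record));
}
```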
Internet of Things technology connects devices, sensors, people, places, and other things to create a connected network that shares data and supports visibility and automation. IoT can support interactive, personalized learning, enabling individual students to access what they need, whether it’s remedial help or enrichment.
It seems that the most intriguing benefits of IoT for a school district or university campus, however, is enabling a smart school environment. A connected system can monitor and maintain resources, conserve energy, secure a school or campus with automated locks and other physical security measures, validate student ID, and even provide real-time visibility into attendance and student whereabouts.
A Word about Securing Technology for Education
The applications you develop for education need to have built-in security. The education vertical is a target for cyber attack. In March, hackers attacked more than 144 universities in the US, along with 176 schools in other countries and 47 private companies to steal 3 TB of intellectual property valued at about $3 billion. The hackers used spear phishing to entice university officials to click on links or enter information. The security solutions you develop or build into your applications for education need to focus on ways to keep IP, student privacy, and school networks safe.
Solve a Problem, Show ROI
You may be hesitant to approach prospects in the education vertical due to concerns over budgets, the need for board approval, or competition. But, as CompTIA points out, schools are still looking for the benefits technology has to offer. Study your target audience and its pain points and offer solutions that will show quick ROI — measured in dollars and cents or in better outcomes for students.
Your solution may not help kids master geometry, but it may be just what the school needs to solve an administrative problem, automate day-to-day processes, or improve security. What can your ISV offer the education vertical?
NIST 5G Research Aims to Aid in Millimeter Wave Antenna Design, Slashing Potential Costs
The National Institute of Standards and Technology (NIST) has devised a new way of evaluating and selecting optimal antenna designs for 5G phones, other wireless devices and base stations. The NIST 5G research could help boost 5G wireless network capacity and slash costs, the institute said.
Some 5G systems will use higher, millimeter-wave frequency bands, but transmissions at these frequencies lose plenty of energy along the way, weakening received signal strength. One solution to combat this signal loss is “smart” antennas that can form unusually narrow beams and rapidly steer them in different directions.
NIST 5G Research
According to NIST, its millimeter wave antenna work is the first detailed measurement-based study of how antenna beamwidth and orientation interact with the environment to affect millimeter-wave signal transmission. NIST measurements covering a broad range of antenna beam angles are converted into an omnidirectional antenna pattern covering all angles equally. The omnidirectional pattern can then be segmented into narrower and narrower beamwidths, enabling users to evaluate and model how antenna beam characteristics are expected to perform in specific types of wireless channels.
NIST says its new measurement-based method enables system designers and engineers to evaluate the most appropriate antenna beamwidths for real environments.
“Our new method could reduce costs by enabling greater success with initial network design, eliminating much of the trial and error that is now required,” NIST engineer Kate Remley said in a prepared statement. “The method also would foster the use of new base stations that transmit to several users either simultaneously or in rapid succession without one antenna beam interfering with another. This, in turn, would increase network capacity and reduce costs with higher reliability.”
An engineer could use the method to select an antenna that best suits a particular application. For example, the engineer may choose a beamwidth that is narrow enough to avoid reflections off certain surfaces or that allows multiple antennas to coexist without interference.
The process industry has spent billions of dollars on robots and robotic automation, but are these monoliths flexible enough for today’s manufacturing requirements, will they eventually take our jobs and will the human-less factory really become a reality?
it's not new, now it's just different
Robot and human workforces are not new:
The earliest known industrial robot conforming to the ISO definition was completed by “Bill” Griffith P. Taylor in 1937 and published in Meccano Magazine; industrial robots were widely accepted and in use by the 1970s.
While our ancestors have been around for about six million years, the modern form of humans only evolved about 200,000 years ago.
Lights-out automated factories are not new either; in fact, they date back to the turn of the century, with automated production lines operated by robots, like those at FANUC, which has been operating a ‘lights out’ factory since 2001 with robots building robots unsupervised for as long as 30 days.
Process manufacturers too have been using robots and automation for decades to enable factories to minimize time and maximize yield and quality.
In the last 20 years I have seen automated machines, that can:
- measure and sort individual grains of rice,
- grade fruits and vegetables by size, color and ripeness,
- select and reject discolored or misshapen chips,
- fillet fish,
- peel pineapples,
- peel and shape carrots,
- provide cameras above lines for label verification
The list is long, and process manufacturers have been at the forefront of robotic technology because they had to be. Small changes in yield, speed or quality have a dramatic effect on the bottom line, so robotic automation helped to provide that consistency of product and throughput.
So, what’s changed?
Waste, Yield and Variety
Things are different now. Waste, flexibility and variety are taking center stage in the products that consumers demand, and that’s where robotic automation begins to struggle in many of the process manufacturing sectors.
What’s the big problem with robots? Well, robots have limitations…
Robots are not cheap. Automation lines run into the hundreds of thousands of dollars, and the tasks they perform are usually highly focused, inherently inflexible, and designed around a range of products. With variety on the rise, different packaging options are now often supported by human intervention as part of the process to overcome line inflexibility.
Handling irregular shaped products or raw materials is often mitigated by using humans to first orientate the product for the Robot to perform its task. However, advances in gripper technology, machine learning and artificial intelligence will inevitably help to rectify some of the weaknesses of today’s Robots.
There are examples of success with lettuce-peeling robots, but a closer look at the numbers isn’t so exciting: 27 seconds to peel and only a 50 percent success rate. A breakthrough nonetheless.
The other problem is that in many situations robotic lines are a compromise on yield. For example, hand filleting a fish produces 6 percent more yield than machine filleting. Preparing vegetables is often much worse, as the product shape varies but the machine settings are often static: cut deeper and remove more to ensure quality and consistency!
Robots are also very powerful and fast moving, and they need to be kept separated from humans, in cages or in isolated areas using proximity sensors, for the health and safety of the workforce.
So if these monolithic solutions will not offer the flexibility and dexterity needed, what’s wrong with taking a “just throw human labor at it” approach? Well, we cost more. While the initial investment is low, perhaps a coat, hat and gloves, maybe a mobile phone, after that it’s a wage bill, paying for humans not to be at work and even to go on holiday. Humans also suffer from fatigue and need to have breaks. There is also an anticipated shortage of workers, which is predicted to get worse.
what’s the solution?
Cobots were invented in 1996 by J. Edward Colgate and Michael Peshkin, professors at Northwestern University. A 1997 US patent filing describes cobots as “an apparatus and method for direct physical interaction between a person and a general purpose manipulator controlled by a computer.”
Cobots or Human/robot collaboration (HRC) is becoming more prevalent as agile lightweight robots with dexterity and flexibility enter the market. These cobots are less dangerous than classic caged robots as their power is low and their speed of motion slower. The benefit is that these robots do not need to be in cages but can be integrated into the production environment.
Cobots are perfect for carrying out monotonous and ergonomically adverse tasks, doing this repetitively without making mistakes.
Typical fields of application are:
- handling of materials and products between different process stages
- pick, place and deposit applications
- ‘learn to follow’ applications, where the Cobot is taught to move along a predefined motion path, for example, when cutting or decorating pastries and cakes.
So Cobots have the advantage of the robot. They don’t get bored, but they have a much lower initial cost and are flexible and can be moved around and configured for different tasks. They don’t have to be kept in cages and they can work alongside humans. Cobots are becoming the most cost-effective solution for today’s flexible factories. They can perform the repeatable part of the process with accuracy and speed and the humans can manage the dexterity tasks, a marriage of bio and mechanical technology working in harmony.
If you enjoyed this blog then please look out for others at IFSBlogs.
I welcome comments on this or any other topic concerning process manufacturing.
The Covid-19 pandemic has affected multiple areas of our lives, so it was probably inevitable that this event would also be leveraged by cyber criminals in their insidious attempts to extort money from the public. Many cybersecurity professionals knew this was coming, and slowly, with Covid-19 appearing in the names of phishing and other types of cyber tricks, their suspicions were confirmed.
However, even after almost 12 months into this pandemic, it is probably too early to determine the full impact of Covid-19’s influence on the world of cybersecurity. There are still too few data to form a clear picture, even though most of us have received at least one scam email containing Covid-19 in the title. The real question is how many of those emails have been effective in eliciting responses from those to whom they were sent and how many were either caught by virus-tracking software, or manually deleted by informed recipients. The continuing campaign of impressing on Internet users not to click on links inside suspicious emails is a well-worn crusade, and it would now be unusual to hear of people opening suspicious emails, but sadly it does still happen.
However, beyond the slew of Covid-19-related emails, what else is happening? To answer this question, Cynet, a leading global supplier of cybersecurity solutions has produced a report on this topic. Click here to view a copy of this report, which contains an analysis of cybersecurity attacks they have witnessed across Europe and North America throughout 2020. It details the attack surfaces across various industries including an increased focus on the targeting of specific personnel in a company, normally called spear phishing. A variation of this attack is called “whaling” where the attacks target senior managers and directors in a company. Hackers use email as the attack vector of choice, using this channel to install malware on company networks.
Cynet’s report indicates that over 50% of cyberattacks use email distribution as a means of access with the balance attributed to weaponized documents that contain macro features such as Microsoft Word and Excel files. When those documents are opened, macros run automatically, installing the target malware. The report adds that while traditionally the number of new techniques applied by hackers was 20% of all cyberattacks, during the current pandemic, the number of novel attacks has almost doubled to around 35%. This presents a significant challenge to those companies and services in the business of mitigating such attacks.
A further observation noted in the Cynet report is the increased incidence of companies and businesses turning to their own and external cybersecurity teams to monitor and report attacks on their networks, which has increased by two to three times pre-pandemic. This could possibly be as a result of the constant media reports of large companies being brought to a standstill by skilled and ruthless hackers. Or perhaps the general perception of cybersecurity has changed from being a niche concern and activity to going mainstream.
On the negative side, many businesses still do not possess adequate protection against determined cyberhackers and they are essentially sitting ducks for such attacks. But many have recognized the extant threats and have turned to external Managed Detection and Response (MDR) services that can provide 24×7 cyber defense cover. Others have allocated increased funds to their own Extended Detection and Response (XDR) teams for a faster, more controllable, and less expensive solution. Essentially, a combination of both of these channels is recommended for comprehensive cybersecurity protection.
Click here to read Cynet’s Covid-19 report. | <urn:uuid:a3712ef3-c856-4554-9ae1-0a993463e2a5> | CC-MAIN-2022-40 | https://www.cynet.com/blog/covid-19s-impact-on-cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00377.warc.gz | en | 0.978843 | 718 | 2.78125 | 3 |
You can hardly open any technology or business journal, website, or newspaper today without hearing some commentary on cloud computing-what it is and how it will change IT and business. Cloud computing's impact will continue to be felt for many years, regardless of how it all comes together in the end. At the same time, no single definition of cloud computing exists, even as it is talked about, planned, and even implemented in today's enterprise networks. Definitions of cloud computing and notions of "the cloud" are extremely ambiguous and difficult to nail down. Few vendors are willing to step beyond the marketing hype and "cloud washing" to present a perspective of what true cloud computing represents, what currently exists, what is missing, and the characteristics required for enterprise adoption of this dynamic and powerful change in computing ideology.
Cloud computing represents not a revolution but an evolution of existing enterprise computing architectures, dating back to the first instance of networked computing. The difference is that today there are vast advances in virtualization in nearly every aspect of the data center. There has also been an emergence of a dynamic understanding and need to control what, how, and when the cloud provides services to the consumers of those services. This new dynamic paradigm must be able to intercept application and data traffic, interpret the current context, and instruct the cloud infrastructure on how to most efficiently deliver the request.
The cloud is a holistic ecosystem of components, not a point product or single vendor solution, and has basic, specific requirements to meet the needs of enterprise organizations. These requirements include scalability, adaptability, extensibility, and manageability. In addition, the cloud must exhibit additional capabilities that address the best-in-class requirements of the enterprise-such as providing for security, real-time availability, and performance.
The question that remains, however, is what does this new dynamic computing architecture look like and what is required-above and beyond the standard tools we have today-to qualify as a "cloud"?
An important strategic consideration is the integration of all the pieces of the infrastructure to create the cloud. This includes everything from the bare metal to the users to all of the elements in between. In addition, there are different ways to view the interaction of various operations within the architecture, depending on your role.
The cloud computing architecture is built upon several functional component blocks (for example, compute resources or deployment environments), which are organized into specific layers of a pyramid. The width of these layers represents the depth of technical expertise required to build and/or deploy that layer. The pyramid layers are roughly synonymous with the notions of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). At the apex of the pyramid are users accessing the applications; in the center is a dynamic control plane that traverses all others and provides real-time connectivity, information coordination, and flow control between the layers.
In order to maximize the value of cloud architecture, each component must exist in some state or another. For example, dynamic control plane elements are a requirement at every layer of the cloud architecture in order for cloud environments to be operationally efficient and on demand. This requires a level of automation and orchestration that can only be achieved by integrating components across the architecture. Without this capability, organizations cannot realize the benefits associated with the model. If any of the core components is not implemented, such collaboration will fail and true cloud architecture cannot be achieved.
The components required to build the cloud are similar to the components required to build a traditional architecture; the difference is how they integrate, communicate, and act.
Certainly, a fairly strong argument can be made that web-hosting services from a decade ago represent the first implementations of software as a service (SaaS). Some will even argue that it represented the first platform as a service (PaaS) solution as well-providing an HTML platform to build custom applications. Many of the same people will suggest that the rack-n-power providers of the dot-com boom provided infrastructure as a service (IaaS) solutions.
Cloud computing has evolved from a single server being provisioned for a single customer, to a hosting provider and then to a business continuity and disaster recovery provider. Along the way, technology overcame physical limitations with devices like load balancers, WAN optimization, compression, caching, and content delivery networks (CDNs). We learned to integrate devices: we built APIs, consolidated racks of servers into racks of blades, and learned to provide automated provisioning. Cloud architecture is simply the logical conclusion of this decade's long evolution. We now have unprecedented levels of virtualization of hardware, software, network, and storage and are on the verge of putting it all together.
So, what is that final threshold, what is the difference between a cloud and falling short of the cloud?
Traditional traffic and computing systems often break processing into two discrete components: the data plane and the control plane. The data plane is concerned with the basic process of getting data-be it input from a system or, in our case, requests from users-and returning data (output, files, or responses). The data plane is the basic connectivity that handles traffic flow to and from destinations. The control plane, on the other hand, is more concerned with managing that data in response to context and policy; it changes the "how" of the data plane.
The core idea of cloud architecture is to connect users-who might be mobile and moving between LANs, WLANs, and Internet connections-and services to the applications they consume, which can also move between cloud centers based on different needs of the business. As hardware resources and servers are spun-up or decommissioned, as applications are moved from development to production, or as entire applications are moved from the internal data center to a cloud provider, the cloud architecture requires a dynamic control plane that monitors the data and ensures that it is constantly connected in the best possible manner. The dynamic control pane must be able to Intercept traffic as it traverses the cloud, Interpret the data, and Instruct the cloud architecture on how to efficiently connect the user to the appropriate application instance.
The dynamic control plane must be in a position to have visibility into all traffic between the user and the application and across the entire cloud platform. Without the ability to intercept traffic and data requests, the dynamic control plane cannot appropriately do any of the other things it needs to do. Not only must it be able to see the actual flow, it must also be able to intercept the metadata or context of the traffic. The dynamic control plane must have visibility into the data plane and all components that operate within the data plane.
Having all the information about data and application flow is not enough. The dynamic control plane must have the ability to understand the elements of context in relation to the individual request, business policy, and other application and cloud traffic. The dynamic control plane must constantly evaluate the context and policy to make intelligent decisions at any given moment.
Once the dynamic control plane has all the available information and has analyzed the context, it must instruct the architecture on how best to connect the two endpoints. The dynamic control plane must also communicate with the infrastructure-the data plane-to change the current delivery model to meet the needs identified. This might require sending requests to a new instance of the application or to a new data center, changing compression and encryption settings, or even instructing other components in the architecture to create or destroy resources necessary to delivering that application or data. It might also be necessary for the dynamic control plane to simply deny access based on the policies and context at any given moment.
These three things necessitate the integration that we alluded to before and underscore the necessity for the inclusion of the dynamic control plane interface at each level and within each component of the cloud architecture. Whether it is a native, purpose-built integration or simply an open standard that can be used by the consumer, the dynamic control plane must be integrated in order to fully intercept, interpret, and instruct or it is not a cloud. The greater the capability to provide these services, the greater the ability for the dynamic control plane to operate intelligently and entirely on its own without manual, human intervention.
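A minimal sketch of this intercept/interpret/instruct loop might look like the following. The context fields, policy rules, and instance names are hypothetical; a real control plane would evaluate far richer context (identity, location, device, application health) and drive many more components.

```typescript
// Hypothetical context gathered at intercept time for a single request.
interface RequestContext {
  user: string;
  sourceNetwork: 'lan' | 'wlan' | 'internet';
  application: string;
  currentLoad: number;          // observed load on the preferred instance, 0..1
}

// Instructions handed back to the data plane.
interface RoutingDecision {
  targetInstance: string;
  compress: boolean;
  encrypt: boolean;
  allow: boolean;
}

function interpretAndInstruct(ctx: RequestContext): RoutingDecision {
  // Interpret: evaluate the current context against policy at this moment.
  const overloaded = ctx.currentLoad > 0.8;
  const untrustedPath = ctx.sourceNetwork === 'internet';

  // Instruct: tell the data plane how to deliver this particular request.
  return {
    targetInstance: overloaded ? `${ctx.application}-cloud` : `${ctx.application}-dc1`,
    compress: untrustedPath,          // compress over slower external links
    encrypt: untrustedPath,           // always encrypt off-premises traffic
    allow: ctx.user !== '',           // deny anonymous requests, per policy
  };
}
```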
Built from core components that include compute resources and management resources, the base layer of the cloud architecture requires the most technical competence to build and deploy. This is the very foundation upon which a cloud is built and, as suggested, is made up of the components most often supplied by vendors who provide IaaS solutions to their customers.
Infrastructure as a Service (IaaS) is a cloud computing model based on the premise that the entire infrastructure is deployed in an on-demand model. This almost always takes the form of a virtualized infrastructure and infrastructure services that enables the customer to deploy virtual machines as components that are managed through a console. The physical resources-servers, storage, and network-are maintained by the cloud provider while the infrastructure deployed on top of those components is managed by the user. Note that the user of IaaS is almost always a team comprised of IT experts in the required infrastructure components.
IaaS leverages the dynamic control plane to enable on-demand scalability through the rapid and automatic provisioning of compute resources. In the case of a virtualized architecture-the most common form of IaaS architecture-this involves the automatic deployment and launch of new instances of a virtual machine. The amount of instances launched will match the amount needed to meet capacity, with the expectation that these instances will be decommissioned as demand decreases.
In this layer of the architecture, each component is responsible for providing actionable data to the other components and performing specific tasks to successfully execute an auto-provisioning or decommissioning scenario.
IaaS is often considered utility computing because it treats compute resources much like utilities (such as electricity) are treated. When the demand for capacity increases, more computing resources are provided by the provider. As demand for capacity decreases, the amount of computing resources available decreases appropriately. This enables the "on-demand" as well as the "pay-per-use" properties of cloud architecture.
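As an illustration of this utility model, the sketch below reconciles the number of running virtual machine instances against current demand. The `CloudApi` interface and the capacity numbers are assumptions standing in for a real virtualization manager's API, not any particular provider's SDK.

```typescript
// Hypothetical provisioning API exposed by the virtualization layer.
interface CloudApi {
  runningInstances(app: string): Promise<number>;
  launchInstance(app: string): Promise<void>;
  terminateInstance(app: string): Promise<void>;
}

// Scale the number of virtual machines toward current demand.
async function reconcileCapacity(
  api: CloudApi,
  app: string,
  requestsPerSecond: number,
  requestsPerInstance = 200,   // assumed capacity of one instance
  minInstances = 1,
): Promise<void> {
  const desired = Math.max(minInstances, Math.ceil(requestsPerSecond / requestsPerInstance));
  const running = await api.runningInstances(app);

  if (running < desired) {
    for (let i = running; i < desired; i++) await api.launchInstance(app);   // scale out
  } else if (running > desired) {
    for (let i = running; i > desired; i--) await api.terminateInstance(app); // scale in
  }
}
```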
Compute resources are one of the most basic components of the cloud-bare-metal resources such as CPU, memory, and disk-that ultimately power applications built within the cloud. This might be a hosting service provider with hundreds or thousands of installed server systems waiting to be used by subscribers or it could be a single blade-chassis with extremely dense resources designed for virtual segmentation.
This layer typically conjures the image of a traditional server. Of course, today's systems are much more complicated and versatile. They can have massive numbers of processing cores and memory that can be carved into virtual systems; auto-provisioning network interface cards that can dynamically be configured from 10 Mbps to multi-gigabit speeds; and both direct-attached and network-attached storage systems to support the needs of the application software that will eventually reside on top of it.
Management resources are the components required to turn bare metal into usable server platforms with the appropriate CPU, memory, and disk resources necessary to support the applications that will be built upon them. Management resources are also responsible for continuing to monitor resource needs and ensuring that the application receives all the compute resources it requires-moving the application or finding additional resources when it does not.
This component is most often synonymous with virtual machine management or software provisioning systems, which take the bare metal and apply operating systems, patches, and application logic, as well as higher-level network connectivity (IP addressing and more).
It is necessary to share information between the layer of compute resources and the management resources so as not to waste compute resources that could be better used by another application.
Many applications are built on software platforms that run on top of infrastructure services. These platforms can be environments such as Oracle or ASP.NET and provide a convenient way for businesses to build custom applications without worrying about the details that lie beneath the platforms. While many platforms are based on standards-for example Java EE-others are proprietary in nature, including Google AppEngine and architecture frameworks developed and deployed by enterprise architects.
Platform as a Service (PaaS) is a cloud computing model in which a specific development and deployment platform-for example, Java EE, IBM WebSphere, Oracle, Google Apps, .NET, BizTalk-is the basis for deployment. These clouds are proprietary in the sense that only applications developed for the specific platform can be deployed in the cloud.
PaaS is a kind of framework computing in that the platform provided is the core framework in which applications are specifically developed. These applications will not run on any other platform, and often include platform-specific extensions or services, such as Amazon's SimpleDB, that cannot be ported to other environments. The concept of framework computing comes from architectures in which a layer of capabilities and services are provided that abstract (and insulate) developers from the underlying details. This approach leads to more rapid development and deployment of applications.
All platforms require a development environment in which the applications are designed, built, tested, and validated-outside of the production environment. These development environments can be traditional integrated development environments (IDEs) that are configured to deploy to resources within a PaaS environment, or they can be integrated directly as part of the PaaS offering. Microsoft Visual Studio-derived environments are capable of connecting to internal as well as external instances of Microsoft-specific platforms, thus enabling "offline" development of applications to be deployed in a PaaS environment. Increasingly, less standard and more proprietary offerings-those that are wholly dependent on resources that exist only in the PaaS environment, such as Salesforce.com's Force.com-provide a PaaS-hosted development environment through which developers can build, test, and deploy their solutions.
A second required component is the ability to deploy the application into production once it is ready to be consumed by end users. This ability is essentially the run-time environment in which the applications are deployed. The difference between the PaaS run-time environment and that of hosted or even traditional enterprise-deployed platforms is the expectation of on-demand scalability associated with PaaS that does not exist in other incarnations. This on-demand scalability can result from deploying the environment on a generic IaaS, or from specifically building out and connecting the required development, deployment, and dynamic control plane. The latter results in the creation of a platform-specific IaaS, with the required components arranged specifically to provision and decommission resources based on the unique needs of the deployment environment.
At the top of the pyramid is general business computing. This is where many organizations-especially business organizations-find themselves: able to identify a business need, but unable to build an application or the infrastructure upon which it runs. Instead of relying on an internal IT organization to build and/or deploy infrastructure and platforms, business stakeholders simply select an application and run it. Most organizations choose this option because the capital, operating expenses and hours required to implement standardized applications are not financially feasible, not an efficient use of IT resources, or simply beyond the capabilities of the organization.
Software as a Service (SaaS) is a cloud computing model in which pre-built applications (such as CRM, SFA, word processing, spreadsheets, and HRM) are offered to customers via a web browser or other local interface such as a mobile device application. These applications are generally customizable, though the customer need not be concerned with the underlying infrastructure or the development platform or the actual implementation.
Despite the appearance that applications deployed in a SaaS cloud architecture are merely hosted applications-for example, the ASP model of the "dot com/dot bomb" era-these business computing-focused architectures are often built on a PaaS deployed upon an IaaS. While the platform upon which the application is deployed might be hidden from the consumer, it is the source of the multi-tenant properties of SaaS and the customization capabilities offered to its consumers. The underlying platform for Salesforce.com, for example, was exposed in recent years as a separate PaaS offering called Force.com, providing customers with additional options for building custom solutions.
Applications are the only component here. Whether the application is the result of building utility computing, followed by a platform, or it is simply an application deployed on a server, this is what users interact with. Users do not care how it was built, where it resides, or the compute resources required to deliver it. They simply expect it to be available when they want it, responsive and well-performing enough to be useful, and expect it to be secure regardless of where, when, and how they access it.
As we move up through the pyramid from the building blocks of infrastructure to the pinnacle of the application, the skill and knowledge necessary to build the components decreases. This is simply because each layer can be built on top of the previous without having to fully understand the underlying layer. An organization with limited infrastructure skills can readily purchase IaaS from a vendor and build their own platform (or several) upon that infrastructure without needing the expertise to completely build the infrastructure from scratch. This has been happening for years in the managed hosting business. The organization does not have to employ hardware or networking experts and therefore requires less in-house technical expertise.
Ideally, an IT organization will build a set of services that fit the needs of the business organization. They can build an IaaS solution within their own data center, a PaaS (or several) on top of that and even deploy ready-made applications in a cloud context, satisfying the needs of the business in an agile way. This enables them to react quickly to the ever-changing needs of the business.
In a similar manner, business units can now deploy solutions based on their needs and level of technical competency. They can deploy SaaS solutions via an external cloud provider or rely on internally available solutions; or they can build apps upon platforms or deploy their own IaaS solution.
The cloud architecture enables organizations to deploy solutions that naturally meet at the intersection of IT and business. Given the dynamic mappings between the applications and the resources-regardless of whether the IT organization built the application from the ground-up to meet the needs of the business or the business simply deployed their own solutions-cloud architecture enables them to seamlessly integrate at the most appropriate point for the organization. The organization, therefore, is able to maintain various elements of control (for example, security and compliance) while still providing the maximum level of agility to the business.
This newer dynamic cloud architecture may bring a wealth of new benefits, but it must still satisfy the basic mandates of enterprise IT and, if the internal systems are going to mesh with external ones, it must provide for consistent and reusable methods of interconnecting the divergent implementations at the cloud providers as well. In order to achieve mass adoption by IT, the cloud and the dynamic control plane must be an enterprise-class solution and not just theoretical or a hodge-podge of workarounds and unique implementations.
When originally discussing the concepts of Infrastructure 2.0 (now an ongoing working group looking at formalizing the requirements of cloud infrastructure), there were several key components identified as being necessary to create and maintain this dynamic control plane, a critical element of Infrastructure 2.0. Among these were an infrastructure that was scalable, adaptable, extensible, and manageable. However, there are additional fundamental needs of enterprise IT that must be taken into account. Any component that provides such comprehensive involvement in the applications and data must also be secure and be able to operate in real-time; it cannot degrade security or impede performance.
The dynamic control plane must always be available to connect users to the appropriate resources and it must do so in real-time without impacting performance. Despite the fact that the dynamic control plane needs to mediate and account for every user session and the movement of each application connection and each data access in order to be enterprise-ready, it must do so with little to no additional latency. As was previously stated, the application and the user experience of that application is the final, ultimate goal of any IT architecture. Correctly balancing that user experience with the controls and policies required by the business is the ultimate goal of cloud architecture.
Cloud computing is not a revolution. It is an evolution that has been ongoing for well over a decade, if not since the very beginning of electronic computing. The cloud is simply an architectural model that employs many of the same components used in datacenters around the world today in a more flexible, responsive, and efficient way. The primary difference is in how these components are tied together with a dynamic control plane which helps enlighten and inform the architecture about the rapidly changing requirements of today's applications, data and clients.
The dynamic control plane must be able to Intercept traffic as it traverses the cloud, Interpret the data and Instruct the cloud architecture on how to efficiently connect the user to the appropriate application instance. However, in order to be truly ready for enterprise deployment, it must also be scalable, adaptable, extensible, manageable, and secure with real-time performance. And in order to support this dynamic environment the cloud must be built with these ideals in mind, with each component-such as IaaS, PaaS, SaaS, users, and applications-designed to work together and as part of the dynamic control plane.
There is no doubt that some concept of cloud computing will become the primary method of delivering business-critical applications in the coming years. There is little doubt that a move to cloud architecture will continue to provide the tools needed to better align business and IT. About the only thing that remains to be seen is whether the vendors and manufacturers can deliver an enterprise-ready dynamic control plane to bring the entire picture together to provide those benefits. | <urn:uuid:3f43952a-93ba-4219-9551-57452c5f77c6> | CC-MAIN-2022-40 | https://www.f5.com/ja_jp/services/resources/white-papers/controlling-the-cloud-requirements-for-cloud-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00377.warc.gz | en | 0.939946 | 4,489 | 2.515625 | 3 |
The increase in global smartphone ownership and the ease with which people can carry out everyday tasks in the palm of their hands is impressive to say the least: making financial transactions and mCommerce payments, organising local and international travel, or enjoying simple pleasures such as online gaming. It is a true success of modern technological capabilities. It's easy to believe that with such progress in the space of a few decades, the security protocols that run unseen in the background would keep the mobile environment safe and free from cybercriminals, correct? Yes and no. Although mobile is considered safer than desktop, mobile app security threats are a very real prospect that mCommerce merchants and financial institutions need to take seriously. In this blog post we focus on understanding device spoofing, for the simple reason that it can reveal some very suspicious user behaviors that are indicative of serious criminal activity.
What is device spoofing?
Put simply, device spoofing involves taking steps to make it appear as though a user (in this case, a fraudster) is a regular mobile user. They will first attempt to have a victim install malware through phishing and/or SMiShing attempts to gain valuable information from the victim's online accounts along with unique device information - this is one way of succeeding in account takeover (ATO). The fraudster will then imitate the user's mobile device in an attempt to bypass fraud detection and fool merchants, essentially throwing them off the scent while carrying out fraudulent activities.
Device spoofing attempts to alter the user's digital fingerprints - the aim of the game, as always, is for fraudsters to mask their true identities, intentions and device setups. There are many digital fingerprints that can be changed (we analyse thousands of pieces of data to uncover spoofing attempts). One of the most common alterations is making whatever the fraudster is really using look like a genuine smartphone or tablet. Device spoofing alone is not enough to mask identity, so other digital fingerprints such as GPS location, timezone data and IP addresses also need to be altered. But how is device spoofing performed?
Device spoofing attempts using emulators
As touched upon, altering digital fingerprints can involve some fairly basic steps. An average mobile device user, for instance, may use a VPN (virtual private network), and in itself this is not deemed suspicious, but rather an individual attempting to maintain a level of privacy and security. Where things become suspicious is when a user deploys a whole batch of spoofing techniques. The individual techniques sound simple enough, but the level of detail is often surprising, and the determination to hide unique identifying details can indicate fraudulent intentions. But let's look specifically at device spoofing and how it is performed.
- Emulators - used legitimately by developers, who need to mimic the mobile environment for which they are building an app when testing Android and iOS software. Emulators let developers verify that apps and features run properly while working on a desktop computer, without having to buy hundreds of mobile devices to confirm the app works without glitches. This ability to run numerous mobile environments at once caught the attention of fraudsters. Using emulators, they can spoof a device's make and model ID, but can also go into much more sophisticated settings, changing the graphics card information, CPU details, IMEI, unique Android and/or Apple ID, and even the version of the operating system. And they can emulate hundreds, even thousands of mobile devices at a time, creating mobile emulator farms. A regular mobile user would never need to take such steps unless they had fraudulent intent.
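As a rough illustration of how detection of emulator farms works, the sketch below scores a handful of device properties that are often inconsistent on emulated devices. The property names, example values and scoring are hypothetical; production systems combine thousands of signals rather than three string checks.

```python
def emulator_score(device: dict) -> int:
    """Count simple red flags that suggest an emulated rather than physical device."""
    score = 0
    # Generic build identifiers are a classic emulator giveaway.
    if "generic" in device.get("build_fingerprint", "").lower():
        score += 1
    # Emulators frequently report no IMEI or an all-zero IMEI.
    if device.get("imei") in (None, "", "000000000000000"):
        score += 1
    # A device with no battery or no hardware sensors is unlikely to be real.
    if not device.get("has_battery", True) or device.get("sensor_count", 0) == 0:
        score += 1
    return score

suspect = {"build_fingerprint": "generic/sdk_gphone_x86",
           "imei": "000000000000000",
           "has_battery": False,
           "sensor_count": 0}
print(emulator_score(suspect))  # 3 -> worth escalating for review
```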
With device spoofing, fraudsters will use unique information from compromised mobile devices (as mentioned previously, obtained using phishing attacks) in order to appear natural. But this alone is not enough, as they will also attempt to mask their true GPS location [applicable to iOS] and IP addresses, especially if they are located in a country or region that is known to be a hotspot for fraud. Spoofing true location will allow fraudsters to bypass any restrictions imposed by merchants or financial institutions that have blocked certain geo locations due to being a risky point of origin for high value (or high volume) fraud attempts. The same applies to timezone data, required to be changed in order to match the location of a genuine user’s account and their device's operating system. For example, a fraudster in Europe who has performed a successful account takeover (ATO) of a user in the United States will need to match their location/timezone settings in order to appear as much like the original account holder as possible.
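A simplified consistency check along those lines is sketched below. The mapping of IP-derived country to plausible timezones is invented for the example; real systems rely on full geolocation databases and correlate many more signals before raising a flag.

```python
# Hypothetical, abbreviated mapping of countries to plausible device timezones.
EXPECTED_TIMEZONES = {
    "US": {"America/New_York", "America/Chicago", "America/Los_Angeles"},
    "DE": {"Europe/Berlin"},
}

def location_mismatch(ip_country: str, reported_timezone: str,
                      account_home_country: str) -> list[str]:
    """Return inconsistencies between network, device and account data."""
    flags = []
    if reported_timezone not in EXPECTED_TIMEZONES.get(ip_country, set()):
        flags.append("device timezone does not match IP geolocation")
    if ip_country != account_home_country:
        flags.append("login country differs from the account home country")
    return flags

# A fraudster in Europe logging into a US account with a spoofed US timezone:
print(location_mismatch("DE", "America/New_York", "US"))
# ['device timezone does not match IP geolocation',
#  'login country differs from the account home country']
```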
Why is detection of device spoofing so important?
With fraud systems becoming more attuned to the threats posed in the online environment, successfully attacking them directly would require a high level of technical skill, knowledge and manpower (or bots). This can be very time consuming and may not succeed. Most fraudsters choose to follow the path of least resistance - aiming to bypass fraud detection systems. One of the best ways to do this is by acting as much like a normal user as possible after a successful ATO or after purchasing stolen credit card details from a dark web marketplace. Of course, to prevent suspicion, fraudsters need to hide their identities and digital fingerprints, which is where device spoofing comes in. With the professionalisation of fraud tools, and the ease with which they can be found and purchased in dark web marketplaces, it has become easier than ever to perform spoofing and mask digital fingerprints.
With the use of mobile devices for global mCommerce and financial transactions set to increase, merchants and financial institutions need to take mobile app security threats seriously. Firstly, the sheer volume of transactions currently being performed acts to mask the few bad apples hiding in an ocean of trustworthy users. Fraudsters see this as an opportunity to continue what they are doing with an increased chance of success, especially if individual users are not fully aware of the dangers of online fraud, or if merchants and institutions use ineffective rules-based fraud management systems that can easily be fooled.
The very real effect of device spoofing is to mask the fact that a genuine user's mCommerce or digital banking app has been compromised. This means that ATO, identity theft or stolen credit card information is what's truly being covered up by spoofing, not just the identity and location of a fraudster. The negative effects can be repeated, sizeable financial losses, especially if a company or institution becomes known among dark web fraudsters for ineffective fraud management, making it a popular target. Users of such services tend to trust that mobile app security threats are minimal, but if they become victims, they may blame the company for any financial losses they suffer. Theft, loss of customer trust, loss of custom and loss of revenue growth: there is a lot to lose if spoofing and fraudulent activities in general are not effectively detected and prevented.
How to detect device spoofing attempts to prevent mobile app security threats
If you've read this far and are perhaps worried that the level of sophistication of mobile app fraud attacks is on the increase, then you should also feel relieved that the response to the threats exists - it is advanced and effective at preventing fraud. Understanding digital fingerprinting and device spoofing attempts is a key feature of modern fraud detection and prevention. For example, Nethone's advanced solution analyses 5,000+ digital fingerprints automatically, passively and in real-time. The advanced capabilities are powered by artificial intelligence and machine learning (ML) models that can identify spoofing attempts; the fraud solution goes further by analysing behavioral biometrics to understand how the user is interacting with their device and the app service. Analysing digital fingerprints and behaviors together paints an overall picture that can weed out fraudsters with a high level of certainty that their intentions are indicative of fraud. The good news for eCommerce and mCommerce merchants and mobile banking providers is that all this analysis is performed completely unnoticed by regular users of a service, without any negative effects on the customer experience. All the signs for detection and prevention of spoofing are clearly visible, but only fraud solutions powered by artificial intelligence and ML models are fully effective at pointing you in the right direction.
If you wish to detect and protect your business by understanding device spoofing and prevent the risk of mobile app security threats impacting your revenue growth, we're here to help with the perfect solution. | <urn:uuid:ff824817-959a-4337-81ef-ce217b6aeef4> | CC-MAIN-2022-40 | https://nethone.com/post/device-spoofing-sign-of-serious-mobile-app-security-threats | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00377.warc.gz | en | 0.947037 | 1,735 | 2.53125 | 3 |
Account Hijacking is where a hacker compromises a computer account that does not belong to them. Often the hijacked accounts are email accounts, because they contain so much rich and valuable data. The hacker may then use the compromised account to impersonate the account owner and breach additional accounts of people the account owner knows, people who trust email from the account owner when it is received.
Generally speaking, account hijacking is done through phishing and social engineering attacks where a hacker sends a spoofed email message to a target and convinces them to log into a fake website which steals their account credentials. Other methods of account hijacking may include using a password guessing tool or simply purchasing exposed credentials on the dark web from previous successful website hacks such as those at Yahoo, LinkedIn, and Dropbox.
Oftentimes emails are linked to the user’s online identities at sites including social media accounts and financial accounts. Hackers can use the compromised account to steal the user’s personal information, perform financial transactions, create new accounts, ask the account owner’s contacts for money or help with an illegal activity.
None of these outcomes are what a user imagines when signing up for services online, so it is always important to be aware of the cyber threats we face every day.
What should you do as an SMB?
These Account Hijacking attacks are generally done through phishing attacks, the most common way hackers gain access to your accounts. These attacks make it easy for hackers, as victims essentially hand over their sensitive information to the hackers, or allow them into their network when employees click on a malicious attachment. The number one way to defend against phishing attacks is through cybersecurity awareness training. Below we have created a list of what can be done to defend against phishing attacks.
- Train your employees on how to spot, avoid, and delete phishing attacks.
- Test your employees with CyberHoot’s Phish Testing attacks; re-train those that fail your tests.
- Purchase and train your employees on how to use a Password Manager. If you visit a phishing website and try to enter your password credentials using a Password Manager, you will NOT be able to: the manager only auto-fills credentials on the legitimate domain where they were saved, so a look-alike phishing domain gets nothing.
- To protect the Internet from phishing attacks that use your domain name, set up SPF, DKIM and DMARC records so that receiving mail servers can reject emails masquerading as legitimate senders from your domain. | <urn:uuid:28a09d09-8cf5-4537-8a44-04ccecba7ad3> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/account-hijacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00577.warc.gz | en | 0.931764 | 507 | 2.734375 | 3 |
Intellectual Property (IP) refers to the ownership of a specific idea, design, manuscript, etc. by the person or company who created it. Intellectual property may give the person or company exclusive rights to it, meaning others may only use that property with the creator's permission. These rights are typically enforceable when the owner has successfully applied for and received a patent, design rights, or trademark on their IP. It is important to know that copyrights are inherent within the new ideas and design work of most individuals and businesses without having to apply for them. Just be sure to keep careful records of when your IP was created and copyrighted to be able to enforce your rights in the future.
Related Term: Critical Information
Source: Oxford Dictionary
CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like:
- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!
Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’. Sign up for the monthly newsletter to help CyberHoot with their mission of making the world ‘More Aware and More Secure!’ | <urn:uuid:94bdfd77-1efd-4760-9333-bf219c5cf667> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/intellectual-property/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00577.warc.gz | en | 0.934019 | 319 | 2.640625 | 3 |
With increasing concerns about data protection and privacy, there has been a lot of talk about the importance of enabling people to own their own data.
What does this mean?
Let’s take Facebook as an example. A user will register with the platform and fill out some basic information about themselves. After that they will likely start adding friends, posting updates, uploading photos, and so on. The problem here is that Facebook essentially owns your data, as in, your credentials, private conversations, likes, dislikes and posts, are all stored on their servers. What they do with that data, we don’t really know.
Now, new technologies and initiatives are emerging that are enabling users to keep their data separate from the applications that use them. Tim Berners-Lee, the inventor of the World Wide Web, is currently developing a project called Solid (Social Linked Data). The objective is to give each user their own "pod", which is where their data is located. The pod has its own ID, and the owner can store their pod wherever they like. Using their "web ID", the user can log in to supporting platforms and grant or revoke access to their data, as and when they choose.
Another technology gaining attention is digi.me. Digi.me is a personal data aggregation platform that enables Private Sharing of personal data by users who are given an app to download and personally hold their encrypted data securely. Organizations can use the platform to get better data about their users, obtained ethically and with full consent and complete privacy, which provides more value to consumers and brands. Digi.me never sees, holds or touches any user data.
If businesses still require access to sensitive data, you might ask, what’s to stop them from using this data inappropriately? In the same way that you can’t un-see or unhear something, you can’t take back your data after you have shared it with someone. This is a valid point; however, businesses will only be granted access to the data they need. For example, they don’t need access to your login credentials, nor do they need access to your private chat messages.
In many ways, businesses benefit the most from enabling users to own their data, as it’s one less thing for them to worry about. They no longer need to host, maintain or secure a database full of personal data. This will save them time and money in terms of minimizing data breaches and complying with regulations. Additionally, if they are not storing large amounts of personal data, there will much less incentive for hackers to target them.
This is great; however, there are more things we need to consider. Firstly, businesses will still need a data access governance solution to prevent employees from gaining unauthorized access to sensitive data, regardless of who owns it or where it is located. If they allow employees to access sensitive data, they will still need to enforce a strong password policy. They will still need to analyze user behavior, including who is accessing what data and when, and pay close attention to who is copying information from one place to another.
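A bare-bones sketch of such an access check, with hypothetical roles, resources and users, might look like this; real data access governance products layer classification, anomaly detection and alerting on top of the same idea.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Hypothetical least-privilege policy: each role sees only what it needs.
PERMISSIONS = {
    "support_agent": {"tickets:read"},
    "billing_clerk": {"invoices:read", "invoices:write"},
}

def check_access(user: str, role: str, action: str) -> bool:
    """Allow or deny an action and keep an audit trail of every attempt."""
    allowed = action in PERMISSIONS.get(role, set())
    logging.info("%s user=%s role=%s action=%s allowed=%s",
                 datetime.now(timezone.utc).isoformat(), user, role, action, allowed)
    return allowed

check_access("sam", "support_agent", "tickets:read")    # allowed
check_access("sam", "support_agent", "invoices:write")  # denied, and logged
```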
Technologies such as those listed above will no doubt play an important role in the future of data security, but they shouldn't be seen as a magic bullet. As with blockchain technologies, they can't prevent organizations from violating privacy laws. They can't protect organizations from ransomware attacks. They can't prevent a disgruntled employee from leaking confidential documents to which they have legitimate access. Data is data. Once it has left the hands of the owner, businesses will still need to secure the data in much the same way. | <urn:uuid:4d3a1b46-66cb-4395-8ff7-cd9ef33b653a> | CC-MAIN-2022-40 | https://www.lepide.com/blog/enabling-people-to-own-their-data-doesnt-mitigate-privacy-concerns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00577.warc.gz | en | 0.958107 | 764 | 3.03125 | 3 |
The red planet Mars is approaching closer to Earth than it has in fifteen years
The distance between Earth and Mars will be approximately 35.8 million miles on the coming Tuesday. On Friday, the Sun and Mars sit on exactly opposite sides of Earth, an alignment (known as opposition) much like the one that produces a lunar eclipse. The red planet appears even bigger and brighter than usual, making it easy to see.
The good news about all the dust of Red planet Mars is that it reflects sunlight, which makes for an even brighter red planet, said Widener University astronomer Harry Augensen.
“It’s magnificent. It’s as bright as an airplane landing light,” Augensen said. “Not quite as bright as Venus, but still because of the reddish, orange-ish-red color, you really can’t miss it in the sky.”
Mars' atmosphere is mostly carbon dioxide
The red planet's thin atmosphere is mostly carbon dioxide, and there is very little of it: surface pressure is less than 1 per cent of the air pressure here on Earth. Temperatures on the ground range from about 30 degrees Celsius down to minus 123 degrees. A day there is 24 hours, 39 minutes and 35 seconds long, and a year lasts 687 Earth days.
In 2003, Mars and Earth were the closest they had been in nearly 60,000 years, at 34.6 million miles (55.7 million kilometers). NASA said that won't happen again until 2287. The next close approach, in 2020, will be 38.6 million miles (62 million kilometers), according to NASA.
Observatories across the U.S. are hosting Mars-viewing events next week. Los Angeles’ Griffith Observatory will provide a live online view of Mars early Tuesday.
The event on Friday will be visible in Australia, Africa, Asia, Europe and South America. A total lunar eclipse occurs when the sun, Earth and moon line up perfectly, casting Earth’s shadow on the moon. Friday’s will be long, lasting 1 hour and 43 minutes. | <urn:uuid:044308a1-ad92-4341-b2f6-220ae08f1a98> | CC-MAIN-2022-40 | https://areflect.com/2018/07/31/latest-space-news-after-15-years-mars-much-more-closer-to-earth-today/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00577.warc.gz | en | 0.908755 | 420 | 3.75 | 4 |
Freak floods caused by downpours can cause devastating effects, but what’s that got to do with big data? Much like constant downfalls of water, there’s lots of data being generated — and it needs to go somewhere.
Just as a huge volume of water won’t be managed effectively without the correct infrastructure in place, neither will huge volumes of data. Yet ironically, you can use big data to prepare for flooding and other environmental or operational risks. Operational risks refer to any kind of disruption to normal processes that could lead to a loss of customers and revenue. While you can’t anticipate every possible eventuality — like a freak downpour — you can focus on getting the level of risk in your facility down to a tolerable level.
Accidents, in the broader sense, are rare events that occur when a series of risk management barriers fail. However, post-incident investigations often show that several near misses actually occurred before the accident took place. While these failures may not have been observable to the human eye, they could have been detected through rigorous data collection and analysis.
So, what exactly is ‘big data’? Given the buzzword’s widespread use across industries, it is not surprising that the exact meaning of the term is often subjective. As per the IBM Institute for Business Value, it really all comes down to four Vs: volume, variety, velocity and veracity.
Big data is, unsurprisingly, big. As the name suggests, big data relies on massive datasets, with volumes such as petabytes and zetabytes commonly referenced. However, these large datasets aren’t as difficult to collect as you might imagine.
New technologies are increasing the size of data sets that every device, facility and process generates. It’s growing at an exponential rate and bringing with it new challenges. Factories, as an example, are getting overloaded with data. Every machine, process and system on the factory floor will be generating data during the plant’s operation. However, these facilities have commonly become data rich, but information poor.
For example, technologies allow plant managers to extract data on the condition of mechanical equipment, such as a motor. However, tracking huge reams of data on the condition of a motor will only go so far. You need to use data, for it to be useful. What’s the solution? Keep your data collection streamlined by investing in a data management system. Here, you gain insight into the data you care about and use it in accordance with your risk management plan.
The second ‘V’ in the big data flood is velocity. Velocity refers to the accelerating speed at which data is being generated and the lag time between when data is generated and when it is accessible for decision making. Faster analysis leads to faster responses. However, today’s data is created at such a rate that it exceeds the capability of many existing systems.
Consider the motor condition monitoring as an example — you may be tracking 500 vibration data points per second to check its performance, but if your vibration analysis system is only able to analyze 200 data points per second, you have a problem. Ultimately, you need an entire big data infrastructure that is capable of processing this data quickly.
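One common way to cope with that mismatch is to aggregate at the edge, so the downstream system only needs to keep up with summaries rather than raw samples. The sketch below collapses a hypothetical 500-sample-per-second vibration stream into one record per second; the sensor model and numbers are illustrative only.

```python
import random
import statistics

def summarise_second(samples: list[float]) -> dict:
    """Collapse one second of raw vibration samples into a single summary record."""
    return {
        "count": len(samples),
        "mean": statistics.fmean(samples),
        "peak": max(samples),
    }

# Simulate one second of readings from a hypothetical 500 Hz vibration sensor.
raw = [random.gauss(mu=2.0, sigma=0.3) for _ in range(500)]
summary = summarise_second(raw)
print(summary)  # e.g. {'count': 500, 'mean': 2.01, 'peak': 2.9}
```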
As more technologies begin to generate data, this information is becoming more diverse. From vibration analysis and condition monitoring, to data from enterprise systems such as market trends and product lifecycle management (PLM), organizations are finding they need to integrate increasingly complex data types from an array of systems.
This often requires the vertical integration of several different systems. Using this more complicated integration model, condition monitoring data could identify when an industrial part was showing signs of failure, then automatically cross check inventory data to see if a replacement part is in stock. If a replacement part isn’t available, this system could make even more intelligent decisions by automatically repurchasing the part using an enterprise resource planning (ERP) system. However, this is just one example.
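That decision chain can be expressed as a short workflow. The part number, stock level, threshold and function names below are hypothetical placeholders; in practice each step would be a call into the condition monitoring, inventory and ERP systems rather than a local dictionary.

```python
# Hypothetical system state standing in for real integrations.
INVENTORY = {"motor-bearing-204": 0}   # spare parts currently in stock
FAILURE_THRESHOLD = 7.0                # vibration level that signals imminent failure

def order_from_erp(part: str) -> str:
    """Stand-in for raising a purchase order in the ERP system."""
    return f"purchase order raised for {part}"

def handle_reading(part: str, vibration_level: float) -> str:
    """Condition monitoring -> inventory check -> ERP reorder, in one pass."""
    if vibration_level < FAILURE_THRESHOLD:
        return "healthy, no action"
    if INVENTORY.get(part, 0) > 0:
        return f"failure predicted, replacement {part} available in stock"
    return f"failure predicted, {order_from_erp(part)}"

print(handle_reading("motor-bearing-204", 8.2))
# failure predicted, purchase order raised for motor-bearing-204
```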
Plant managers may find themselves wanting to collect all data types, even data that isn’t useful to them and store it in archives for historical analysis. Businesses need to keep their risk mitigation goals at the forefront of their data collection, rather than collecting data for data’s sake.
The final ‘V’ refers to Veracity, the reliability of a particular data type. The key issue here is that the other dimensions of big data — that’s volume, velocity and variety — challenge the capacity of many existing systems.
Consider the replacement part example. The scenario sounds ideal, but how can you integrate the condition monitoring big data of a legacy motor, with the availability data of the parts supply if it needs replacing, when this data belongs to a third party?
We suggest forming a relationship with an obsolete parts supplier, such as EU Automation. To achieve true veracity, your data infrastructure must be able to make intelligent decisions, without hitting a wall when it requires action outside of the factory walls.
Don't let your huge reams of data flood your facility. There's no value in generating data for data's sake. For the most effective analysis, make sure you have all four dimensions of big data in place — volume, variety, velocity and veracity. | <urn:uuid:b413ddd5-4294-4435-a3ba-847e0ff4c21c> | CC-MAIN-2022-40 | https://www.mbtmag.com/home/blog/21102337/managing-the-big-data-flood | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00577.warc.gz | en | 0.938662 | 1,106 | 3.015625 | 3 |
You may be asking yourself: What is a SATA HDD?
There is a wide variety of computer science related acronyms. SSD, HDD, SATA, and PATA stand for solid state drive, hard disk drive, serial advanced technology attachment, and parallel advanced technology attachment, respectively.
SATA stands for Serial Advanced Technology Attachment, and it is an interface that connects host bus adapters to storage devices like optical drives, solid state drives, and hard disk drives. In short, SATA enables your computer to read and write data to storage devices. SATA hard drives can be found in servers, desktop computers, laptops, and gaming consoles such as the Xbox 360 and One and the PlayStation 3 and 4.
Solid State SATA vs Hard Disk Drive SATA
A SATA HDD has a typical lifespan of around 3 to 4 years. On the other hand, a SATA SSD has a projected lifespan of around 10 years. This is not surprising, as solid state drives are known for being far more durable than hard disk drives (although solid state drives are more expensive). Solid state SATA drives also boot up faster than HDD SATA drives.
SATA drives are less likely to experience catastrophic failure than other storage media. SATA drives are far more likely to undergo a gradual decline in performance, giving the user enough time to move their files over to a new storage device.
Seeing as SATA-enabled hard drives have been around for over 20 years, they are supported by nearly all operating systems and computer motherboards.
Do you Need SATA Hard Drive Data Recovery?
History of SATA Drives
SATA was announced in the year 2000, as an improvement and alternative to PATA (parallel advanced technology attachment). SATA is a superior technology that has largely replaced parallel ATA. In 2008, 99% of the desktop PC market used SATA interfaces.
One of the major upgrades from PATA to SATA is that SATA is capable of hot-swapping. Hot swapping is the concept of adding or removing a hard drive from a system without shutting the system down.
SATA revision 1.0 Serial ATA-150
The first revision of SATA was released in 2003 and communicates at a rate of 150 MB/s.
SATA revision 2.0 Serial ATA-300
The second generation of SATA was released in 2004, and communicates at a rate of 300 MB/s.
SATA revision 3.0 Serial ATA-600
The third generation of SATA was released in 2008, and communicates at a rate of 600 MB/s.
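Those MB/s figures follow directly from each generation's raw line rate once the overhead of 8b/10b encoding is taken into account: 8 data bits are carried in every 10 transmitted bits. The quick calculation below, shown as a Python snippet for convenience, reproduces the numbers.

```python
# Each SATA generation's raw line rate in gigabits per second.
line_rates_gbps = {"SATA 1.0": 1.5, "SATA 2.0": 3.0, "SATA 3.0": 6.0}

for name, gbps in line_rates_gbps.items():
    # 8b/10b encoding keeps 8 of every 10 bits as data, and 8 bits make one byte,
    # so usable throughput in MB/s = line rate * 0.8 / 8 (expressed in decimal MB).
    usable_mb_per_s = gbps * 1e9 * (8 / 10) / 8 / 1e6
    print(f"{name}: {usable_mb_per_s:.0f} MB/s")

# SATA 1.0: 150 MB/s, SATA 2.0: 300 MB/s, SATA 3.0: 600 MB/s
```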
How to Handle SATA Failure
If you experience catastrophic failure on a SATA-enabled HDD or SSD, you likely require professional assistance to retrieve your data. Gillware operates an advanced data recovery lab that has been helping businesses and individuals recover data since 2004. Gillware has the equipment, experience, and expertise to recover your data from SATA HDD or SATA SSD storage devices. | <urn:uuid:de89b303-1bec-4e32-8df3-6c4db3368fec> | CC-MAIN-2022-40 | https://www.gillware.com/hard-drive-data-recovery/what-is-a-sata-hard-drive/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00577.warc.gz | en | 0.944382 | 592 | 3.09375 | 3 |
It’s Customer Service Week, a week highlighting the importance of providing first class customer service and celebrating the people who make that service possible for organizations every day. In honor of the occasion, we’re taking a look back in time to explore the history of the industry we call “customer service” today — where it started, how it’s evolved, and the way it has always depended on new technology.
Before communications technology allowed businesses to serve customers remotely, the only way to provide customer service was in person. If a customer had an issue they needed to address, such as a defective product or questions about how it worked, they needed to return to the place where they made the purchase and discuss the matter face-to-face. As mass production began to take off in the Industrial Age and technology revolutionized the United States, more and more customers needed assistance with their products — and they often didn’t live near the factories in which the products were made.
Companies needed a way to provide large-scale, remote service to their growing customer bases. Luckily, another new technology was about to make that possible.
A Major Change: The Telephone
The invention of the telephone in 1876 was an essential first step in expanding the possibilities of customer service. At first, telephones were rare technology and had limited capabilities, but the invention of the telephone switchboard in 1894 allowed people to make calls across the country. Phones provided a way for customers to contact companies from a distance to get product information, repair walkthroughs, and more.
The telephone remained the standard for customer service interactions for decades. As the economic boom of the 1920s caused an influx of consumers and more demand than ever before, retail organizations began creating and expanding customer service departments to keep up with the flow of customers. That major growth soon came to a halt as the Great Depression slowed progress — consumers stopped purchasing and markets crashed.
But in the wake of tragedy, more innovations emerged to expand customer service capabilities, including the Private Automatic Branch Exchange (PABX) technology of the 60s. PABX paved the way for modern-day call centers by allowing call routing to various agents, as well as other innovations such as conference calling. Shortly after, AT&T rolled out toll-free telephone numbers, which patched customers through to a company directly without needing an operator or a collect call.
In the 1970s, Interactive Voice Response (IVR) technology appeared on the scene. IVR allowed organizations to automate portions of their customer service process by playing pre-recorded messages, recording customer responses, and even moving customers through trees of information based on their replies. Using these technologies, customer service workers were now able to assist more customers than ever before.
The Online Age of Customer Service
The internet changed the world forever in the 1980s and 1990s as companies shifted to digital as a new way to improve their customer service. The invention of Customer Relationship Management (CRM) software in the 1980s made it much easier to keep track of contacts and customers in a centralized database.
Moving into the 21st century, email grew as another support channel, providing customers with new ways to contact organizations and receive faster responses. Chat followed soon after, putting customers in direct conversation with service agents for real-time assistance without needing a telephone.
As people grew more comfortable using digital channels, social media websites began to dominate the internet, providing yet another way for customers to seek support from organizations. Sites like Yelp, Facebook, Twitter, and others became platforms for soliciting assistance from brands — often in a very public way. Though this visibility proves damaging for some companies, others embrace the trend, using it as a way to build stronger relationships with customers and provide even faster and more effective support.
Looking to the Future
As technology continues to advance, companies are automating more and more of their customer service processes to help their teams handle the steady flow of tickets and questions — whether that’s using AI to handle phone calls or chat, or adopting dedicated customer service software to organize and manage support requests. This reliance on tech remains as essential as ever while companies race to keep up with the demands of their fast-growing, global customer bases and compete to provide the fastest, highest-quality customer service experience possible.
But even with the assistance of technology, a company’s customer service is only as good as the humans behind it. Even as automation grows and AI capabilities expand, many consumers will always prefer to speak to a real agent on the other line — or the other side of the screen. We thank those who work in the industry for their care and support in providing assistance to customers who need it every day, and we wish you a happy Customer Service Week!
About Ashlyn Frassinelli | <urn:uuid:7dea7091-ccf3-4f9d-a79b-53c92cbbcf0a> | CC-MAIN-2022-40 | https://www.issuetrak.com/blog/how-tech-shaped-customer-service | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00577.warc.gz | en | 0.960816 | 978 | 2.984375 | 3 |
Human error is the weak link in the firm’s cyber security defense mechanism. But, how can you deal with such errors and can online cyber security training courses reduce employee errors? Let’s find out in this blog.
We all make mistakes, even those sitting at the helm of affairs. But a small mistake can result in a million-dollar loss for a business. Don't believe us? Look at the findings of the IBM report, according to which the average cost of human error in cyber security breaches was $3.3 million in 2020. This data clearly shows the need to train employees within the firm to prevent cyber security breaches resulting from human error.
Another study by IBM shows that human error is the main cause of 95% of cyber security breaches. If human error were removed completely, 19 out of 20 cyber breaches would never happen.
So, the question is why human errors cause breaches and why employees need cyber security training to improve cyber security behavior within the organization.
What is the Role of Human Error in Cyber Security?
From a cyber security point of view, unintentional actions by employees and even users that result in a security breach can be termed human error. And by unintentional actions, we mean anything from downloading a malware-infected attachment to using a weak password. To make matters worse, end-users also face a constant threat from hackers because of their weak decision-making. That's why it's essential to train employees through top-rated online cyber security courses. In such a course, employees learn about safe cyber security practices and how to deal with bad actors.
Types of Human Errors and Real Examples of Human Error in Business
Human errors come in many forms, but we can broadly categorize them into two types: skill-based and decision-based errors. The difference between the two lies in whether or not the person at the helm of affairs knows how to perform the right action.
Skill Based Errors
It consists of slips and lapses that occur when performing familiar activities. In such errors, the end-user knows what action to take but fails to do so due to a temporary lapse, mistake or even negligence. The reasons for skill-based errors include distraction or a small lapse of memory.
Decision Based Errors
Decision-based errors involve making the wrong decision. There are various reasons for such errors, including incomplete knowledge, incomplete information about a situation, or a failure to act at all.
To reduce human errors, businesses can rely on cyber security training courses. School.infosec4tc provides a wide range of cyber security courses, including training on real projects, to help employees learn the crucial skills needed to reduce cyber attacks caused by human error.
Common Examples of Cyber Security Breaches due to Human Error
Email Misdelivery Can be a Major Cause
Did you know that in 2018, email misdelivery was the 5th most common cause of cyber security breaches? It continues to be a major cyber security threat even today. Misdirected emails lead to data loss and even loss of reputation for a business. One example of an email misdelivery incident was when an NHS practice revealed the email addresses of 800 patients who had visited its HIV clinics. The error occurred because an employee sent an email notification to HIV patients and accidentally entered the patients' email addresses in the To field instead of the BCC field.
Not Following Proper Password Hygiene
Passwords are the first line of defense but can become the biggest cause of cyber attacks. A recent study showed 61% of breaches are caused by stolen user credentials. Here are a few reasons why passwords are the most common human error in cyber security breaches (a minimal screening sketch follows the list).
- Most people use common passwords like 123456 or their name
- 45% use the same passwords for different platforms
- Sharing passwords
- Using the same password for a long time
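A first-pass screening for the habits listed above can be automated. The deny-list and password history below are tiny placeholders; real policies check candidate passwords against large breached-password corpora and enforce rules centrally.

```python
COMMON_PASSWORDS = {"123456", "password", "qwerty"}  # tiny illustrative deny-list

def password_problems(password: str, user_name: str,
                      previous_passwords: set[str]) -> list[str]:
    """Flag the most common password-hygiene failures for a candidate password."""
    problems = []
    if password.lower() in COMMON_PASSWORDS:
        problems.append("password is on the common-password list")
    if user_name.lower() in password.lower():
        problems.append("password contains the user name")
    if password in previous_passwords:
        problems.append("password has been reused")
    if len(password) < 12:
        problems.append("password is shorter than 12 characters")
    return problems

print(password_problems("john2020", "John", {"john2020"}))
```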
Incomplete or Delayed Patching
Cyber attacks are mostly caused by system vulnerabilities. Cybercriminals use these loopholes to access networks and data. Whenever such vulnerabilities are discovered, software developers fix the issue and send a patch to all users. The patch must be applied immediately to prevent further attacks; even a small delay gives attackers time to compromise systems and steal data.
A real example is the Equifax breach of 2017, in which the company failed to apply a patch for a known software security vulnerability. As a result, hackers gained access to the personal information of more than 140 million Americans and 8,000 Canadians.
Poor Access Control
Poor access control allows bad actors to create havoc in the enterprise network. Cyber attacks have increased over the past few years; hence, it's essential to provide online cyber security training to employees. Through such a course, employees will learn to mitigate and prevent cyber attacks. Employees must also follow the principle of least privilege, under which users are granted only the access they need. Least privilege minimizes the chances of a breach.
How to Reduce Cyber Attacks Caused by Human Error?
- Develop a zero trust approach to cyber security
- Use secure gateways and implement software-defined perimeters
- Monitor every online activity
- Use two-factor authentication and biometric security (a minimal one-time password sketch follows this list)
- Implement machine intelligent security solutions to alert users of potential threats
- Impart training to employees through cyber security courses.
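For the two-factor authentication item above, time-based one-time passwords (TOTP) are a common mechanism. The sketch below uses the third-party pyotp package, which is assumed to be installed; the secret is generated on the spot purely for illustration, whereas a real deployment generates it once per user, shares it via a QR code, and stores it securely.

```python
import pyotp  # third-party package: pip install pyotp

# In practice this secret is generated once per user and shared via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                      # what the user's authenticator app would display
print("one-time code:", code)
print("verified:", totp.verify(code))  # True while the code is still within its window
```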
Cyber Security Courses to Prevent Cyber Attacks Caused Due to Human Error
- Ethical Hacking and Penetration Testing Course
- Cyber Security workshop
- Cyber Security Bundle
- CompTIA Security+
Enroll in our IT cyber security certification training bundle to learn everything about attacks and ways to prevent them. These courses will give you an upper hand over others who only have bookish knowledge and fail to implement it to prevent threats in the real business environment. | <urn:uuid:4e4b8fbf-e93e-434d-8420-2b2e25ebe233> | CC-MAIN-2022-40 | https://www.infosec4tc.com/2022/04/27/is-human-error-the-number-1-cyber-security-threat-for-business-in-2022/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00577.warc.gz | en | 0.920206 | 1,178 | 3.0625 | 3 |
Unsolicited phone calls from unknown numbers are not only annoying; they can be extremely dangerous and malicious too. Automated scam calls are even worse. Known as robocall scams, telephone fraudsters are always looking for ways to steal your hard-earned money.
If you’re not careful, these calls can turn your whole financial world upside-down and ruin your life. That’s why it is extremely important to stay vigilant against robocall scams and know your options for fighting back.
Read on and I'll tell you about the most popular robocall scams out there and one essential trick you can use to reveal who you're up against.
What are robocalls?
According to the Federal Communications Commission (FCC), robocalls and telemarketing calls are consistently the top sources of consumer complaints. It estimates that in 2016 alone, U.S. consumers received nearly 2.4 billion robocalls per month.
Robocalls are unsolicited prerecorded telemarketing calls to landline home telephones, and all auto-dialed or prerecorded calls or text messages to wireless numbers, emergency numbers, and patient rooms at healthcare facilities.
FCC rules limit many types of robocalls, though some calls are allowed if prior consent is given. Rules differ between landlines and wireless phones. To make matters worse, some robocalls are actually scams.
Types of robocalls
There are infinite variations of robocalls out there. From scammers pretending to be government officials or collection agents to fake charities, robocall fraudsters never run out of ideas to dupe you. As a general rule, you must always be suspicious of any call and prerecorded message you receive from unknown numbers.
Impersonation phone scams
The most common types of phone scams are prerecorded messages from people that pretend to be representing government agencies like the DHS, IRS or the FBI. These messages can sound authentic and are always extremely threatening.
For example, you may receive a message from the IRS stating that you owe them money due to tax fraud and there is a warrant out for your arrest. The scammers will then demand payment and if the payment is not received, they claim that you will go to jail.
Threatening and scary as they are, please don't fall for these impersonation scams. Government agencies like the IRS, DHS and the FBI will never call you to initiate contact, demand money or threaten you with arrest. It's such a problem that these types of calls have resulted in more than $36 million in taxpayer losses.
Less dangerous types of robocalls are telemarketing and sales calls. Granted, sales calls are still annoying and they can flood your phone and voice mailbox with junk, but here’s a fact that most people don’t know about – phone messages that are trying to sell you something are against the law unless you have granted the company written permission to call you.
However, there’s a new sales call scheme that can scam you out of your money. It’s called the “Can You Hear Me?” scam and in this scenario, fraudsters are calling victims hoping to get them to say the word “yes” during the conversation that’s being recorded. The fraudster will later use the recording of the victim saying yes to authorize unwanted charges on the victim’s utility or credit card account.
Protect yourself from robocalls
Hanging up is actually the easiest way to deal with robocalls. If you receive a threatening message or call from an unknown number, hang up immediately. Never say "yes" or enter any numbers that the scammers are asking you to press. These may only lead to more robocalls coming your way.
If you receive a scam call, write down the number and file a complaint with the FCC so it can help identify and take appropriate action to help consumers targeted by illegal callers.
Use BeenVerified’s reverse phone lookup
If you get a missed call and you’re not sure who it was from, don’t call back! You can use BeenVerified’s reverse phone lookup to uncover the name of the owner and where the number is from. Additionally, using reverse phone lookups instead of calling back will also prevent robocallers from knowing that your number is active.
Unlike other reverse phone lookup services, BeenVerified’s reverse phone lookup can provide you with in-depth detail about the phone owner’s identity including address, location, service provider and if the number is a known nuisance caller.
If available, BeenVerified can even provide you with significant information like related phone numbers, email addresses, work history and associated social media accounts and websites about the owner.
This is important since you’ll have a better understanding of who you’re up against if you ever need to take legal action against these robocall scammers.
BeenVerified is an online service that promises to “make public records easy, affordable, and accurate for everyone.” With the help of its data partners, BeenVerified sweeps through several databases of publicly available information including data from social media sites to track down anyone you’re searching for.
We love sweet treats. But too much sugar in our diets can lead to weight gain and obesity, Type 2 diabetes and dental decay.
We know we shouldn’t be eating candy, ice cream, cookies, cakes and drinking sugary sodas, but sometimes they are so hard to resist.
It’s as if our brain is hardwired to want these foods.
As a neuroscientist my research centres on how modern day “obesogenic,” or obesity-promoting, diets change the brain.
I want to understand how what we eat alters our behaviour and whether brain changes can be mitigated by other lifestyle factors.
Your body runs on sugar — glucose to be precise. Glucose comes from the Greek word glukos which means sweet.
Glucose fuels the cells that make up our body — including brain cells (neurons).
Dopamine “hits” from eating sugar
On an evolutionary basis, our primitive ancestors were scavengers.
Sugary foods are excellent sources of energy, so we have evolved to find sweet foods particularly pleasurable.
Foods with unpleasant, bitter and sour tastes can be unripe, poisonous or rotting — causing sickness.
So to maximize our survival as a species, we have an innate brain system that makes us like sweet foods since they’re a great source of energy to fuel our bodies.
When we eat sweet foods the brain’s reward system — called the mesolimbic dopamine system — gets activated.
Dopamine is a brain chemical released by neurons and can signal that an event was positive.
When the reward system fires, it reinforces behaviours — making it more likely for us to carry out these actions again.
Dopamine “hits” from eating sugar promote rapid learning to preferentially find more of these foods.
Our environment today is abundant with sweet, energy rich foods. We no longer have to forage for these special sugary foods — they are available everywhere.
Unfortunately, our brain is still functionally very similar to our ancestors, and it really likes sugar. So what happens in the brain when we excessively consume sugar?
Can sugar rewire the brain?
The brain continuously remodels and rewires itself through a process called neuroplasticity. This rewiring can happen in the reward system.
Repeated activation of the reward pathway by drugs or by eating lots of sugary foods causes the brain to adapt to frequent stimulation, leading to a sort of tolerance.
In the case of sweet foods, this means we need to eat more to get the same rewarding feeling — a classic feature of addiction.
Food addiction is a controversial subject among scientists and clinicians.
While it is true that you can become physically dependent on certain drugs, it is debated whether you can be addicted to food when you need it for basic survival.
The brain wants sugar, then more sugar
Regardless of our need for food to power our bodies, many people experience food cravings, particularly when stressed, hungry or just faced with an alluring display of cakes in a coffee shop.
To resist cravings, we need to inhibit our natural response to indulge in these tasty foods. A network of inhibitory neurons is critical for controlling behaviour.
These neurons are concentrated in the prefrontal cortex — a key area of the brain involved in decision-making, impulse control and delaying gratification.
Inhibitory neurons are like the brain’s brakes and release the chemical GABA. Research in rats has shown that eating high-sugar diets can alter the inhibitory neurons.
The sugar-fed rats were also less able to control their behaviour and make decisions.
Importantly, this shows that what we eat can influence our ability to resist temptations and may underlie why diet changes are so difficult for people.
A recent study asked people to rate how much they wanted to eat high-calorie snack foods when they were feeling hungry versus when they had recently eaten.
The people who regularly ate a high-fat, high-sugar diet rated their cravings for snack foods higher even when they weren’t hungry.
This suggests that regularly eating high-sugar foods could amplify cravings — creating a vicious circle of wanting more and more of these foods.
Sugar can disrupt memory formation
Another brain area affected by high sugar diets is the hippocampus — a key memory centre.
Research shows that rats eating high-sugar diets were less able to remember whether they had previously seen objects in specific locations before.
The sugar-induced changes in the hippocampus were both a reduction of newborn neurons, which are vital for encoding memories, and an increase in chemicals linked to inflammation.
How to protect your brain from sugar?
The World Health Organization advises that we limit our intake of added sugars to five per cent of our daily calorie intake, which is 25g (six teaspoons).
Considering the average Canadian adult consumes 85g (20 teaspoons) of sugar per day, this is a big diet change for many.
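For readers wondering where the 25g figure comes from, the arithmetic is straightforward. The short sketch below assumes a 2,000-calorie reference diet (the WHO guideline itself is expressed only as a percentage) and roughly four calories per gram of sugar:

```python
# Back-of-the-envelope check of the WHO figure (illustrative only).
# Assumes a 2,000-calorie reference diet and ~4 calories per gram of sugar.
reference_diet_kcal = 2000
kcal_per_gram_sugar = 4

limit_kcal = 0.05 * reference_diet_kcal         # 5% of daily calories = 100 kcal
limit_grams = limit_kcal / kcal_per_gram_sugar  # = 25 g, about six teaspoons

average_intake_grams = 85                       # average Canadian adult, per the text
print(limit_grams, round(average_intake_grams / limit_grams, 1))  # 25.0 3.4
```

In other words, the average intake quoted above is more than three times the recommended limit.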
Importantly, the brain’s neuroplasticity capabilities allow it to reset to an extent following cutting down on dietary sugar, and physical exercise can augment this process.
Foods rich in omega-3 fats (found in fish oil, nuts and seeds) are also neuroprotective and can boost brain chemicals needed to form new neurons.
While it’s not easy to break habits like always eating dessert or making your coffee a double-double, your brain will thank you for making positive steps.
The first step is often the hardest. These diet changes can often get easier along the way.
Funding: Amy Reichelt receives funding from the Australian Research Council and Canada First Research Excellence Fund (BrainsCAN, Western University).
Effects of Sugars on “Homeostatic” Neural Systems
The homeostatic system, which regulates feeding patterns based on energy need, is composed of two antagonistic pathways. The orexigenic pathway includes neuropeptide Y (NPY) and agouti-related protein (AgRP), which are known to stimulate food intake and are produced in the arcuate nucleus (ARC) of the hypothalamus, a critical region involved in homeostatic energy balance. In contrast, the anorexigenic pathway, including proopiomelanocortin (POMC) neurons produced in the ARC, has the opposite effect by inhibiting food intake.
Recent evidence suggests that sugar intake differentially affects these two opposing pathways. After a sucrose preload, mice consumed more chow and this behavioral change was accompanied by variations in NPY and AgRP. Immediately following the preload, mice showed reduced expression of NPY and AgRP in the ARC.
However, 30-60 minutes after the sucrose preload and right before the chow meal, mice showed a marked increase in both. This suggests that sucrose consumption led to a temporary decrease in orexigenic peptides followed by activation of the orexigenic pathway, potentiating caloric consumption.
In another recent study, mice maintained on a high fat diet and given limited access to sucrose-sweetened water (SSW) showed a down-regulation of POMC mRNA expression in the hypothalamus. In addition, these mice consumed greater amounts of the high fat diet on days that the SSW was available, suggesting that this reduction in satiety signaling may have facilitated hyperphagia in this group.
Chronic limited consumption of a high sucrose diet has also been shown to lead to decreased activity of the anorexigenic oxytocin system in the hypothalamus, which has been associated with satiety and meal termination.
Recent data indicate that the type of sugar ingested plays an important role in satiety. One animal study comparing the effects of 24 h access to sucrose, glucose, fructose, or high-fructose corn syrup found that glucose led to a marked upregulation of the satiety-inducing hormone, cholecystokinin (CCK), within the hypothalamus, while fructose resulted in a downregulation of this peptide.
This suggests that, relative to fructose, glucose may be more effective in eliciting satiety. This is in line with animal research showing central administration of glucose to inhibit food intake and fructose to stimulate feeding. Further, in humans, fructose ingestion has been shown to lead to lower levels of serum glucose, insulin, and glucagon-like polypeptide 1 (GLP-1), a hormone associated with increased satiety, relative to glucose ingestion.
Effects of Sugar on “Hedonic” Neural Systems
Given that sweet foods and beverages are generally considered pleasurable, the effects of caloric sweeteners on brain mechanisms associated with processing reward, such as the mesolimbic dopamine (DA) system and opioid systems, have been an area of intense research in recent years. One such study observed decreased striatal DA concentrations following prolonged access to a sucrose solution in high-sucrose drinking rats, a finding also reported by this group in response to chronic exposure to ethanol.
Expression of tyrosine hydroxylase (TH), an enzyme involved in DA synthesis, was also decreased in the striatum of high sucrose-drinking rats. Acute increases in DA release upon consumption of palatable food may, as the authors posit, initiate a negative feedback cycle, inhibiting DA synthesis, ultimately leading to both reduced TH expression and striatal DA concentrations. It is important to note that while reduced DA content may reflect neuroadaptations due to prolonged sucrose consumption, reduced DA has been observed in the nucleus accumbens (NAc), a brain region associated with reward, of rats prone to obesity even prior to excessive weight gain.
Thus, it is also possible that reduced striatal DA concentrations may have predisposed the animals to excessive sucrose consumption, especially as only high-drinking rats were studied. Finally, in this study, high sucrose-drinking rats showed increased prolactin expression. Given the role of DA in inhibiting prolactin, reduced DA concentrations may have led to elevated prolactin.
Two recent studies have also explored the acute effects of sugar consumption on DA levels within the two subregions of the NAc, the shell and the core, given differential efferent projections from these regions. Rewarding substances such as drugs of abuse are known to elevate DA within the NAc shell and this response is thought to facilitate strong associations between the reward and related cues.
Using fast-scan cyclic voltammetry in food-restricted rats, Cacciapaglia, Saddoris found that sucrose-related cues led to increased DA levels in both subregions of the NAc; however, DA levels were greater and sustained for longer in the shell. Increased DA levels were also observed in the NAc shell, but not the core, after lever pressing for sucrose. Together, these experiments implicate DA within the NAc shell, versus the core, in sucrose reward.
Using microdialysis techniques in food-restricted animals, it has also been shown that while novel exposure to sucrose increases DA levels in the NAc shell, this effect wanes with repeated exposure, in contrast to what is seen with drugs of abuse.
Notably, rats trained to respond for sucrose did not show habituation of increased DA levels in the shell. This group also noted elevated DA levels in the shell, but not core, when animals responded for sucrose as well as in response to sucrose-related cues during extinction. Interestingly, however, elevated DA levels were observed in both the shell and core regions when sucrose was delivered without the requirement of responding ("response non-contingent" sucrose feeding).
Given the established role of opioid signaling in hedonic processes, recent studies have also explored opioid involvement in the rewarding aspects of sugar consumption. Interestingly, Ostlund, Kosheleff found no differences in sucrose intake during acquisition testing between mu-opioid receptor knockout (MOR KO) and control mice.
However, MOR KO mice showed fewer average bursts of sucrose licking when food deprived and attenuated licking behavior when sucrose concentrations were increased, indicating reduced sensitivity to these manipulations.
In a separate experiment, MOR KO mice displayed attenuated licking behavior in response to sucralose (a non-caloric sweetener) but not sucrose, extending the evidence that MOR signaling is involved in hedonic processing and raising the possibility that the caloric contribution of sucrose might explain why MOR KO mice did not show reduced levels of sucrose intake.
Alternatively, sucrose consumption may not be affected in these animals due to activation of other intact pathways associated with reinforcement, such as the mesolimbic DA pathway or other opioid receptors. Interestingly, Castro and Berridge recently identified a subregion located in the rostrodorsal quadrant of the medial shell of the NAc as a hedonic “hotspot” as injections of mu-, kappa-, and delta agonists in this area specifically (relative to the other three quadrants of the medial shell) lead to greater intensity of positive hedonic reactions to sucrose.
A common behavioral marker of reward is the degree of craving a substance elicits. In animal models, craving can be assessed using a paradigm in which animals are trained to self-administer a rewarding substance and their responses are measured at two points during abstinence: very soon after the substance is removed and again after a prolonged period of abstinence.
During extinction, animals are motivated to respond either in the presence of or for the delivery of cues previously associated with the reward.
Enhanced responding for cues at the later time point in abstinence has been noted following exposure to drugs of abuse, such as cocaine, as well as sucrose, a phenomenon termed "incubation of craving." Recent work shows age-related differences in incubation of sucrose craving, with adult and adolescent, but not young adolescent, rats demonstrating greater responding after the extended extinction period.
These behavioral findings were accompanied by reductions in 2-amino-3-(3-hydroxy-5-methylisoxazol-4-yl) propionic acid/N-methyl-D-aspartate (AMPA/NMDA) ratios, a proxy of synaptic plasticity, in the NAc. Though these data are correlational, taken together, this suggests that age-dependent reductions in synaptic plasticity during abstinence from sucrose may contribute to enhanced craving of sucrose.
This is in contrast to findings that show greater synaptic plasticity during incubation of craving for cocaine, suggesting different mechanisms underlying this phenomenon depending on the rewarding substance.
In an effort to dissociate the relative roles of the two monosaccharides that comprise sucrose (fructose and glucose) in reward and/or satiety, Rorabaugh, Stratford employed a 12-h intermittent access paradigm, similar to that used in our laboratory, to promote bingeing on isocaloric solutions of fructose, sucrose, and glucose.
During the first hour of access, when bingeing is typically most robust, rats bingeing on glucose consumed significantly less than those given access to fructose or sucrose. This may indicate that the glucose solution was perceived as less palatable relative to the other caloric sweeteners, or, alternatively, that it is perceived as more reinforcing and therefore, less may be needed to experience a similar rewarding effect. As mentioned earlier, it is also possible that glucose may have been more satiating, resulting in less intake, given findings indicating this to be true in humans.
Interactions between “Homeostatic” and “Hedonic” Neural Systems
Recent evidence illustrates interactions between the homeostatic and hedonic systems in response to caloric sweetener intake. For example, prolonged fructose bingeing, elicited by an intermittent access paradigm, led to reduced neuronal activation (measured by c-Fos immunoreactivity [IR]) in the NAc shell and activated orexin neurons, which have been associated with both reward and satiety, in the lateral hypothalamic (LH)/perifornical area of rats.
It is postulated that this unusual pattern induced by fructose aligns with a feeding circuit proposed by the Kelley lab, in which the ventral pallidum (VP) forms a hyperphagic circuit that indirectly inhibits the NAc shell to activate the LH. This pattern is seemingly unique to fructose ingestion and further study is needed to understand how this circuit may interact with more established pathways.
This study also found that pretreatment with an orexin 1 receptor antagonist reduced feeding in both fructose- and chow-bingeing rats, suggesting that orexin 1 signaling is involved in food intake that is motivated by caloric need as opposed to palatability. However, only chow-bingeing rats showed reduced neuronal activation in the NAc shell, LH/perifornical area, or ventromedial hypothalamus in response to this manipulation.
Three recent studies approached this subject by introducing agents that typically act as homeostatic mechanisms into reward-related areas exogenously. In one such experiment, NPY increased the motivation to respond for sucrose when infused into the ventral tegmental area (VTA) or NAc and increased sucrose consumption when infused into the NAc or LH.
Interestingly, the effect of NPY in the VTA was attenuated following pretreatment with a DA receptor antagonist, suggesting that this effect is dependent on changes in DA signaling. In another study, injection of melanocortin receptor agonists, which customarily decrease food intake, into the VTA decreased sucrose and saccharin intake as well as overall food intake.
Another study found injection of orexin into the posterior VP, a region considered to be a "hedonic hotspot," to enhance positive hedonic reactions to sucrose. Given that the VP receives orexin projections from the LH, the authors propose that during negative energy balance, orexin projections may magnify the pleasure derived from food.
Though it remains unclear exactly how regulatory mechanisms like NPY, melanocortin, and orexin influence hedonic mechanisms under normal conditions, these findings offer compelling evidence of interactions between these two systems, which are frequently conceptualized disparately.
Effects of Low Calorie Sweeteners on “Homeostatic” and “Hedonic” Neural Systems
Despite their widespread use, we are only beginning to understand the effects of low calorie sweeteners on the brain. Research does show that the human brain is capable of dissociating sweet taste from calories [56, 57]. Laboratory animal research is beginning to elucidate the effects of low calorie sweeteners on select homeostatic and hedonic neural systems and their effect on feeding behavior.
Both melanin-concentrating hormone (MCH) and orexin promote feeding [58–60]. A recent study measured phosphorylated cyclic AMP response element binding protein (pCREB), a marker of neural activity, in both MCH and orexin neurons of fasted rats in response to glucose, saccharin or water. While only glucose reduced pCREB expression in MCH neurons in all rats, both glucose and saccharin, but not water, significantly reduced pCREB expression in orexin neurons of female rats.
Similarly, binge consumption of either sucrose or saccharin leads to reduced orexin mRNA expression in the LH of mice. It should be noted that although low calorie sweeteners do not provide calories, their consumption can lead to gastric distension, which has been shown to lower IR expression of orexin in the LH, making it important for future studies to control for this, perhaps using paired water intake.
Notably, reduced sucrose- and saccharin-bingeing have been observed following treatment with an orexin receptor 1 antagonist, which appears inconsistent with the notion that orexin receptor 1 signaling mediates feeding driven by caloric need versus palatability mentioned earlier.
Several studies have investigated whether the caloric contribution of sweeteners influences their rewarding properties. For example, Aoyama et al. assessed responding for a saccharin-related cue in rats during prolonged abstinence from the solution as a measure of craving and seeking behavior. Indeed, responding was significantly greater with greater abstinence, demonstrating that saccharin is capable of eliciting an "incubation of craving," similar to what has been seen with sucrose and cocaine.
In fact, there was no difference between the magnitude of the incubation of craving for saccharin and sucrose. Additionally, similar to sucrose, limited access to saccharin has been shown to induce excessive binge eating.
In food-restricted mice, preference for a non-caloric blend of saccharin and sucralose surpassed that for fructose, but not sucrose and glucose. Taken together, these studies provide behavioral evidence that sweet taste, independent of caloric content, is sufficiently rewarding to motivate feeding and seeking behavior.
Under conditions of caloric deficit, recent studies demonstrate an important role for the post-ingestive effects of caloric sweeteners in food reward and preference. While both sucrose and saccharin-related cues evoked a sharp increase in DA within the NAc core of food-restricted rats, both sucrose-related cues and consumption resulted in a significantly greater DA response relative to saccharin.
In a recent study conducted in ad libitum and food deprived rats, saccharin and sucrose led to different responses based on physiological state. Unsurprisingly, food deprived rats significantly increased responding for sucrose compared to saccharin, whereas non-food deprived rats showed comparable efforts to obtain sucrose or saccharin.
Consistent with this, habituation of DA in the NAc was seen in response to both types of sweeteners in non-food deprived animals, whereas habituation was only seen in response to saccharin among food deprived animals.
In one study using intragastric infusion of glucose or saccharin in awake, fasted rats during functional magnetic resonance imaging (fMRI), glucose led to greater blood oxygen level dependent (BOLD) activation in several brain regions, including key components of the mesolimbic DA pathways (e.g., the VTA and NAc), compared to saccharin. Moreover, glucose, but not saccharin, evoked a BOLD response in the hypothalamus.
Thus, when bypassing the taste pathway via intragastric infusion and in a fasted state, a caloric sweetener led to more pronounced activation of both hedonic and homeostatic regions.
Although the focus of this review is on recent studies using animal models, human studies that are particularly relevant warrant discussion. Recent studies in humans suggest that repeated low calorie sweetener consumption alters brain responses to caloric sweeteners. Subjects who reported higher low calorie sweetener intake showed a reduced BOLD response in the amygdala in response to sucrose.
Additionally, Green and Murphy found that relative to non-diet soda drinkers, individuals who consumed diet soda regularly showed greater activation in the VTA as well as decreased activation in the right caudate in response to saccharin.
In contrast to these findings, Griffioen-Roose et al. did not observe a difference in hedonic value, measured by both behavioral tasks and fMRI, between participants with repeated exposure to low calorie sweeteners and sugar-sweetened beverages, suggesting that low calorie sweeteners do not modify reward value (though subjects who exclusively consume "light versions" of foods and beverages were excluded). Finally, conditioning with low calorie sweeteners or sugar-sweetened beverages led to similar reports of expected fullness following consumption, leading the authors to conclude that low calorie sweeteners may, in fact, be advantageous for weight management.
To this point, recent meta-analyses show that although observational, prospective studies show a small positive association between low-calorie sweetener use and BMI, randomized control trials suggest slight but significant benefits of low-calorie sweetener substitution for weight loss [75,76].
Originally, Thanksgiving took shape in New England and Canada among colonists who gave prayers of thanks for such blessings as safe journeys, military victories, or abundant harvests. Americans model the holiday on a 1621 harvest feast shared between English colonists and the Wampanoag. Canadians trace their earliest Thanksgiving to 1578, when a Martin Frobisher-led expedition celebrated safe passage.
Thanksgiving Day, an annual national holiday in the United States and Canada, celebrates the harvest and other blessings of the past year.
Thanksgiving is all about reflecting on blessings and acknowledging gratitude. After all, in his 1789 Thanksgiving Proclamation, President George Washington described its purpose:
“Whereas it is the duty of all Nations to acknowledge the providence of Almighty God, to obey his will, to be grateful for his benefits, and humbly to implore his protection and favor—and whereas both Houses of Congress have by their joint Committee requested me ‘to recommend to the People of the United States a day of public thanksgiving and prayer to be observed by acknowledging with grateful hearts the many signal favors of Almighty God especially by affording them an opportunity peaceably to establish a form of government for their safety and happiness.’” (Source: https://founders.archives.gov/documents/Washington/05-04-02-0091)
The holiday itself is steeped in an attitude of gratitude, so on this holiday here in the US, I would like to encourage you to think about all those things you are thankful for, especially after the past few years of the pandemic, when many of us were kept away from family and those we love.
It doesn’t matter where you live, your race, your religion, shoe size, hair color; it is all about just being grateful for what you have, whether that is a lot or a little. I wish you all a Happy Thanksgiving, no matter where you are.
With working environments in airports becoming increasingly dynamic, embedded sensors are supplying advanced capabilities to ensure safe operations.
FREMONT, CA: Airports are getting busier day by day. As a result, ground operations have also become complex. Technology-enabled smart management mechanisms are helping drive efficiency into various tasks in airports. However, these mechanized systems are not equipped to respond to sudden changes. For instance, a slight change in the schedule or the absence of one worker can result in a lot of chaos. The chaos, in turn, can compromise the safety of personnel present at the site, or passengers making their way through the airport. In such a scenario, embedded sensors can make a lot of difference. The potential applications of embedded sensors in fostering improved ground safety in airports are enlisted below.
• Detecting Barricades and Obstructions
With embedded sensors, one can enable obstruction detection in various moving components at airport aprons and runways. The use of LIDAR systems that emit laser light and detect the reflected rays to create 3D renderings of the physical environment can improve operational safety at airports. LIDAR uses a combination of embedded sensors and creates alerts upon the detection of obstructions. Such systems empower ground staff to avoid blind spots while driving vehicles and minimize the chances of collisions.
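As a rough illustration of the kind of check such a system performs, the sketch below flags any reflected point that falls inside a safety envelope. It is a simplified example, not any vendor's actual implementation; the point coordinates and the five-metre safety radius are made-up values.

```python
# Simplified sketch of threshold-based obstruction alerting on LIDAR range data.
# The point coordinates and the 5 m safety radius below are made-up values.
from typing import Iterable, List, Tuple

SAFETY_RADIUS_M = 5.0  # raise an alert for any echo closer than this

def obstruction_alerts(points_m: Iterable[Tuple[float, float, float]]) -> List[Tuple[float, float, float]]:
    """Return the (x, y, z) points that fall inside the vehicle's safety radius."""
    alerts = []
    for x, y, z in points_m:
        distance = (x ** 2 + y ** 2 + z ** 2) ** 0.5
        if distance < SAFETY_RADIUS_M:
            alerts.append((x, y, z))
    return alerts

# Two echoes from a scan: the first is far away, the second triggers an alert.
print(obstruction_alerts([(12.0, 3.0, 0.5), (2.0, 1.5, 0.2)]))
```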
• Effective Communications
Synchronized operations can contribute to ground safety in busy airports. With effective means of communication between vehicles and equipment, synchronized operations become convenient. Such communications can be achieved with the help of systems that use a combination of embedded sensors and 3D mapping software. While embedded sensors help track down every object present on the ground, 3D mapping can provide information regarding possible interference even before it occurs. Gyro sensors embedded into ground maintenance vehicles also play a role in facilitating the process by providing an accurate location of the vehicles.
Mitigating collisions at airports might get more complicated as autonomous capabilities become more common. However, embedded sensors can add plenty of cognitive capabilities into systems and enable intelligent and automated management of aprons and runways, without any compromises on safety.
What is PII?
Personally Identifiable Information (PII), or personal data, is data that corresponds to a single person. PII might be a phone number, national ID number, email address, or any data that can be used, either on its own or with any other information, to contact, identify, or locate a person.
How PII is determined?
In response to businesses collecting and storing more and more individuals’ PII (also known as personal data), individuals and regulators have been applying greater scrutiny to how businesses use and safeguard that data. As a result, various jurisdictions have passed legislation to limit the use, distribution, and accessibility of PII, while allowing companies who need it to manage the data safely.
As PII (or personal data) is a legal concept rather than a technical concept, legislation around PII varies across different jurisdictions. The GDPR in the European Union, HIPAA, and PCI in the United States, state laws like CalOPPA and other data breach laws, and other regulations control what defines PII. Which data is classified as PII may also differ by use case. For instance, depending on the jurisdiction or your use case, IP addresses may or may not be considered PII.
How Blitzz manages PII?
Blitzz takes the management of our customers’ information seriously. We have software, configurations, processes, and guidelines for managing data internally to keep your data safe and secure. Inside Blitzz' systems, we manage data that could be PII in different ways.
- Blitzz is committed to making clear which data is managed as PII in our system to help you make sure your data is managed the right way for your jurisdictions and use cases.
- Blitzz has a Data Protection Addendum which extends the specification of your legal relationship with Blitzz and can help clarify how Blitzz manages data on your behalf.
- If you are in Europe, this document clarifies how we manage data where some parts of your data may originate in Europe. Note: While you may not be in Europe or a phone number may not be European, the person at the other end of the phone could be a European in Europe.
Powerful features like Blitzz' Phone Number redaction, Email Address redaction, and Call Recording Encryption allow you to remove PII or encrypt it so no one can see it but you.
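To illustrate what redaction means in practice, here is a deliberately simplified sketch. It is not Blitzz' implementation or API; the regular expressions and placeholder tokens are illustrative assumptions only.

```python
# Illustrative only: a naive idea of what redacting PII from free text can look like.
# This is NOT Blitzz' implementation or API; the patterns below are simplified.
import re

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")     # rough phone-number shape
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")  # rough email-address shape

def redact(text: str) -> str:
    """Replace phone numbers and email addresses with placeholder tokens."""
    text = PHONE_RE.sub("[REDACTED PHONE]", text)
    return EMAIL_RE.sub("[REDACTED EMAIL]", text)

print(redact("Call me at +1 (555) 010-0199 or write to jane.doe@example.com"))
# -> Call me at [REDACTED PHONE] or write to [REDACTED EMAIL]
```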
Blitzz manages the fields identified as PII in Blitzz' documentation as though they contain PII, also known as personal information or personal data (Eg. Host login email address, Guest Phone Number, Email Address). This means that the Blitzz engineering team implements technical and organizational security controls appropriate to the risk associated with that data.
For example, data will not be visible to Blitzz' employees unless they are acting as a surrogate for you (e.g., debugging on your behalf, with your permission) or have some other legitimate business need to access it. As well, values are anonymized or removed when we need to hold on to information for statistical analysis, reporting, and capacity planning - none of which require the PII itself. Names, your end users’ phone numbers, or recordings of video calls and chats are all examples of fields that Blitzz treats as containing PII. Phone numbers that are used to send SMS messages, whether a long code or shortcode, because they are owned by Blitzz, are managed differently from non-Blitzz numbers.
PII management when you leave Blitzz
When you leave Blitzz, all PII data is anonymized or scheduled for deletion from Blitzz' systems where possible after 30 days, following a reasonable grace period that allows you to change your mind.
Non-PII fields (Eg. Reference Field) are stored in Blitzz and may be used for counting or other operations as Blitzz runs its systems. These fields generally cannot be redacted or removed.
In some instances, you might be able to control the data in these fields. You should take care not to place PII in fields with this designation. Blitzz does not treat this data as PII, and its value may be visible to Blitzz employees, stored long-term, and may continue to be stored after you’ve left Blitzz' platform.
If you think you need to put PII in these fields, please check with our support team to see if there’s a better way to manage your data.
What is an IGOE?
IGOE is an acronym for Input, Guide, Output, and Enabler. These are the basic components of any business process. The IGOE concept, like most things, is not completely new. It is based on the first detailed method for capturing and documenting process information. The original technique was called IDEF0, developed by the U.S. Department of Defense for the purpose of capturing the process of manufacturing fighter planes. Until this point there was no consistent structure or method for documenting processes. Since IDEF0 was created for the manufacturing industry, the terms and definitions developed were appropriately related to manufacturing. IDEF0 called the components of a process 'ICOMs'. ICOM is an acronym for Input, Control, Output, and Mechanism.
Since the "IGOE" concept was created for the purpose of documenting service-oriented processes, the terms and definitions were adapted to fit into service sector business. Considering that a significant number of the organizations in the world are service sector type organizations, it makes sense to have an approach for documenting and understanding process that is intentionally geared toward services.
Figure 1 gives an example of an IGOE template.
Figure 1. Example IGOE template.
Strict adherence to the definitions of each process component is critical if the IGOE technique is used to enhance process understanding and analysis.
An Input is defined as something that is transformed or consumed. The Inputs are not just things that flow through process/activity but are things that change or undergo a transformation. Hopefully, that transformation in some way adds value. Transformation types include physical, locational, and informational. Physical transformation occurs when something is changed in a literal, physical manner — for example, taking raw materials and transforming them into finished goods. Locational transformation occurs when the physical location of something is changed — for example, moving goods in a warehouse or moving goods from one warehouse to another location. Informational transformation occurs when the information is changed — for example, when information is created, updated, or deleted.
A Guide is defined as anything that describes the when, why, or how a process or activity occurs. Therefore the events that determine when a process begins or when it ends are classified as Guides, as well as all the policies, regulations, and any standards requirements. In addition, any reference information or data is considered to be a Guide, as well as any knowledge or experience used to help determine how the process/activity should occur.
Some examples of Guides include:
- Any type of starting and completion event related to the process, such as receiving or sending things or information,
- Any type of knowledge or experience needed to perform the activities in the process,
- Business Policies,
- Business Rules,
- Acceptance / Completion Criteria,
- Performance Targets,
- Performance Criteria,
- Laws or Regulations,
- Any type of information used as reference material during the process,
Outputs are normally straightforward. Outputs are the product or result of the change that occurs to the Inputs or the result of the creation of something based on the Guides.
Finally, Enablers are the resources or assets required to transform an Input into an Output or to create Outputs. These resources include the people, systems, and tools and facilities utilized by an activity or process. Enablers include:
- Human Resources,
- Any type of reusable resource necessary.
It's critical for a complete understanding and analysis of process that these definitions are consistently applied. Without consistency, the understanding and accuracy of the analysis of the process will be compromised.
Example — Make a Peanut Butter and Jelly Sandwich
Figure 2. Make a Peanut Butter and Jelly Sandwich
In Figure 2 — the example of making a peanut butter and jelly sandwich —the ingredients are the Inputs because they go through a physical transformation and become a sandwich. The hungry child can be considered an Input because the child goes through a transformation from hungry to happy. The emotional change is not usually documented unless it is critical to understanding the process.
The recipe and the child preferences are Guides because they provide the criteria for how to make the sandwich. Experience is also a Guide and is one of the Guides that organizations often forget but that can be critical to success when that experience leaves the organization. The events of "hungry child" and "satisfied child" are Guides because they indicate when the activity begins and ends, setting the boundaries.
The Enablers are the Mom/Dad because we need someone to make the sandwich. The knife, fork, and spoon are Enablers because they are the tools used to access the ingredients. The main Output is the sandwich, which is the product of the ingredients.
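To make the classification concrete, the sandwich example can be captured as a simple record. The sketch below is only one possible representation; the IGOE method itself does not prescribe any tooling or data structure, and the lists paraphrase the example above.

```python
# One possible way to capture an IGOE as a simple record (illustrative only;
# the IGOE method itself does not prescribe any tooling or data structure).
from dataclasses import dataclass, field
from typing import List

@dataclass
class IGOE:
    activity: str
    inputs: List[str] = field(default_factory=list)    # things transformed or consumed
    guides: List[str] = field(default_factory=list)    # when / why / how the activity occurs
    outputs: List[str] = field(default_factory=list)   # results of the transformation
    enablers: List[str] = field(default_factory=list)  # reusable resources and assets

make_sandwich = IGOE(
    activity="Make a peanut butter and jelly sandwich",
    inputs=["bread", "peanut butter", "jelly", "hungry child"],
    guides=["hungry child (start event)", "recipe", "child preferences",
            "experience", "satisfied child (end event)"],
    outputs=["sandwich", "happy child"],
    enablers=["Mom/Dad", "knife", "fork", "spoon"],
)

# One of the analysis questions the article raises: every Output should be
# traceable to an Input or a Guide.
assert not make_sandwich.outputs or (make_sandwich.inputs or make_sandwich.guides)
```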
Example — Fill Car with Gas
Figure 3. Fill Car with Gas
In Figure 3 — the example of filling the car with gas —the car and money are Inputs and Outputs because they both go through a transformation: the car from dirty to clean, the money changes in quantity.
The events of "pull up to the gas pump" and "drive away" set the boundaries for the process and guide when the activity begins and ends. The criteria for how much gas is put in the gas tank include the tank capacity, gas filling rules, budget, and limit on charge or amount of money. The Guide for what kind of gas is the grade of gas.
The Enablers are the person who puts the gas in the car, the gas pump that provides the means for getting the gas into the car, and the gas station that provides the facility for storing the gas until it's used. These are Enablers because the car could not be filled with gas without these resources or assets.
Example — Pay Cell Phone Bill
Figure 4. Pay Cell Phone Bill
In Figure 4 — the example of paying a cell phone bill — the Excel spreadsheet and the cell phone bill are Inputs and as well as Outputs. This activity has an additional Output of payment. This is an example of an Output that has been created based on the Inputs and the Guides.
The starting and ending events are Guides because they indicate when the activity begins and ends. The business rules for paying the phone bill are contained within the various company policies. The cell phone bill is also a Guide because some of the information on the bill determines whether the bill gets paid or is rejected, and the Excel spreadsheet is a Guide because it serves as the set of requirements for the information that management wants captured on each cell phone bill.
The Enablers are the resources that fulfill roles in the activity. The Excel spreadsheet is an Enabler as a template that needs to be filled out, and the PC is an asset that has Excel loaded onto it.
The So-What Test
When advocating one particular approach over another, that approach needs to pass the "so-what" test — simply stated, "Why does it matter?" To avoid falling into the trap of doing something differently just to do it differently, it's important to understand the advantages of using the IGOE approach to capture and to analyze processes.
Separating the components of a process into Input, Guide, Output, and Enabler allows the analyst to focus on specific aspects of a process. For example, by strictly adhering to the definition of an Input as something that gets changed or transformed in some manner, an analyst can very clearly determine where value is created in the process, since value can only be created through a transformation of some kind. It also provides very clear definitions of where information is created, referenced, updated, and deleted (CRUD).
Statistics from research and practice indicate that about 80-85% of the opportunities to significantly impact the performance of a process result from changes to the elements in the Guide category. Therefore, it becomes very important to focus on the information in this category as one of a project's highest priorities. This is information that is often ignored or sidelined as unchangeable since it includes policies and procedures that have often become so ingrained in an organization that they actually represent the culture of the organization. Yet, when carefully and objectively analyzed, it is changes to a company's policies that can generate literally thousands of percent improvement in the efficiency of processes — for example, a reduction in cycle time from 15 days to 45 minutes for the payment cycle, or 10 days to 2 days for the installation of a new server. However, when this information is cluttered together with elements of Inputs and other types of data such as systems information or data storage references, it becomes lost and is normally not exposed in the understanding or analysis of the process. The net result is that organizations are often examining, analyzing, and improving only 15% of their opportunities, leaving 85% on the table untouched. This is like someone giving us a dollar in savings and we take only 15 cents and leave the other 85 cents untouched.
Separating the components of a process into the four categories of Input, Guide, Output, and Enabler also provides an analyst with the opportunity to examine more closely the relationships between these elements. For example, for every Enabler there should be an associated Input, Guide, or Output. If not, then why not? For every Input there should be an Output or an understanding of what happened to make the Input disappear. For every Output there should be an Input or a Guide — if not then how did the Output get created? From a measurement perspective it's very useful to examine the measurements for the Inputs verses the Outputs. For example, in the process of educating students, if the number of students coming out is less than the number of students going in, what happened? Where did the education system lose them and why? The awareness of this difference by a superintendent of a large school district led to several changes in the processes of his school system. The result was a reduction in this number. In addition, his school was presented with the Baldridge award for process excellence.
Using the IGOE approach is not about classifying for the purpose of classifying or pigeon holing information. It's about understanding the processes better and being able to analyze the processes for opportunities more easily and effectively. It's about making incredible improvements to processes that most organizations miss. It's about getting truly incredible results and payoffs from the investment of time and resources allocated to process improvement.
# # #
PIM is the generation of interfering signals caused by non-linearity in the mechanical components of a wireless system. Two signals mix together (amplitude modulation) to produce sum and difference signals and products within the same band, causing interference. Passive Intermodulation has become the new benchmark in determining the health of a cell site. Today’s mobile handset users expect consistent high throughput from their devices and, consequently, push current networks to their limit.
The upcoming next-generation networks feature increased mobile data rates of 100 Mb/s, and this higher transmission rate will expose PIM vulnerabilities in today’s networks like never before.
4G Has Different Needs
Fourth generation FDD networks require superior network transmission fidelity, higher than previous ones.
MNOs are now facing the challenge of maintaining customer loyalty in an unforgiving competitive arena. As such, good network PIM performance is now imperative! Although important, these PIM sources can easily be resolved with regular cell site transmission line maintenance and, of course, by giving site engineers the right training in high-quality installation.
Descriptively, Passive Inter-Modulation is undesired, nonlinear signal energy generated as a by-product of two or more carriers sharing the same downlink path in wireless networks.
Due to network hardware configurations, this multi carrier interaction can cause significant interferences in the uplink receive band, which can lead to reduced receiver sensitivity.
To the mobile phone user, this is often translated to a loss in audio fidelity in conversations, decreased data speeds, or in extreme circumstances, dropped calls or an inability to make or receive calls or utilize data services.
Since there is a mathematical correlation between the known carrier frequencies and the resultant interference signal in the receive band, accurate measurements of PIM signals can be achieved consistently. For practical PIM testing applications, we will only concern ourselves with those PIM signals which interfere directly with our network’s receive band. Typically these PIM signals are:
3rd order PIM = 2 × f1 – f2
5th order PIM = 3 × f1 – 2 × f2
Since Passive Intermods cannot be mathematically modeled and cannot be simulated using today’s engineering design tools, using a PIM analyzer is the only way to quantify it.
What causes PIM?
Ferromagnetic materials, when in the current path, exhibit a non-linear voltage to current ratio. This non-linear effect is accentuated at higher power levels because of increased current density.
Looking at Ohm’s law from the perspective of “Power” helps clarify the fact that the squaring effect of current results in a higher magnetic flux, which makes metals with high bulk resistivity, such as iron, steel and nickel, exhibit a magnet-like memory effect.
This effect is better known as magnetic hysteresis.
Metals that exhibit this asymmetrical magnetic flux are often the main contributors of PIM energy.
Poor metal-to-metal contact junctions can create additional nonlinearities resulting in PIM. Such nonlinearities can come from under-torqued male to female DIN 7/16 mates, as well as irregular contact surfaces such as poorly manufactured connectors and surface metal oxidation (corrosion).
CCI’s PIM-Pro 850 analyzer has a default setup with two transmit frequencies at 869 and 894 MHz, producing a 3rd order IM at 844 MHz and a 5th order IM at 819 MHz.
In this example, the focus would be on the 3rd order IM at 844 MHz since it falls within the receiver range of 824 to 849 MHz. The 5th order IM at 819 MHz is outside of the receiver range and, as such, can be ignored for the purposes of PIM testing.
It is important to observe that the actual IM frequency is determined by the two transmit frequencies and the spacing between them. A 25 MHz frequency spacing between the transmitters also results in a 25 MHz spacing between the IM signals.
Typically, the 3rd and 5th order PIM signals are the most likely to fall within the receive band with enough PIM energy to cause disturbances, while 7th and 9th order PIM signals are usually very low in power. CCI’s PimPro Passive Intermod Analyzer allows you to select which order PIM you want to measure and highlights the ones that fall in the receive band for simplicity.
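As a quick sanity check of the formulas above, a few lines of code reproduce the 850 MHz example and flag which products land in the 824–849 MHz receive band; the two tone frequencies are the PIM-Pro 850 defaults quoted earlier.

```python
# Reproduce the 850 MHz example: which IM products land in the receive band?
f1, f2 = 869.0, 894.0            # default PIM-Pro 850 transmit tones, MHz
rx_low, rx_high = 824.0, 849.0   # cellular 850 uplink receive band, MHz

products = {
    "3rd order (2*f1 - f2)": 2 * f1 - f2,        # 844 MHz
    "5th order (3*f1 - 2*f2)": 3 * f1 - 2 * f2,  # 819 MHz
}

for name, freq in products.items():
    status = "inside" if rx_low <= freq <= rx_high else "outside"
    print(f"{name}: {freq:.0f} MHz is {status} the receive band")
```

Only the 3rd order product at 844 MHz lands inside the 824–849 MHz window, which is why it is the one that matters for this test setup.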
It should be noted that PIM signals exist as a result of the combined transmission of multiple carrier frequencies within a transmission line path. The objective is to ensure that these levels, by design and in practice, should occur at an amplitude which is below the Base Stations receiver sensitivity.
The amplitude of these undesired signals is directly influenced by the fidelity of the transmission line path, including all components and junctions that can introduce a non-linear effect to the signals passing through them. laroccasolutions is proud to offer his specialized Engineers for PIM Site Optimization Service using CCI PIMPro Tester.
PIMPro Tower series
- Available LTE 700, Cellular 850, GSM 900, GSM 1800/UMTS 2100, PCS 1900/AWS 2100 & 2600 MHz models
- Real Time PIM, Return Loss measurements and distance to PIM
- 40W PIM sensitivity -135 dBm
- Lightest full power unit less than 19 lbs in a durable backpack enclosure with over three hours of battery life
- Simultaneous Real Time PIM & Return Loss measurements
- Smart phone app Wi-Fi remote
- Automatic GPS site location
- Integrated DAS test feature
- New Distance to Fault feature allows for simultaneous view of PiMPoint and Distance to Fault impedance reflections on the same graph
- New PiMPoint feature (integrated) allows distance approximation to largest PIM source, in 50 Ω path and outside the antenna
PIM Non-Linearity Analysis
PIM non-linearity increases, in theory, at a ratio of 3:1 (PIM to signal). A 1 dB increase in carrier power correlates to a theoretical increase of 3 dB in PIM signal power.
In practice, the actual effect is closer to 2.3 dB as the thermal noise constant -174 dBm/Hz becomes an error contributor.
This thermal noise floor gets closer to -140 dBm as PIMs are measured in a narrow IF filter which allows the noise level to increase at a theoretical 10 dB/decade.
This -140 dBm floor is considered a PIM analyzer’s residual IM level
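The slope discussion can be turned into a simple rule of thumb. The sketch below contrasts the theoretical 3 dB-per-dB behaviour with the roughly 2.3 dB-per-dB behaviour seen in practice; the reference PIM level used here is a made-up illustrative value, not a measurement.

```python
# Rule-of-thumb extrapolation of PIM level from one reference measurement.
# The reference levels below are made-up illustrative numbers, not measurements.
ref_carrier_dbm = 43.0   # 20 W per tone
ref_pim_dbm = -110.0     # hypothetical 3rd-order PIM measured at that power

def predicted_pim(carrier_dbm: float, slope_db_per_db: float) -> float:
    """Extrapolate the PIM level using a fixed dB-per-dB slope."""
    return ref_pim_dbm + slope_db_per_db * (carrier_dbm - ref_carrier_dbm)

for carrier_dbm, label in ((43.0, "2 x 20 W"), (46.0, "2 x 40 W")):
    theory = predicted_pim(carrier_dbm, 3.0)     # theoretical 3:1 slope
    practice = predicted_pim(carrier_dbm, 2.3)   # ~2.3 dB/dB seen in practice
    print(f"{label}: theory {theory:.1f} dBm, practice {practice:.1f} dBm")
```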
40 Watt Vs 20 Watt
In order to better represent real traffic network conditions, PIM measurements should be performed at the BTS radio power level or slightly higher.
In the last several years, a handful of 2 Watt PIM analyzers have entered the market place touting their benefits as being, smaller, more portable, and conveniently battery operated.
Although these features are obvious, these units offer limited value since 2 Watt PIM testing is not representative of typical BTS power levels of 20 Watts or higher, where PIMs are likely to be generated. PIM testing, when measured in dBc, is a measurement of relative nonlinearity.
Network operators wants to be confident about their network while under the real traffic stress
Networks engineers want a confidence buffer in their power range where PIM begins to show non-linearity. Although most of today’s BTS units output 20 Watts, the new RRU technology (roof top or tower top radios) is now at 30 or 40 Watts and in some cases even higher power levels.
Network operators need to question whether testing at 20 watts (43 dBm) is satisfactory, as it may not expose marginal network PIM conditions. This is the main reason why CCI engineers designed the PIM-Pro PIM analyzer family with 40 Watts of output power.
The graph above displays actual PIM measurement results of a load. Note the slope of the red (PIM dBm) and green (PIM dBc) traces compared to the 2-tone signal power. Also note that there is hardly any measurable non-linearity in the 2-10 W power range, due to a lack of PIM-generating power.
Measurement recommendations
Due to their low power levels, less than -80dBm, PIM measurements are difficult to make with good accuracy in the best of lab conditions, let alone the harsh conditions that can be experienced at actual cell sites. A valid and repeatable PIM measurement requires an analyzer with stable linear amplifiers, exceptionally low PIM duplexer and related components, and a well-designed receiver with a very low receiver noise floor. The CCI PIM-Pro with a residual IM level of <-140dBm is well suited to perform PIM measurements in this regard.
Recommended measurement practice includes:
- Visually inspect and clean all connectors before mating them.
- Torque all connections to a minimum of 16, and maximum of 18 ft-lbs (23-24 Nm).
- Allow measurements to thermally stabilize, especially in cold weather. Use PIM vs Time mode at highest available power (40W on PIM-Pro) to establish confirmation of a stable measurement using a low PIM load on the test port.
- In order to maintain measurement confidence, regularly verify measurement accuracy using a quality load and a PIM Standard – a calibrated device which looks like a connector, and is designed to deliberately generate a known PIM (usually -100dBm), which can be used to check that the PIM analyzer is measuring correctly. Using a quality low PIM load will confirm faulty components.
- Due to the non-linearity of the PIM response it is wise to test at higher power levels than necessary to ensure an acceptable measurement error margin.
- Use higher power to confirm marginal measurements, as 2 x 20 W two-tone PIM testing is often not enough power to uncover a marginal PIM situation. Higher power 2 x 40 W testing provides additional field diagnostic capability.
The DIN 7-16 connector is rated for 500 mating cycles. Although the connector can probably survive up to 1,000 matings, it is important to be cognizant of the constant wear and tear on cables and the PIM tester's output connector. In the world of RF measurements, problems often start in the components used to perform the measurement at hand. Test cables are typical culprits. | <urn:uuid:e6af75bf-0448-4b7b-99c0-fd66dc5e29f7> | CC-MAIN-2022-40 | https://arimas.com/2015/05/09/pim-passive-intermodulations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00177.warc.gz | en | 0.923177 | 2,233 | 2.578125 | 3 |
Lidar vs. VoxelFlow: What’s the Difference?
Over the past decade, Lidar and camera-based technologies have been at the helm of current autonomous vehicle (AV) systems, paving the way for safe driverless cars. But as smart cities and autonomous driving become a fast-approaching reality, new technologies are finding ways to improve industry standards, ensuring we have a realistic and, most importantly, safe path to the future of AV.
Though Lidar has become the standard for AV companies like Alphabet's Waymo, its latency still has a long way to go. If the AV industry wants to make true autonomous driving a reality, updates to safety standards will need to be prioritized. That's where VoxelFlow comes into play: an event-based 3D sensor system designed specifically with autonomous driving in mind.
The Old Industry Standard: Too Little Too Late
Lidar sensors have been popping up across a multitude of industries, using Light Detection and Ranging to make their mark on agriculture, weather machines and even iPhones. The sensors use invisible laser beams to detect objects within their range; in the case of AV, Lidar is currently used to detect obstacles in front of moving vehicles. Originally designed for satellite tracking, Lidar excels at distances beyond 30 to 40 meters and is extremely fast when compared with the human eye. But when operating within range, or crashing distance, of an approaching object, Lidar faces limitations in speed and distance. Similarly, automotive camera systems (often used in conjunction with Lidar systems) operate at 30 frames per second (fps), which results in a 33-millisecond process delay per frame. This series of delays can add hundreds of milliseconds to a vehicle's detection and reaction times, having potentially disastrous effects in dense urban environments.
Lidar's and camera-based technologies' parallel, frame-based approach to detection allows the systems to fall victim to the speed limits of perception and reaction time, with the limited ability to recognize only 33,000 light points per frame, a mere fraction of what's possible with new technology. In short, this approach is nowhere near fast enough, and we need faster sensors if we want to stand a chance at realistically tackling the constant variables drivers contend with on a day-to-day basis.
Where VoxelFlow’s Motion Perception Tech Excels
Though Lidar shines at greater distances, VoxelFlow’s low latency is designed to cater to object detection within crash-range. Where a human driver sees an oncoming object approaching and is able to gradually slow down their approach, VoxelFlow will be able to react to sudden, abrupt obstacles with a speed humans and Lidar are simply incapable of. Meanwhile, if weather conditions aren’t optimal or the object’s color doesn’t contrast enough against its background, a camera system may not even detect the obstacle at all.
When you consider Lidar’s broad origins in government and enterprise spaces, it’s not surprising that its accompanying cost is higher, as well. The systems can cost anywhere from a couple thousand to hundreds of thousands of dollars, most of which is fronted by the end user. And with new premium models constantly hitting the market, it can be hard for users to keep up. Simply put, Lidar was not designed with the average consumer in mind.
Lidar can be bulky. There’s no way around it. The system needs to be placed somewhere on the vehicle where it can have a clear sight line. And though some luxury vehicle owners might insist they don’t mind a rather large box being placed on the roof of their BMW, it definitely isn’t going to be the best look in the long term.
VoxelFlow to Drive the Future of Autonomous Vehicles with Precision & Speed
VoxelFlow’s revolutionary technology will enable vehicles to classify moving objects at extremely low latency using very low computational power. The tech can produce 10 million 3D points per second, as opposed to only 33,000, and the result is rapid edge detection without motion blur. With VoxelFlow, AV will be able to navigate the road with more precision than a human, as its ultra-low latency allows it to accelerate, brake or steer around objects that appear abruptly in incomparable time.
VoxelFlow will empower the next generation of Level 1-3 advanced driver-assistance systems (ADAS), improving safety performance while also realizing the promise of truly autonomous L4-L5 AV systems. This critical tech will consist of three event image sensors distributed throughout the vehicle and a centrally located continuous laser scanner that provides a dense 3D map able to tell the difference between a stationary mailbox and a puppy running across the road. And with its significantly reduced process delay, it is geared to meet the actual real-time constraints of autonomous systems.
VoxelFlow’s sensor system will automatically and continuously calibrate to handle shock, vibration and blur resistance while also providing the required angular and range resolution needed for ADAS and AV systems. It will also perform well in adverse weather conditions compared to lidar systems that degrade in these conditions due to excessive backscatter.
Where Lidar works well at ranges beyond 40 meters – even though it costs a fortune – VoxelFlow is the complementary technology Lidar systems need. When used together, we can significantly improve detection and most importantly, enhance safety and reduce collisions, paving the way for a future where drivers can have confidence in the autonomous vehicles on the road. | <urn:uuid:7c19f7d8-35f3-40c4-add5-0706c0d6495c> | CC-MAIN-2022-40 | https://coruzant.com/smart-tech/new-event-based-tech-brings-advantages-to-autonomous-driving-market/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00177.warc.gz | en | 0.927332 | 1,157 | 2.65625 | 3 |
This article provides a brief overview of common Linux commands. If you are new to the Linux command line, there are lots of free resources for learning more about the topic. We quite like Linux Notes for Professionals, which is compiled from Stack Overflow documentation. However, a quick online search will return plenty of other guides and books.
We also have knowledgebase articles about more advanced command-line utilities:
Print the name of the current directory (the “working directory”).
List all files in the current directory, apart from hidden files (“dot files”).
List all files, including hidden files.
Long listing (includes information about files and directories).
Long listing sorted by date (newest file first).
Long listing sorted by size (largest file first).
Long listing sorted by size and in reverse order (smallest file first).
Long listing with files sizes in human-readable format (i.e. “1K” instead of “1024”).
Change the working directory to the home directory.
Change the working directory to the directory [directory].
Change to the parent directory.
Change the working directory to the last working directory.
Create the file [file]. If the file already exists the timestamp will be updated.
Determine the file type. File extensions have no meaning in Linux. For instance, a zip archive doesn’t need to have a .zip extension. Similarly, a PHP script can have a .png extension (which is fairly common on compromised websites). The file utility confirms the actual file type.
Copy [file] to [destination]. For instance, cp wp-config.php wp-config.php.bak makes a backup of the wp-config.php file.
Preserve file attributes such as the ownership and timestamps. Without this option attributes are overwritten.
Prompt before overwriting an existing file. By default cp overwrites any existing destination file.
Remove [file]. The file is permanently deleted – there is no wastebasket.
Prompt before removing a file.
Remove all files in the current directory. This does not remove directories.
Remove [directory] and anything contained in it. Please triple-check that the command you entered is correct – you won’t be the first person to accidentally delete important data.
Remove [directory] and anything contained in it, and never prompt before removing a file.
Remove the directory [directory]. The directory is only removed if it is empty.
Create (make) the directory [directory].
Make the directory and any parent directories (if they don’t exist yet).
Print the contents of [file]. Although the cat utility is designed to concatenate files, it is more commonly used to print the contents of one or more files.
Print the contents of [file] using a pager. The pager is used to display the contents of the file one page at a time, which is useful if the file is very large.
You can use your arrow keys and the Page Up and Page Down keys to navigate up and down (Home and End also work), and you can search the file using a forward slash (/) followed by the search string (i.e. /search_string). Search results are highlighted and you can use the n key to move the cursor to the next match.
Print the first ten lines of [file].
Print the first three lines of [file].
Print the last ten lines of [file]. As with head, you can change the number of lines to be displayed. For instance, tail -5 prints the last five lines.
Print the last ten lines of [file] and follow any changes made to the file. Any lines that are added to the file are printed to the screen. This is useful to monitor log files.
Change the [user] and [group] owner of [file]. Note that the user and group are separated by a colon (:).
Recursively change the [user] and [group] ownership of [directory].
Change the permissions of [file]. The permissions are for the user, group and others, and the possible permissions are read (4), write (2), execute (1) or none (0). The numbers are the octal representation of these permissions. If you want a user to be able to read a settings.php file and deny all other permissions you can therefore use:
chmod 400 settings.php
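To illustrate the octal arithmetic behind modes such as 400 or 644, here is a small Python sketch. It is purely illustrative of how the digits are built up; chmod itself is used as shown above.

```python
# Each octal digit is the sum of read (4), write (2) and execute (1)
PERMS = {"r": 4, "w": 2, "x": 1}

def digit(symbolic):
    # e.g. "rw-" -> 6, "r--" -> 4, "---" -> 0
    return sum(PERMS[c] for c in symbolic if c in PERMS)

def mode(user, group, other):
    # Build the three-digit mode string used by chmod
    return f"{digit(user)}{digit(group)}{digit(other)}"

print(mode("r--", "---", "---"))   # 400 - owner may read, everyone else denied
print(mode("rw-", "r--", "r--"))   # 644 - owner read/write, group and others read
```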
Download [url]. For instance, you can download a robots.txt file to block naughty bots:
Download [url] and save it as [output name]. The -O option overwrites any existing file, which is useful if you want to, say, grab the latest robots.txt file:
wget -O robots.txt https://bitbucket.org/pajac2/robotctl/raw/master/block_list/robots.txt
Compress [file]. The file is replaced, and the compressed file is named [file].gz.
Compress [file] but keep the file that is compressed.
Print information about [file.gz], such as the compression ratio.
Decompress [file.gz]. You can also use gzip -d to decompress a file.
Archive and zip [directory] and give it the name [name.tar.gz]. The -v flag is optional; it prints verbose output to the screen, including all the files that are added to the archive.
Extract the [name.tar.gz] archive in the current working directory.
Extract the [name.tar.gz] archive to [directory].
Display the manual for [command].
Display the executable for [command].
Display the shell history (that is, commands that have been run). Each entry shows a number followed by the command that was executed.
Run command number [n].
Run the last command in the history. If you ran a command that required sudo privileges you can repeat the command by using sudo !!.
Delete entry number [n]. This is useful if, for instance, you included a password in a command (such as a mysql command). | <urn:uuid:b03ea2c1-9e8c-4bf2-9b7d-97ce433edc7c> | CC-MAIN-2022-40 | https://www.catalyst2.com/knowledgebase/server-management/basic-command-line-utilities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00177.warc.gz | en | 0.848991 | 1,388 | 2.75 | 3 |
Will AI Create the Coronavirus Vaccine?
As the COVID-19 pandemic is shutting down whole countries, a few of you may be wondering whether AI can help create a SARS-CoV-2 vaccine. (COVID-19 is the name of the disease caused by the SARS-CoV-2 virus.) After all, AI is magic, right? Or, at least, this is a rather promising and very hot area of applied AI research, with 221 startups reportedly using AI for drug discovery, including several who have either discovered new compounds or created entirely new drugs using AI.
And yes, AI is being used in connection with COVID-19 and SARS-CoV-2: from understanding the structure of the virus’s spike protein (which attaches to and infects human cells) to genetically sequencing the virus to automating design and testing of virtual molecules with the desired characteristics for the development of a drug to treat it.
AI and machine learning are also being used to:
- Diagnose COVID-19
- Detect people with fever
- Predict the survival of those infected
- Disinfect public spaces
- Track the spread of the virus
- And many other things
Will we see a SARS-CoV-2 vaccine created with the help of AI technologies? It remains to be seen. The challenges do not seem to be with the AI tech, or at least not with the discovery side. Some experts say that the vaccine could be created in a lab in a matter of weeks. The challenge lies with the regulatory and manufacturing side.
First, the vaccine will need to get through clinical trials, which is a long process. Even an accelerated process may take 12 to 18 months to ensure the vaccine is safe and effective.
Also, millions of doses of the vaccine will be needed, and there is currently no established manufacturing process that could be adopted for the SARS-CoV-2 vaccine because “there are no vaccines on the market for any coronavirus, not even SARS or MERS … If this were the flu, manufacturing could be more easily spun up because the technology is in place to mass-produce the annual flu vaccine,” reports Consumer HealthDay.
So for now, wash your hands, maintain social distancing, monitor announcements by the health authorities in your area, ensure you have sufficient supplies for a few weeks if you need to stay at home, and don’t panic. We are all likely to get the coronavirus, predicts The Atlantic, but it will not be deadly. (The impact on the economy is entirely another matter, and your analyst wonders about the long term impacts on privacy and human rights as this epidemic has cleared a way for many surveillance technologies.)
For an entertaining and informative take on the coronavirus, watch this episode of John Oliver’s Last Week Tonight show. To prepare your business for the pandemic, visit Info-Tech’s COVID-19 Resource Center.
Want to Know More?
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why these are important and what organizations should do, no tools to help implement these principles have existed – until now.
Recently I attended the inaugural Emotion AI conference, organized by Seth Grimes, a leading analyst and business consultant in the areas of natural language processing, text analytics, sentiment analysis, and their business applications. So, what is emotion AI, why is it relevant, and what do you need to know about it?
SortSpoke’s novel approach to machine learning answers a longstanding problem in financial services – how to efficiently extract critical data from inbound, unstructured documents at 100% data quality.
Amazon is offering its cashierless store technology to other retailers. The technology known as “Just Walk Out” eliminates checkout lines, offering an “effortless” shopping experience and shifting store associates to “more valuable activities”.
Alphabet is facing backlash from its shareholders over its approach to digital privacy, reports the Financial Times. And not for the first time. This time, however, things will need to change.
The EU plans to invest €6 billion to build a single European data space, reports EURACTIV. The envisioned space will house personal, business, and “high-quality industrial data” and create the infrastructure for data sharing and use across businesses and nations.
“Facebook quietly acquired another UK AI startup and almost no one noticed,” reported TechCrunch on February 10. We looked into why.
In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detection of welfare fraud.
Databricks, a data processing and analytics platform with a strong focus on AI and ML, has partnered with Immuta to deliver automated end-to-end data governance for AI, data science, and ML projects. | <urn:uuid:4d61945c-95b2-4204-b5f2-f965b30faa0a> | CC-MAIN-2022-40 | https://www.infotech.com/software-reviews/research/will-ai-create-the-coronavirus-vaccine | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00177.warc.gz | en | 0.940458 | 1,040 | 3.40625 | 3 |
Exploitation of the Rowhammer attack just got easier.
Dubbed ‘Throwhammer,’ the newly discovered technique could allow attackers to launch a Rowhammer attack on targeted systems just by sending specially crafted packets to vulnerable network cards over the local area network.
Known since 2012, Rowhammer is a severe issue with recent generation dynamic random access memory (DRAM) chips in which repeatedly accessing a row of memory can cause “bit flipping” in an adjacent row, allowing anyone to change the contents of computer memory.
The issue has since been exploited in a number of ways to achieve remote code execution on the vulnerable computers and servers.
However, all previously known Rowhammer attack techniques required privilege escalation on a target device, meaning attackers had to execute code on targeted machines either by luring victims to a malicious website or by tricking them into installing a malicious app.
Unfortunately, this limitation has now been eliminated, at least for some devices.
Read more: The Hacker News | <urn:uuid:9b56d59e-8eb8-4dcd-9091-bc86ef7980dd> | CC-MAIN-2022-40 | https://www.globaldots.com/resources/blog/new-rowhammer-attack-can-hijack-computers-remotely-over-the-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00177.warc.gz | en | 0.942318 | 202 | 2.671875 | 3 |
Over the years, many standards and frameworks have been developed and adopted to address information security concerns. Information security, once a niche domain and often an afterthought for business executives, has come to occupy center stage.
This is the result of the wholesale migration of enterprise data to computer systems that are networked with each other, with different parts of an organization's network, with third-party networks through VPNs and leased lines, and with an always-on internet accessed by a variety of endpoints from different locations.
The situation is made more challenging by the plethora of technologies and software that increase the attack surface, and by an ever-evolving threat landscape that has become more and more sophisticated over time. The other reason is the overwhelming dependence of present-day business on information, which is not just an asset but the most important asset – so much so that the focus of all BCP and DR programs is on securing and restoring information.
Given the above scenario it is understandable why there are so many information security standards and why they are so important. It is to give organizations and nations a direction and guidance as to how to approach and best secure information and information assets and how to evaluate effectiveness. Otherwise, every organization will have to reinvent the wheel and most will not be able to do it to any degree of efficiency and whatever they do will be disputed as to its effectiveness and intent.
As mentioned above, there are many information security standards – some global, some national and some industry specific. In this article we will discuss two such standards, namely ISO 27001 and NESA. The two are quite different but share a lot of common ground. Let us discuss the two standards briefly before we compare them and consider how an organization should approach them for the purpose of implementation and compliance.
ISO 27001 is the global de facto information security standard which comes from ISO or the International Organization for Standardization. The latest iteration of the standard is ISO/IEC 27001:2013 (IEC means International Electrotechnical Commission, a body which works with ISO to produce standards on electrical, electronic, and derived technologies). In fact, it is one among many standards from the family of ISO 27000 standards all of which are devoted to information security.
ISO 27001 is the main standard against which an organization can be audited and certified while the other standards in the family support ISO 27001. The chief among the other standards in the family are ISO 27000 (introductory standard which defines information security terms and terminologies), ISO 27002 (provides guidance about implementing the controls listed in Annexure A of ISO 27001), ISO 27005 (provides guidance on performing information security risk management), ISO 27011 (provides guidance about ISO 27001 implementation for the telecom sector) etc.
ISO 27001 follows the same uniform format as that of other ISO standards which will be easily recognizable to anybody who is acquainted with ISO standards. ISO standards are very neat and easy to read even by laypeople. ISO 27001:2013 is particularly very well-crafted, elegant, and easy to navigate. Being an international standard, it is very broad based and does not go into specifics but provides enough wherewithal to design an ISMS which best suits one’s purpose. A copy of the standard can be purchased from the ISO website at https://www.iso.org/standard/54534.html.
The standard follows a risk-based approach to information security and consists of 7 mandatory clauses which form the core of the standard. The clauses guide an organization about how to design, implement and operate an Information Security Management System, commonly referred to as an ISMS. The different clauses are shown mapped to the stages of a PDCA cycle below to give context and for better understanding.
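The mapping figure is not reproduced here, but the grouping commonly used when presenting the standard runs roughly as follows. This is a sketch: the clause titles are from ISO/IEC 27001:2013, while the PDCA grouping is an interpretation found in training material rather than wording from the standard itself.

```python
# Commonly cited mapping of the mandatory ISO/IEC 27001:2013 clauses to the PDCA cycle
pdca_mapping = {
    "Plan":  ["4 Context of the organization", "5 Leadership", "6 Planning", "7 Support"],
    "Do":    ["8 Operation"],
    "Check": ["9 Performance evaluation"],
    "Act":   ["10 Improvement"],
}
```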
The clauses are followed by an annexure which lists control areas, control objectives and controls to achieve the control objectives for each area. There are 14 control areas (also called control sets) and 114 controls to achieve the control objectives identified for each area. It is not mandatory to implement all of them but it is mandatory to consider all of them and implement those which are relevant. Any exclusion is to be thoroughly justified. Most organizations end up implementing all the controls. The result of the process of evaluating (i.e., selecting and implementing a control or rejecting and excluding it from implementation) the 114 controls is a document (called documented information in ISO parlance) called the ‘Statement of Applicability’. The below list reproduces the control sets and the number of controls against each control set.
ISO 27001:2013 Control Sets
A.5 Information security policies (2 controls)
A.6 Organization of information security (7 controls)
A.7 Human resource security (6 controls)
A.8 Asset management (10 controls)
A.9 Access control (14 controls)
A.10 Cryptography (2 controls)
A.11 Physical and environmental security (15 controls)
A.12 Operations security (14 controls)
A.13 Communications security (7 controls)
A.14 System acquisition, development, and maintenance (13 controls)
A.15 Supplier relationships (5 controls)
A.16 Information security incident management (7 controls)
A.17 Information security aspects of business continuity management (4 controls)
A.18 Compliance (8 controls)
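As a concrete illustration of the Statement of Applicability exercise described above, the sketch below shows one possible way to record the outcome for a couple of Annex A controls. The record fields, justifications and the use of Python are illustrative assumptions, not requirements of the standard; the control numbers and titles are taken from Annex A of ISO/IEC 27001:2013.

```python
# Minimal, illustrative Statement of Applicability records
soa = [
    {
        "control": "A.10.1.1",
        "title": "Policy on the use of cryptographic controls",
        "applicable": True,
        "justification": "Customer data is stored and transmitted electronically",
        "status": "Implemented",
    },
    {
        "control": "A.14.2.7",
        "title": "Outsourced development",
        "applicable": False,
        "justification": "All software development is performed in-house",
        "status": "Excluded",
    },
]

applicable = sum(1 for entry in soa if entry["applicable"])
print(f"{applicable} of {len(soa)} sampled controls are applicable")
```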
National Electronic Security Authority (NESA)
The full form of NESA is National Electronic Security Authority. It is a UAE government body constituted in September 2012 under the aegis of the Supreme Council of National Security and responsible for UAE’s information security strategy. It also aims to foster a culture of information security awareness and best practices among all concerned and strengthen the security of UAE’s information assets and digital infrastructure. The current name of NESA is SIA, or Signals Intelligence Agency, though it still goes by its old name. The body has formulated standards and documentation which is collectively called the NESA Information Pack. The chief formulations in the NESA Information Pack include the IAS (Information Assurance Standards), Critical Information Infrastructure Protection Policy (CIIP) and Cyber Risk Management Framework (CRMF).
Of the mentioned formulations, the IAS is the standard set by the UAE with respect to information security for organizations. A copy of the standard can be purchased from sites like scribd.com, coursehero.com etc. NESA does not clearly define the applicability of the IAS but puts it as applicable to all government and private organizations which process, deal with, or are part of UAE’s critical information infrastructure. What it basically means is that it is applicable to all organizations which deal in and provide utility products and services to customers in the UAE or the government of UAE.
The IAS is heavily influenced by ISO 27001 and NIST standards (National Institute of Standards and Technology, an American body that develops standards). However, its approach is different than that of ISO 27001:2013. We will go through the differences at length in the difference section that follows.
NESA’s IAS is based on a threat-based model. To put it very simply, NESA identified and compiled a list of cyber security threats from industry data and categorized the threats in terms of severity and frequency of occurrence.
Then it devised controls to counter those threats and prioritized the controls corresponding to the threat level it addressed. There are 4 priority levels from P1 to P4. Implementing IAS in an organization begins with implementing the P1 controls. An organization must demonstrate that it has successfully implemented the P1 controls at least to be considered compliant.
The tables below show the control categorization; Table 1 shows the priority breakup of the controls.
Apart from the priority levels, the controls are also grouped into 2 broad groups called families depending on the type or nature of the control. The two control families are the Management and Technical Controls families.
There are 188 controls in all out of which 60 are management controls and 128 technical controls. They are further broken down into 564 sub controls. Below tables show the breakup of controls into management and technical control families.
Generic Similarities Between ISO 27001 & NESA
- Both are information security standards.
- Both follow a PDCA (Plan Do check Act) model. The clauses of ISO 27001:2013 and the activities of IAS correspond to a PDCA cycle.
- Both mandatorily require risk assessment and risk treatment of unacceptable risks.
- Both result in the implementation and operation of a robust ISMS.
- Both focus on securing information and not IT assets or the IT function and hence require involvement of staff from all quarters.
- Both require a high degree of management involvement and commitment for success. At the same time both require a good understanding and commitment towards information security and information security practices by all employees and stakeholders.
- Both are continuous processes and need to be treated as a program, not a project with a beginning and an end.
11 Differences Between ISO 27001 & NESA
ISO 27001 is based on a business risk approach. It identifies and grades information and information assets based on their criticality to the business and then applies appropriate controls depending upon the level of risk associated with the information or information asset.
NESA IAS follows a threat-based approach and is geared towards mitigating those threats to the information infrastructure. A threat-based approach means that organizations do a risk assessment of the 24 threats identified by NESA to determine which are applicable to it and which it should be most concerned about and then work towards their mitigation by applying appropriate controls.
ISO 27001 gives organizations the liberty to decide on the scope of implementation. The scope can be defined in terms of location, function, process, department, product etc. Usually, organizations go for a phased implementation and begin with a particular department, location etc. Once it is successfully implemented and certified the success is replicated either across the entire organization or phase wise to different departments, locations etc.
NESA IAS does not give the flexibility of defining scope to the organization. If an organization falls under its ambit, it is the whole of the organization. This makes it more challenging to implement and maintain.
3. Risk Assessment:
ISO 27001 mandates risk assessment and management but does not mandate the basis for it. Asset based, process based, scenario based, threat based etc. are all valid. Even today, most organizations prefer to do an asset-based risk assessment despite the free hand given to them. The standard also does not prescribe the risk management methodology to be adopted, though we have ISO 27005 for that purpose. An organization can use any of the numerous methodologies available like the Risk IT Framework, OCTAVE, MEHARI, ISO 27005 etc.
NESA IAS does not accept an asset-based risk assessment and mandates a threat-based or process-based risk assessment. Also, the risk assessment should ideally be as per NESA’s Cyber Risk Management Framework (CRMF). Though it is not a stated requirement, it is an unstated requirement of sorts.
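Neither standard prescribes a specific scoring scheme, but most of the methodologies named above boil down to a likelihood-times-impact calculation of the kind sketched below. The 1-5 scales, the example risks and the acceptance threshold are illustrative assumptions, not values taken from either standard.

```python
ACCEPTANCE_THRESHOLD = 8   # assumed: scores above this require treatment

def risk_score(likelihood, impact):
    # Both factors on an assumed 1-5 scale; the score is their product
    return likelihood * impact

risks = {
    "Phishing leads to credential theft": risk_score(4, 4),
    "Server room over-temperature": risk_score(2, 3),
}

for name, score in risks.items():
    action = "treat" if score > ACCEPTANCE_THRESHOLD else "accept"
    print(f"{name}: score {score} -> {action}")
```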
ISO 27001 is more broad-based and less specific and prescriptive. NESA IAS is more specific and precise in its definition and requirements.
ISO 27001 does not categorize controls on a weighted basis. All controls are equal in weight and significance and are either applicable or inapplicable in a certain context.
NESA IAS categorizes controls into management and technical controls and prioritizes them from P1 to P4 levels of priority (weighted) depending upon the level of threat it addresses.
In ISO 27001, compliance measurement is graded into major non-conformity, minor non-conformity, observations (qualified and unqualified) and opportunity for improvement.
In NESA IAS, compliance is measured in binary i.e., either compliant or non-compliant.
7. Evaluation & Certification:
ISO 27001 certification is achieved through an external audit conducted by an accredited certification body. Internal audits do not provide certification but are a part of continual improvement and a certification requirement. Certification depends on the overall performance of the organization’s ISMS. Usually, a major non-conformity, or multiple minor non-conformities (especially ones pertaining to a single process), prevents certification or leads to the loss of an existing certification.
In NESA IAS, a formal audit is not required even though almost all organizations do it to ensure that they are compliant. Organizations can attest their compliance with the standard by performing a self-assessment and sharing the result to NESA. If NESA sees fit it can probe further by seeking additional information or by intervening itself or by authorizing someone or an entity to conduct additional test of controls. The level of involvement in the evaluation process on the part of NESA or relevant government regulatory bodies depends upon the organization and its criticality to UAE’s information infrastructure or the kind and nature of non-conformity or suspected non-conformity.
ISO 27001 is a global best practice standard with universal acceptance and is mostly voluntarily adopted. The standard does not make itself applicable to anyone.
NESA IAS is a national standard which makes itself a requirement to organizations and is enforced by the UAE government and entities and processes authorized to do so.
ISO 27001 is a much older standard which has undergone several revisions. The initial version was launched in November 2005 and the last revision was in 2017.
NESA IAS is a newer standard which came into existence in 2015.
10. Pros & Cons:
ISO 27001 is very good at developing and implementing an all-encompassing, robust ISMS but may fall short in taking care of real-life threats posed by sophisticated and new cyberattacks, especially zero-day attacks and APTs (Advanced Persistent Threats). It is more customizable and can fit a larger basket. ISO 27001 is almost the precursor and pioneer of information security standards and has bred standards like the NESA IAS.
NESA IAS is very good at tackling real life threats posed by cyberattacks but may fail to prevent data leakage, data pilferage especially through social engineering and by insiders. It is intended to be strict and specific to the security context of the UAE. NESA IAS is a hybrid standard and combines the best from many standards.
11. Non-compliance consequences:
Since ISO 27001 is a non-governmental standard from an international body non-compliance leads to not being able to certify or re-certify. ISO does not have any means or authority to enforce compliance. ISO certifications work by virtue of their credibility and moral authority.
Since NESA IAS is a national standard with government and legal sanction non-compliance can lead to escalation and punitive measures in the form of increased scrutiny, imposition of additional requirements, having to bear the cost of additional audits, fines, lawsuits and in the worst-case scenario arrest of top executives or a complete ban to do business in or with the UAE.
Comparison of Controls of ISO 27001 & NESA IAS
ISO 27001 has only 114 controls which is much less compared to IAS.
NESA has a total of 188 controls, which are further divided into 564 sub-controls – a huge volume.
It only considers specific activities and measures to address specific risks or to attain specific security objectives as controls.
It considers high-level management activities as controls, which is different and unique. It is somewhat controversial as to how high-level management activities can be considered controls; usually, high-level activities are responsible for producing, modifying, and fine-tuning controls. An example is M.1.1.1 (Understanding the entity and its context), which corresponds to clause 4 (Context of the Organization) of ISO 27001.
Technically speaking, none of the controls specified in the Annexure A of ISO 27001 is mandatory even though practically speaking most of them are relevant to most organizations and implemented voluntarily.
NESA IAS on the other hand has a set of 35 management controls which are always applicable (unless justified by the risk assessment as not being so) and must be mandatorily implemented.
ISO 27001 does not go into the details of how to implement a control to be compliant or how to measure the success of the control implementation. In fact, it just lists the controls and does not do anything apart from identifying areas which need to be addressed through the suggested controls and control activities. Even ISO 27002 which contains the guidelines about implementing the ISO 27001 controls do not go beyond a point except for handholding and showing the way.
NESA IAS goes into great depth for each security control specifying how exactly to implement it to be compliant (sub controls), how to measure it (performance indicators), how to automate the control if and where possible (automation guideline), the type of cyberattack it protects against (relevant threats and vulnerabilities) and additional implementation help and suggestion (implementation guidance). So, it is very much unlike ISO 27001 in this regard leaving little to the imagination.
ISO 27001 or NESA IAS, Which Should be Implemented First?
With respect to the UAE implementing ISO 27001:2013 is optional but NESA is mandatory. So, the decision to implement either or both will depend upon the organizations coverage or to be more precise upon the organization’s clientele or customer base. If the organization has an international clientele or customer base extending beyond the UAE then it might have to and should ideally implement ISO 27001:2013. Otherwise NESA IAS might suffice.
If an organization must and/or decides to implement both standards, my suggestion would be to start with ISO 27001:2013 and then follow it up with NESA IAS. Since NESA IAS is inspired by ISO 27001:2013 and follows a similar approach, but to a slightly different end, it might be easier to implement IAS after an ISO 27001:2013 implementation.
An IAS implementation that follows an ISO 27001 implementation can be easily achieved through a gap analysis and working to fill the gaps, especially in terms of control compliance, since the framework of policies, processes and procedures which make up the management system will already be there.
Challenges in Implementation & Maintenance of ISO 27001 &/or NESA
Given the expansive nature of both standards, implementing either of them is a huge challenge, with IAS being somewhat more challenging. The challenge with IAS arises because of its applicability to the entire organization, its huge set of 564 sub-controls which must be implemented for every security control (unless a control can be excluded through risk assessment), and the need to monitor control performance as specified by the standard (unless a custom means of measurement can be justified by the context of the organization).
ISO 27001 on the other hand requires a lot of high-level activities and voluminous documentation. Performing so much in one go is almost impossible even for the most sophisticated IT corporations. For non-IT companies for whom IT is a supporting cost function it is too much of an ask. Therefore, most companies including IT companies need a qualified full time information security consultant to help them in their journey to compliance. There are many such independent consultants and consultancy firms that one can choose from.
Implementation Pitfalls and How to Avoid Them
It is no wonder that such complex standards will have pitfalls for anyone trying to navigate them. It is also no wonder that many organizations (especially those without adequate expertise or domain knowledge) fall for them and founder. Some of the most common pitfalls that can be identified with respect to organizations trying to interpret and operate the standards are as below:
- Loss of focus – Given the length and breadth of the standards it is quite natural for one to get lost along the way and lose focus as to the real purpose and how it is to be achieved. The result might be that an organization remains compliant with open but undetected security issues. It might also lead to a situation where the organization loses sight of the fact that the standards are not technical standards but management system standards for better management of enterprise information. The technical aspect is a small part as to how to use the most suitable and cost-effective technology and technology configuration to achieve that.
- Control Risk – Too much focus on controls and sub controls especially in the case of IAS pose the threat of overdoing and overcompensating paving the way for control risk (risk posed or introduced by a faulty or ill designed control) which is a serious security threat.
- Too much management focus – It might also happen that too much high-level deliberation and intervention by the management might lead to unnecessary complication and paralysis on the ground and the entire exercise is reduced to meetings and documentation and maintaining and manufacturing evidences to demonstrate compliance.
- Confusion and Fatigue – It is very important that there is someone, especially at the top of the ISMS team, who really understands the standard and has a clear mind to be able to take accurate decisions and give unambiguous direction. This will avoid loss of time and wasted effort, all of which ultimately leads to losing steam and burning out.
- Performing without realizing – It might also happen that the ISMS becomes so much a part of the culture that people practice it mechanically without applying their mind unless they are jolted by a disaster. This is a dangerous situation as complacency with respect to information security can be very costly. So, there must be continuous trainings and workshops for the relevant people and a robust and independent internal audit process to keep everybody awake and aware.
To avoid the above it helps to gather as much information as possible from all quarters and do a thorough study of the standards. It helps one to understand what is expected and how it can be done with the least pain and without running into serious trouble. Also, professional help from a qualified and experienced consultant is not just desirable but almost indispensable at least to begin with. Both standards allow a phased implementation and organizations should make use of the facility.
ISO 27001 does so in terms of scope and NESA IAS does so in terms of a phased implementation of the security controls as per priority. An organization is expected to demonstrate compliance with the implementation of P1 controls and ‘always applicable’ controls to begin with to be considered compliant and on the right track.
Both the standards are critical in today’s business context for any enterprise. In order to appreciate them better, it is important to know about further details of their controls and where these standards intersect and where they diverge.
It is also critical to appreciate their similar but slightly different contexts of operations and the purposes that they fulfil. If implemented in a contextual way, both the standards can help the enterprises benefit immensely. | <urn:uuid:321fb51c-a99a-4eff-b84d-a2d1b114a698> | CC-MAIN-2022-40 | https://www.consultantsfactory.com/article/a-critical-comparison-between-iso-27001-nesa | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00177.warc.gz | en | 0.939375 | 4,753 | 2.90625 | 3 |