Introduction to OSPF Virtual Link
Virtual links are used to extend Area 0 across another area. They can also be thought of as tunnels for LSAs. They are deployed when the backbone becomes partitioned or when an area does not border Area 0. To configure a virtual link, use the following router command:
Router(config-router)#area <transit_area_id> virtual-link <router_id_of_remote>
The area_id is the “transit area” that OSPF will tunnel through. The “transit area” cannot be a stub area of any kind. At the end of the tunnel will be another router terminating the virtual link; use the router ID of that router in the router_id field. Virtual links use RIDs, and they are another reason why we use fixed RIDs when configuring OSPF. Remember that a virtual link is actually an extension of Area 0. Think of the virtual link as the router’s new interface into Area 0. Configure all interface options on the virtual link.
OSPF Virtual Link Configuration: Example Scenario
The example scenario below illustrates where, when, and how to use OSPF virtual link configuration in an OSPF domain.
As depicted in the diagram, we have three areas connected in series: Area 0, Area 1, and Area 2.
The IP addresses, subnets, and interface details for each area are shown in the table below.
Once OSPF is configured as per the above table, we can see the OSPF neighborship formed among all the OSPF-speaking routers R1, R2, R3, and R4:
Now let's see how many routes R1 learns from its OSPF neighbor R2:
As we can see, no inter-area (O IA) routes are learned by R1. The reason: the OSPF rule book says that a non-backbone area cannot learn OSPF routes from another non-backbone area, which means R1 (in Area 2) cannot directly learn the OSPF routes of Area 1 (which is itself another non-backbone area).
Also, let's see whether R3 learns Area 2 routes:
As expected, R3 learns no routes for networks in Area 2.
Now, how do we correct this situation so that R1 learns routes from Area 1 and Area 0?
The solution is a virtual link: we need to extend the backbone (Area 0) via a virtual link up to ABR router R2. Note: a virtual link is a logical link that uses the least-cost path through the transit area between the ABR of the non-backbone-connected area and the backbone ABR.
The configuration needs to be performed on the transit-area ABR routers, i.e., on R2 and R3, as follows:
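A minimal configuration sketch is shown below. The OSPF process ID (1) and the router IDs (2.2.2.2 for R2 and 3.3.3.3 for R3) are assumptions for illustration only; substitute the actual values from your routers.
R2(config)#router ospf 1
R2(config-router)#area 1 virtual-link 3.3.3.3
R3(config)#router ospf 1
R3(config-router)#area 1 virtual-link 2.2.2.2
The virtual link and the new adjacency can then be verified with the show ip ospf virtual-links and show ip ospf neighbor commands.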
Now, let's see what the OSPF neighbor table looks like after the virtual link is configured between ABRs R2 and R3:
Finally, it's time to see whether R1 (Area 2) receives all the routes after the virtual link is configured between R2 and R3 (the ABRs of transit Area 1).
The virtual link successfully extends Area 0 up to Area 2 (via transit Area 1).
The routers now have complete routing tables.
Another scenario where we need to implement a virtual link:
In the above topology, Area 0 has been split into two parts separated by Area 1.
On the left side, R1 & R2 are in Area 0 and on the right side, R3 & R4 are again in Area 0, separated by Area 1 configured on directly connected interfaces of R2 & R3.
Area 0 must be a single contiguous backbone area, unlike in the above diagram. In order to render it a single backbone again, a virtual link needs to be configured between R2 and R3 across transit Area 1. Once this configuration is done (i.e., the backbone is reunified), the OSPF domain will not face any challenges and the link-state database will be consistent.
Today, the success (or failure) of almost all business ventures is tied to technology, and many companies are tech businesses. As a business grows and expands, so do its IT infrastructure needs.
When demand increases, businesses look for cost-effective ways to scale their IT infrastructure. However, scaling traditional, on-premises IT infrastructure is costly: you have to buy and maintain new hardware, update your software, and train your staff.
Today, with cloud computing, most of these challenges have disappeared. With the ability to consume IT resources on a pay-per-use basis and scale or reduce IT infrastructure depending on current business requirements, cloud computing is the perfect model for most enterprises. That said, without careful planning and testing, a cloud migration can prove to be an extremely complex and expensive process.
What is cloud migration?
Cloud migration is the process of moving every element of an organization's IT estate, including data and applications, from its on-premises infrastructure to the cloud. It is a complex and time-consuming process and has to be carefully planned to avoid issues at a later stage.
What is legacy infrastructure?
Legacy infrastructure is outdated computing hardware or software that is still in use by an organization. As technology advances, it is important for organizations to update their legacy systems, which often cannot interface with new systems and cannot scale quickly. To take advantage of new digital capabilities, it is critical that enterprises transition to a modern IT infrastructure.
Main benefits of migrating to the cloud
Today, many organizations are saddled with legacy infrastructure. For these organizations, the cloud is the fastest way to innovate and replace legacy infrastructure with cost effective and scalable computing infrastructure. From reducing costs to increasing business agility to leveraging emerging technologies such as AI, RPA and Blockchain, the cloud is a perfect platform for every organization.
By migrating from an on-premises IT environment to a cloud-based platform, organizations can save costs by moving from a capital expenditure (CapEx) model to an operational expenditure (OpEx) model. This allows the IT function to be more agile, and align closely with the business.
Types of deployment
From a classification point of view, there are broadly six types of cloud deployment approaches or models. These include:
- Public Cloud: A model that is used by many enterprises to host their applications or workloads on a public cloud provided by a third-party services provider such as Amazon, Google or Microsoft. Available on a pay-per-use basis, the public cloud model is extremely popular among many e-commerce and startup firms.
- Private Cloud: This is a model that is mostly used by companies who have restrictions due to compliance or security reasons. Many large conglomerates use the private cloud model to enable optimal sharing of IT resources among different departments or group companies.
- Hybrid Cloud: A blended model that allows enterprises to take advantage of both the public and private cloud model.
- Distributed Cloud: A recent concept, the distributed cloud model enables enterprises to run public cloud services in different locations, and monitor them using a single panel.
- Multi-Cloud: Due to the COVID-19 pandemic, many organizations are looking at reducing risks by diversifying their workloads across multiple clouds.
- Community Cloud: This is a type of cloud that is custom-built for a specific domain such as banking or insurance. As regulations are common for sectors, this allows firms in a particular sector to quickly take advantage of compliant-cloud models.
Challenges of migrating to the cloud
Most organizations do not have the required skillsets or the expertise for migrating to the cloud. Many organizations also fail to estimate the total cost of migration, the projected downtime, the time required for completing the migration and the required new skillsets for the cloud environment. A cloud environment entails a different security policy, and many enterprises fail to understand the complexity and the need for changing a security policy with respect to the cloud. There is hence a need for a separate and comprehensive cloud security policy. Organizations must also plan for a holistic identity and access management solution that ensures that only the right people have access to different applications or workloads in the cloud.
Types of cloud migrations
Organizations can choose from a variety of strategies for migrating to the cloud, including rehosting, rearchitecting, rebuilding, and replacing. If an organization wants to quickly take advantage of the benefits of the cloud without making any modifications to its on-premises applications, the rehosting option is the best fit. The rearchitecting option is recommended when an application has to be modified for the new cloud environment. Enterprises may also choose to completely replace or rebuild existing applications with newer applications.
Migration Tools and Platforms
All major cloud service providers offer a host of tools and solutions to help enterprises migrate to the cloud quickly and efficiently. There are cloud migration tools that enable enterprises to gather adequate information from their on-premises data centers with respect to parameters such as configuration and usage. This helps enterprises plan their migration strategies more effectively. Cloud service providers also offer cloud migration readiness tools, which help enterprises assess their readiness for migration to the cloud with respect to factors such as people, process, platform, operations, security, and the business. For a cloud migration strategy to be successful, it is critical that the complete data set is migrated without any hindrances. There are cloud migration tools that help enterprises replicate their on-premises environments into a staging area using an automated lift-and-shift solution, without causing any downtime. This is especially important in cases where the source and target databases are not the same, which is true for most cloud environments.
Measuring Pre-Migration Performance
Many companies make the mistake of not preparing a comprehensive baseline or inventory of their applications, which makes it difficult to prioritize the order in which applications are moved to the cloud and to prepare an effective migration plan. The performance of each application must also be measured before migration is undertaken. This helps enterprises compare pre- and post-migration performance and address any discovered performance gaps.
Cloud migration steps
1. Plan and prepare for migration
A cloud migration is a complex process. Hence, it is vital to prepare adequately before an actual migration process. While the preparation depends on the type of organization and the number of applications to be migrated, there are some basic steps that every enterprise must follow.
Firstly, enterprises must be clear on the business objective of moving to the cloud. While the cloud offers a good number of benefits, organizations must first understand the benefits of moving applications to the cloud before they take a single step.
As most cloud migrations are complex, a project manager to oversee the cloud migration process is highly recommended. This is critical as a cloud migration project may involve many stakeholders and departments. A project manager can monitor the cloud migration process in real-time, and quickly resolve issues as they arise.
When you move an application from an on-premise data center to the cloud, there are two ways you can migrate your application—a shallow cloud integration or a deep cloud integration.
In a shallow cloud integration, enterprises simply move the on-premises application to the cloud, and make no modifications in the cloud’s servers to run the application. This model is also popularly known as the lift and shift model, as it entails just lifting the application and shifting it to the cloud. In the case of a deep cloud integration, enterprises modify their applications before the migration to enable them to take advantage of the capabilities of the cloud. For example, an application may use a cloud’s unique features such as auto-scaling or dynamic load balancing.
2. Choose your cloud environment
Before you start your cloud migration, you have to decide the type of cloud model to be adopted. You must also choose between a single or multi-cloud model.
A single cloud model simply means that an organization has chosen a single cloud provider to host all its applications. Depending on their needs, an organization may choose between a private or a public cloud model. Many organizations use the single cloud model to host their application workloads such as ERP, e-mail or CRM.
In a multi-cloud environment, an enterprise may use a choice of different cloud service providers for different workloads. This is usually done to avoid vendor-lock in and achieve the best results. All cloud environments are not created using the same parameters. Depending on each department’s unique needs, the cloud computing model or service provider can be chosen. This also helps in creating a competitive environment, and reduces dependence on a single service provider.
This can be further classified into the following (depending on the type of cloud service chosen):
- IaaS (Infrastructure as a Service)
- PaaS (Platform as a Service)
- SaaS (Software as a Service)
3. Migrate applications and data & review
If a cloud migration is detailed and planned carefully, the actual migration can be a smooth and seamless process. How long it takes depends on the size of the databases and the number of applications. If there is very little data to migrate, enterprises can simply copy their data over an ordinary Internet connection. However, this approach will not work for larger workloads. To avoid these issues, enterprises can compress their data before sending it. Alternatively, they can ship their data in physical form to the cloud service provider to save costs related to bandwidth.
It is also extremely important to ensure adequate security during the actual migration process. Any location or device where data is temporarily stored must be secure. Enterprises can make use of cloud migration tools that are given by cloud service providers to ensure a seamless migration.
Once an enterprise has successfully managed to migrate data and applications to the cloud, there are a few more things that are important. The most important is resource optimization. Make sure that your organization utilizes the full capabilities and potential of the cloud for using resources cost effectively.
Moving your data and applications to the cloud can prove to be a great move and can give organizations a much-needed differentiator through the cloud's ability to cut costs, scale quickly, and enable new business models.
The time needed for the cloud migration process depends entirely on the size of the organization and the diversity and complexity of its IT infrastructure. In this article, we have covered some of the key steps that an organization should have in place when it decides to migrate to the cloud. To learn more about how to use cloud migration to amplify and simplify your infrastructure, get familiar with these solutions and services. If you have any questions about how you can effectively use the power of the cloud to transform your business, or how to improve your cloud performance and bring down costs, contact us today for help with your performance and security needs.
Cloud Computing – Business Fundamentals
Service providers build DCs to house the people and equipment needed to service their clients. As seen in Figure 1, the typical DC contains servers, routers, switches, firewalls, SANs, and other equipment. It is not unusual for one DC to be tied (networked) to other DCs to help with the workload. This distribution of hardware allows workloads to be moved from DC to DC in an effort to provide the best possible service for the client while controlling the cost of the hardware for the service provider.
Figure 1: Data Center Resources Include Servers, SANs, Firewalls, Router, and Switches
Distributed computing is pervasive in cloud computing and describes a method where multiple systems communicate and work together to achieve the desired result (as with grid computing). There is often overlap when discussing distributed computing and grid computing; however, the difference is that distributed computing is commonly used when describing disjointed networked systems:
- Distributed computing – networked computers sharing dissimilar workloads
- Grid computing – networked computers sharing the same workload
In order to support a distributed computing environment, a distributed file system (DFS) must exist. One of the best examples of a distributed file system is DNS – the domain name system. DNS has an inverted hierarchical structure, a tree structure, starting from the root (.) and terminating with name resolution. Structures of this nature are examples of databases; in this case, a distributed database, as DNS is spread across many networks and servers. In keeping with the cloud computing style of rendering resources as services, DNS uses a database to resolve names; therefore, DNS would use database as a service (DBaaS) to contain its repository of domain names. This will be covered in greater detail in future lessons, but to give a preview, some examples of DBaaS include Amazon RDS, MySQL, HBase, Hadoop, and Cassandra.
Common Applications of Virtualization – Hypervisors
There are many examples of virtualized computer systems. For instance, there are virtual PCs, softphones (virtual phones on a computer), virtual network switches and virtual servers. The software used to create these virtual machines (VM) is called a hypervisor. Some of the more common hypervisors include:
- Oracle Virtual Box (Apple, Linux, Microsoft)
- Microsoft Virtual PC 2007 (Microsoft)
- Parallels (Apple)
- Microsoft Hyper–V (Microsoft)
- VMware (Apple, Linux, Microsoft)
- Citrix (Apple, Linux, Microsoft)
- Linux VServer (Linux)
The hypervisor, also known as the VM Monitor (VMM), is software used to:
- Manage the concurrent applications and guest OSs on a host
- Support the running of multiple OSs
- Manage the host system resources
- Isolate or partition VMs
There are two types of hypervisors, categorized according to their placement within the hardware/software system architecture.
Type 1 (native) – the hypervisor runs directly on the hardware, as in Figure 2.
Figure 2: Type 1 Hypervisor
Type 2 (hosted) – the hypervisor runs on top of an existing OS, as in Figure 3.
Figure 3: Type 2 Hypervisor
The hypervisor is a key component in creating virtualized environments. All virtualization software implements some form of hypervisor software, either open-source or proprietary.
A Type 1 hypervisor runs natively on the host computer. In this example, VMWare ESXi is hosting a 2008R2 Windows server. Notice in Figure 4 the VMWare management screen and the Windows 2008R2 desktop.
Figure 4: VMWare ESXi Hosting Windows 2008R2 Server
Figure 5 shows a Type 2 hypervisor on an Apple OSX host operating system running the hypervisor Virtual Box. The guest operating systems being displayed are Windows 7, Ubuntu Linux, and Chrome.
Figure 5: Apple OSX with Virtual Box
Virtualization provides the bridge between how information technology services are delivered in the current data center environment to how those same services and applications are delivered in a cloud environment. Virtualization involves the sharing of physical computer components and includes a logical abstraction of the physical assets of a computer system. When discussing cloud computing, virtualization, grid computing, and utility computing are essential components. The essential characteristics of cloud computing can provide organizations with a cost-effective solution to deliver IT services.
It is safe to conclude that cloud computing is a radical departure from traditional service-oriented computing, where fixed resources are contracted. Cloud computing converts all points of service, business, software, and hardware into logical abstractions that can be reconfigured dynamically.
A Brief Introduction to Barcodes
Barcodes are used everywhere: trains, planes, passports, post offices... you name it. And just as numerous as their applications are the systems themselves. Everybody's seen a UPC barcode like this one:
But what about one like this on a package from UPS?
This is a MaxiCode matrix, and though it looks quite different from the UPC barcode, it turns out that these systems use many common techniques for storing and reading data. Both consist of black or white "modules" which serve different purposes depending on their location. Some modules are used to help with orientation when scanning the barcode, some act as data storage, and some provide error correction in case the modules are obscured. (I won't address how the error correction algorithms work, but those who are interested can read more here.)
The diagram above shows the orientation patterns used in UPC barcodes to designate the start, middle, and end of the barcode, as well as how the data-storage modules are encoded. The last digit of a UPC barcode is not used to store data, serving instead as a checksum to verify that no errors were made when printing or reading the barcode.
Though they look quite different, MaxiCode matrices employ the same kinds of orientation, data, and error-correction modules.
I want to stop here for a moment and just appreciate the intricacy of this system. The tinkerer in me can't help but wonder, How could someone possibly figure all this out? For better or for worse, there is no need to figure it out since MaxiCode is public domain and Wikipedia has all the answers. But wouldn't that be an interesting puzzle?
If you answered no, here's a QR code for your troubles:
For those of you still reading, I'd like to introduce another barcode system, and the guest of honor in today's adventure: Snapcode.
Snapcode is a proprietary 2D barcode system that can trigger a variety of actions when scanned in the Snapchat app. Snapcodes can add a friend, unlock image filters, follow a link, and more. Unlike MaxiCode, however, there is no public documentation about how the Snapcode system works! Thus the scene is set. Driven merely by curiosity, I set out to answer the following questions:
1. What data do Snapcodes encode?
2. How do Snapcodes encode data?
3. What actions can be triggered when these codes are scanned?
Chapter 1: Our Adventure Begins
The Tale of the Treasure
The first question I had to answer was, Is it even possible? Figuring out how Snapcodes encode data is impossible without first knowing what
data they encode. In the hopes of uncovering a reliable correlation between the data underlying a Snapcode and the Snapcode itself, I generated the following URL Snapcodes that would navigate to the same address when scanned. If the Snapcodes store the URL directly, then they should look very similar.
To aid in the process of ingesting these images, I wrote a simple Python script that I will reference periodically throughout this tale. The "scan" method checks each position that could contain a dot and stores it as a 1 (present) or 0
(empty) in a 2D array. This allowed me to efficiently ingest, process, and visualize the data, like in the image below. This image was generated by putting a black dot where both Snapcodes had a dot, a white dot if neither Snapcode had a dot, and red if one had a dot and the other did not:
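A rough sketch of that comparison, assuming each scanned Snapcode is a 2D list of 1s and 0s (the names here are illustrative, not taken from the actual script):
def diff_matrix(a, b):
    # 'black' = dot in both, 'white' = dot in neither, 'red' = dot in only one
    colors = []
    for row_a, row_b in zip(a, b):
        row = []
        for dot_a, dot_b in zip(row_a, row_b):
            if dot_a and dot_b:
                row.append('black')
            elif not dot_a and not dot_b:
                row.append('white')
            else:
                row.append('red')
        colors.append(row)
    return colors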
This first trial showed quite a few red dots, suggesting that there may not be any connection between the Snapcode and the URL it represents. Hoping for a clearer correlation, I tried another type of Snapcode which adds a user as a friend when scanned. Repeating the experiment with the add-friend Snapcodes of two users with similar names ("aaaac123456789" and "aaaad123456789") showed a more promising result.
Generating the same type of secondary Snapcode gave the following matrix:
The top and bottom show quite a bit of red, but take a look at the regions just to the left and right of the center. There is almost no red! From this, I drew two conclusions. First, the add-friend Snapcodes store, potentially among other data, some form of the username. Second, the dots to the left and right of the center are the ones used to encode this data, since this is where the highest correlation occurs.
There is still a long way to go, but we have taken an important first step. Fundamentally, we know that there is in fact something to find within these dots, and on top of that, the fact that we know what is being stored may help us down the line.
What's Below Deck?
In addition to the Snapcodes, another area to explore was of course the Snapchat app. Just from playing around with the app, I knew that it had the ability to generate and read these codes, so perhaps a closer look would uncover something useful to my pursuit. Using the Android Debug Bridge, I pulled the Android package file (APK) from a phone with Snapchat installed. An APK is a ZIP file that contains many different types of information, but of greatest interest to me was the compiled Java code. From the many tools available to decompile the code and reverse engineer the app, I chose to use JADX.
After some time poking around the decompiled Java code, I found that the app referenced several methods from a Java Native Interface (JNI) library used to produce the Snapcode images. This library was packaged along with the compiled Java files and provided the following functions that can be called from Java code:
String nativeGenerateWithVersion(long j, int i, byte[] bArr);
String nativeGenerateDotsOnlyWithVersion(long j, int i, byte[] bArr);
These methods took (among other arguments) a byte array containing the underlying data, and returned an SVG image of the Snapcode. If I could call these methods with data that I controlled, perhaps I could determine what exactly each of the dots means.
Chapter 2: The Treasure Map
As any treasure-hunter knows, it's important to be lazy... er, resourceful. Snapchat was kind enough to provide all the code I needed to construct a map: the Snapcode library, the logic to load it, and the method signatures to create the Snapcode images. A little paring down and I had my very own Android app that could create Snapcodes with any data I wanted. The question was, What data?
Some helpful error messages told me that each Snapcode stored 16 bytes of data, presumably mapping to 16 groupings of eight dots. To light these byte-groups up one at a time, I passed the function an array with one byte set to -1 (which Java represents as b11111111 using two's complement) and the rest set to 0. The result was a sequence of Snapcodes with one of these groupings lit up at a time.
Notice that some groups of dots are always present, some light up only once throughout the set, and some turn off and on sporadically. It seems plausible that these regions are respectively acting as orientation patterns, data storage, and error correction, just as we saw in the UPC and MaxiCode standards. To more clearly show the byte groupings, the orientation patterns and error correction dots have been removed:
A different set of byte arrays can be used to determine the order of the dots within each of these groupings: setting one bit in each byte to 1 and the rest to 0. This can be achieved with a series of byte arrays with each byte in the array being set to the same power of 2. For example, the array is filled with all 1s (b00000001) to identify the lowest bit in each byte, all 2s (b00000010) for the second bit, all 4s (b00000100) for the third bit, and so on.
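Sketched in Python for brevity (the arrays were actually built in Java inside the test app), the two probe sets look roughly like this:
# Probe set 1: light up one byte-group at a time (0xFF is -1 as a signed Java byte)
group_probes = [bytes(0xFF if i == k else 0x00 for i in range(16)) for k in range(16)]
# Probe set 2: light up the same bit position in every byte (1, 2, 4, ..., 128)
bit_probes = [bytes([1 << b] * 16) for b in range(8)]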
Pieced together correctly, these two sets of data provide a perfect map
between a Snapcode and the bit-string of data it represented. From the first set of Snapcodes, we identified the grouping of bits that made up each byte as well as the order of the bytes. From the second, we learned the ordering of the bits within each byte. The dot corresponding to bit X of byte Y, then, would be the dot that is present in both Snapcode Y of the first set (groupings) and the Snapcode X of the second set (orderings).
For my script, this map took the form of a list of coordinates. The bit-string was constructed by checking the corresponding positions in the Snapcode grid one by one, adding a value of 1 to the bit-string if there was a dot in that position and a 0 if not.
DATA_ORDER = [(16,5), (17,6), (17,5), (16,6), (18,5), (18,6), (0,7), (1,8), (1,7), (0,8), (2,7), (2,8), (16,3), (17,4), (17,3), (16,4), (18,3),(18,4),(0,5),(1,6), (0,6), (1,5), (2,6), (2,5), (4,16), (5,17), (5,16), (4,17), (4,18), (5,18), (4,0), (5,1), (4,1), (5,0), (4,2), (5,2), (16,16), (17,16), (16,17), (17,17), (16,18), (18,16), (16,0), (17,1), (16,1), (17,2), (16,2), (18,2), (14,16), (15,17), (14,17), (15,18), (14,18), (15,16), (14,0), (15,1), (14,1), (15,2), (14,2), (15,0), (0,3), (1,4), (1,3), (0,4), (2,3), (2,4), (12,16), (13,17), (12,17), (13,18), (12,18), (13,16), (12,0), (13,1), (12,1), (13,2), (12,2), (13,0), (8,16), (9,17), (8,17), (9,18), (8,18), (9,16), (8,0), (9,1), (8,1), (9,2), (8,2), (9,0), (3,13), (4,14), (3,14), (3,15), (4,15), (5,15), (3,3), (4,3), (3,4), (4,4), (3,5), (5,3), (15,13), (14,14), (15,14), (13,15), (14,15), (15,15), (13,3), (14,4), (15,3), (14,3), (15,4), (15,5), (10,16), (11,17), (10,17), (11,18), (10,18), (11,16), (10,0), (11,1), (10,1), (11,2), (10,2), (11,0), (0,2), (1,2)]
Reordering the dot matrix (a 2D array of 1s and 0s) into a bit-string using this data structure looked something like this:
def dots_to_bits(dots):
    return [dots[row][col] for (row, col) in DATA_ORDER]
It wasn't exactly pretty, but the pieces were coming together. At this point, I knew the add-friend Snapcodes somehow stored the username, and I knew how to reorder the dots into a series of bits. The final transformation, how those bits were being decoded into characters, was all that remained.
Chapter 3: Lost at Sea
The methodology from here was a bit fuzzy. I created an account with the desired username, fed the account's Snapcode into my script, and out popped a string of 1s and 0s for me to... do something with. As in the previous phase, the choice of input was the crux of the matter. I began with usernames that seemed interesting on their own, like ones
consisting of a single character repeated many times. The first two usernames,
"zzzzzzzzzzzzz4m", had the respective bit-string representations:
Staring at 1s and 0s, hoping to find something, was a particular kind of fun. You can't help but see patterns in the data, but it can be difficult to know whether they are just in your imagination or if you are really on to something. If you'd like, take a few minutes and see what you can find before reading on. What I took away from this first experiment was the following:
The only patterns that I could identify appeared in the last 88 bits of the string. Both strings had a sequence of 24 bits (bits 41 to 64, in bold) that repeated three times followed by a sequence of 16 bits (underlined). 14 of these last 16 bits were the same between the two bit-strings. I also noticed that a similar pattern could be found in the usernames:
Finding patterns in the bit-string was exciting on its own, but finding matching patterns in the two representations of the data suggested the presence of a clear path forward in converting the bits to characters. However, try as I might to find a connection, these patterns led nowhere. Every one of my (sometimes hair-brained) theories on how these bits may have been converted to letters proved fruitless.
Where Are We?
Having hit a dead end, I changed my tack and tried to learn more about what constituted a valid Snapchat username. According to Snapchat's documentation, usernames must consist of 3-15 characters chosen from an alphabet of 39: lowercase letters, digits, and the three symbols ".", "-", and "_". Furthermore, they must begin with a letter, end with a letter or number, and contain at most one non-alphanumeric character.
A little math shows that representing a single character from this 39-letter alphabet would
require six bits, since 2^5 (32) < 39 < 2^6 (64). 15 characters, then, would require 90 bits. However, as far as I could tell, these 15 characters
were being encoded in the 88 bits where I noticed the patterns. No other similarities showed up in the two bit-strings. How else could they be encoded, if not separately using six bits per character?
As some background research had turned up, one of the encoding schemes used in the QR code standard solves a similar problem. Using an alphabet of 45 characters, QR's alphanumeric encoding scheme treats pairs of characters as two-digit base-45 numbers and encodes the resulting value into binary. The result is two characters per 11 bits, rather than one per six bits! Hypothesizing that the creators of the Snapcode system may have done something similar, I tried each of the possible permutations for decoding sets of X bits into N characters using an alphabet of size 39, but none of them created strings that showed any pattern like the underlying username.
This was just one of many rabbit holes I went down. I learned a great deal about other barcode encoding schemes and came up with many ways the engineers may have optimized the usage of those 88 bits, but with regards to decoding the Snapcode I was dead in the water.
Chapter 4: 'X' Marks the Spot
With a strategy as fuzzy as "staring at bits," it should be no surprise that the final breakthrough came when I found a way to better present the data on which I was relying. Snapchat provides a mechanism for generating new Snapcodes and deactivating old ones, in case an old Snapcode is leaked and the user is receiving unwanted friend requests. Using this tool, I generated five Snapcodes for each of the accounts and combined these into a single string using the following rules: each character of this string was assigned a value of "1" if each of the five Snapcodes had a dot in the corresponding position, "0" if none of them had a dot in that position, or "x" if some had a dot and some didn't.
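In the script, that consensus step can be sketched roughly as follows (assuming each scan has already been flattened into a string of '1's and '0's):
def consensus(bitstrings):
    # '1' if every scan has a dot at this position, '0' if none do, 'x' otherwise
    out = []
    for bits in zip(*bitstrings):
        if all(b == '1' for b in bits):
            out.append('1')
        elif all(b == '0' for b in bits):
            out.append('0')
        else:
            out.append('x')
    return ''.join(out)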
Reducing the noise in the data with this new representation made the answer I had been looking for as clear as day. The modified bit-strings looked like this:
These three extra bits (underlined) were separated from the rest of the data I had been looking at, bringing the total to 91. This meant the process of encoding a username could be done one character at a time. I felt quite silly having spent so much time trying to fit the username into fewer bits rather than looking for more bits that may be used, but I imagine the path of a treasure hunt is seldom a straight one.
Digging for Gold
Because the values of these 91 bits were identical in each of the five Snapcodes, it seemed safe to assume that they somehow contained the username. I continued from here using the Snapcodes of two more users: "abcdefghmnopggg" and "bcdefghnopqhhh". The first seven characters are sequential and offset by one between the two names, a pattern I was hoping would highlight which bits were being incremented for each character. The respective bit-strings were:
Once again, some interesting patterns showed up. Both strings could be split up into segments whose binary values were either the same between the two usernames or off by exactly one:
010 ... 01011 001 1 001100 0 110 00110 01111 001 1 010000 ...
011 ... 01100 001 0 001101 0 111 00111 10000 001 1 010001 ...
Presumably, the segments that were identical between the two strings were the higher bits of the encoded character, whose values we may not expect to change, and the off-by-one segments were the lower bits, whose values would be incremented when representing sequential characters.
I also noticed that the lengths of these segments followed the sequence 5-3-1-6-1-3-5. A strange pattern, it seemed at first, but it eventually dawned on me that these segments could be paired up to create chunks of six bits, each of which could represent a single character. I began enumerating the possible combinations of these segments, eventually coming across the following set of six-bit chunks:
[001|010] [0|01011] [00110|1] [001|110] [0|01111] ...
[001|011] [0|01100] [00111|0] [001|111] [0|10000] ...
Converted to decimal, these values show the same characteristics seen in the pair of usernames:
10, 11, 12, 13, 14, 15, 16 ...
11, 12, 13, 14, 15, 16, 17 ...
The second unknown, how these values were being converted into characters, fell quite nicely into place from here. Assuming 10 mapped to 'a', 11 to 'b', and so on, it felt safe to assume that 0 through 9 mapped to '0' through '9', and 36 through 38 represented the three symbols. Verifying these assumptions and identifying the exact value assigned to each character was achieved by testing them on a range of other usernames.
The final detail fell into place when trying to decode usernames that did not use all 15 available characters. The end of a username was simply marked by any value
greater than 38, after which the remaining bits were ignored by the decoding process. QR codes use a similar mechanism, designed to avoid large empty spaces in the barcode that make it unsightly and harder to scan.
In Python, the process of reordering the bit-string into six-bit chunks took the form of lists of integers whose value indicated the position of a bit in the bit-string. For example, the binary value of the first character was determined by taking bits 46-48 of the bit-string and appending bits 33-35:
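Only the first chunk's positions (bits 46-48 followed by 33-35) are given above, so the sketch below fills in the rest with placeholders:
# Positions are 1-based, matching the numbering used in the text
CHUNK_ORDER = [
    [46, 47, 48, 33, 34, 35],  # character 1
    # ... six positions for each remaining character
]
def bits_to_values(bits):
    # bits is the 91-bit string of '1's and '0's; subtract 1 for 0-based indexing
    return [int(''.join(bits[i - 1] for i in chunk), 2) for chunk in CHUNK_ORDER]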
A dictionary converted the decimal values of these chunks to characters:
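Based on the mapping worked out above, the lookup looks roughly like this (the exact ordering of the three symbols at values 36-38 is an assumption):
# 0-9 -> '0'-'9', 10-35 -> 'a'-'z', 36-38 -> the three allowed symbols
VALUE_TO_CHAR = {i: c for i, c in enumerate('0123456789abcdefghijklmnopqrstuvwxyz._-')}
def values_to_username(values):
    chars = []
    for v in values:
        if v > 38:  # any value above 38 marks the end of the username
            break
        chars.append(VALUE_TO_CHAR[v])
    return ''.join(chars)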
With that, I was at last able to trace the username data through each stage of the decoding process: dots, bits, six-bit chunks, and finally characters.
Chapter 5: A Smooth Sail Home
Tying up Loose Ends
Revisiting my third research question, What actions can be triggered when these codes are scanned?, was simple compared to what I had just been through. Snapchat publicly documented several other types of Snapcodes that were easy to interact with, like URL Snapcodes and content Snapcodes to unlock in-app content. Others I had to read about, like ones that are used to pair Snapchat's "Spectacles" devices to your phone.
I found the above Snapcode on a mysterious page of Snapchat's website, which contained only the title "Snapchat Update." Scanning it in the app did nothing on my phone, but presumably it would update the Snapchat app if it was out of date. I spent a good deal of time trying to reverse engineer the app to determine how this Snapcode is handled, and whether there were any other undocumented functions a Snapcode may invoke, but I was unable to find anything.
One final loose end that a curious reader may have identified was the mechanism for deactivating old Snapcodes mentioned in the previous chapter. Having several Snapcodes for each of the test users, I compared the values of the non-username bits both across accounts (e.g. the first Snapcode for each account) and within accounts (i.e. the sequence of Snapcodes for a single account). No discernible patterns showed up, which led me to hypothesize that the Snapcodes were differentiated by some sort of random key in the non-username bits. In this scenario, each account would be associated with one "active" key at a time, and the Snapchat app would only perform the add-friend function if the Snapcode with that user's active key was scanned.
A Last Golden Nugget
I decided to see what else I could find in those types of Snapcode I could easily create, but neither one showed any patterns between the underlying data and the resulting Snapcode. As seen earlier, URL Snapcodes change drastically even when creating two that redirect to the same URL, and the content Snapcodes show no correlation between the barcode and the content pack metadata like the author, name, etc.
Exploring Snapchat's website eventually led me to the following URL:
On this page, there is a Snapcode labeled "Gold Fish Lens" that presumably once unlocked a Snapchat lens, though this no longer works when scanning it in the app. However, the HTTP parameter "uuid=c4bf0e0ec8384a06b22f67edcc02d1c3" jumped out as a possible piece of data that was being stored in this type of Snapcode. Sure enough, converting the dots to a bit-string (just as we did with the username) and then converting this bit-string to a hexadecimal string resulted in this exact UUID!
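That conversion is essentially a one-liner; assuming the 128 data bits have been flattened into a string of '1's and '0's in the same way as before, something like:
def bits_to_hex(bits):
    # Interpret the bit-string as one binary number and format it as 32 hex digits
    return format(int(bits, 2), '032x')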
I found a similar piece of data when creating a URL Snapcode. The initial response includes a "scannableId" in a similar format to the UUID. This value is then used in a subsequent request to pull up the image of the resulting Snapcode, leading me to believe it serves the same general purpose.
Based on these findings, I hypothesized the following work flow: Whenever a new lens filter or sticker pack is created or a new URL Snapcode is requested, a UUID is generated and stored in a database along with any associated information like the content pack name or URL. When a Snapcode of one of these types is scanned in the app, it prompts a web request including this UUID to query the database and determine what action to perform.
There was nothing more I could (legally) try to definitively confirm this hypothesis, so I guess I'll just have to wait for Snapchat to tell me if I got it right.
Reflecting on this exercise, I came up with a few personal takeaways, as well as some thoughts for organizations who have their own proprietary barcode system or perhaps are considering implementing one.
The first implication for barcode systems is that if they are used to store recognizable data, they can feasibly be cracked. Had I not known what data was stored in the add-friend Snapcodes, this entire project would have been dead in the water. It may be impossible to keep the process of barcode-to-bit-string transformation entirely secret if you need that functionality in client-accessible software (like the Snapchat mobile app), but this alone will not be enough to crack the barcode system if you don't know the underlying data.
This makes Snapchat's UUID system a great way to avoid leaking potentially sensitive information and significantly decrease the risk of the barcodes being reverse engineered in the first place. If the bits are translated directly to a hexadecimal UUID then perhaps there's a chance of guessing how to decode the UUID as I did, but without access to the database that value is meaningless.
Inversely, storing any sensitive information in a barcode is a very bad idea, for obvious reasons. Even Snapchat's inclusion of username data is potentially dangerous. Recall the solution they came up with in case an
old Snapcode is leaked and the user is receiving unwanted friend requests; a malicious user can extract the username and find the updated Snapcode at this URL: https://www.snapchat.com/add/USERNAME. Snapchat does have other controls to prevent unwanted friend requests, but the ability to find a user's active Snapcode effectively nullifies the current mitigation. (I did disclose this to Snapchat, and they have acknowledged and accepted the issue.)
As for my personal takeaways, the first is that sometimes it pays to have a wide range of skills, even if your experience is minimal. The skills involved in this challenge included reverse engineering, scripting, Android internals and development, web development, and logic. I am far from an expert in any of these domains, but in a challenge where the solution is not just a matter of one's depth of technical knowledge, any angle of attack you can find is important. This could be as simple as knowing that a tool exists to do something you need.
Finally, I'd like to think I learned a lesson about making assumptions. When the information surrounding a problem is incomplete, some assumptions are required to make any progress at all, but it's important to revisit these every so often to make sure they aren't sending you down a rabbit hole. A breadth-first approach, exploring several possible solutions together as opposed to one at a time in depth, may have lessened the pain of realizing that days of work were useless.
I am sure you learned more than you ever wanted to know about Snapcodes, but I thank you for joining me for this adventure!
After an injury, a splint is used as part of the first aid to stabilize the affected area. Immediately after an accident, the most crucial goal is to immobilize the affected area as much as possible. Since it's impossible to understand what's happening under the affected area's skin, it's vital to avoid moving until medical professionals assess the situation and give direction.
While waiting for health providers to assess the situation, it might help to support the injured area with a soft splint to ensure the victim is comfortable, yet safe from excess moving. Besides providing support and ensuring comfort, soft splints also reduce pain in the injured area.
When Does One Require A Soft Splint?
It is usually easy to tell when a victim requires a soft splint. For instance, when someone is injured and is in a lot of pain, they are more likely to require a splint to reduce the pain. Also, if the victim has broken bones, strains, sprains, dislocated bones, or tendon ruptures, they might require a soft splint for support and comfort. Generally, soft splints are required by anyone who has experienced injuries to their limbs.
What's A Soft Splint?
A soft splint is a device that supports and protects against a broken bone or injury. This device ensures that the injured part is secured and stabilized until immediate medical attention can be completed. Unlike rigid splints, soft fabrics such as towels, folded blankets, folded triangular bandages, or pillows are used as soft splints.
How To Use A Soft Splint
Like any other injury device, it's essential to know how to use a soft splint for optimum performance. For more information on how to use a soft splint, here's a step-by-step guide:
If an injury happens to one's wrist or ankle, the first step is to assess their circulation. Ensure the victim understands what you're doing to help them calm down. Then, hold their fingers or toes and squeeze them gently. While doing this, ask the victim to let you know if they experience any sensation. This action ensures there's still circulation before applying a soft splint to the injured area. Also, if the doctors ask whether the splint cut off the circulation or whether there was circulation before applying the splint, you and the victim can give a correct answer.
If the injury is on the ankle, don't remove the footwear, especially if the victim is wearing high-top boots, shoes, or sneakers. This is because such footwear acts as a comprehensive bandage. Also, the soft splint acts as additional support with their footwear on. However, if the footwear isn't supportive, you can remove it.
After determining if the victim has circulation in the affected area, the next step is to get the soft splint ready. Mold the splint to fit the affected area. For instance, if the victim sustained injuries on their ankle, mold the splint to fit their ankle and leg. And if they are injured on their wrist, mold it to fit the lower arm. Afterward, slide the splint under the affected area. When done correctly, the limb should rest comfortably on the splint.
After the splint is comfortably under the affected area, wrap both the splint and the limb with bandages. Ensure the wrap is snug (though not so tight as to cause discomfort) to secure the soft splint on the affected limb. The splint should act as a crutch for additional support.
After securing the soft splint with bandages, tie the bandages into a knot. Ensure the knot is pretty tight to prevent it from unraveling before the victim seeks the professional help they require.
When performing this activity, keep on talking to the victim. Keep checking if they are comfortable with every step of the procedure. When applying the splint and wrapping it with bandages, ask the victim if they can still feel the circulation. Remember, talking to the victim will help them remain calm and make it easier for you to perform the necessary first aid.
Following an injury, the victim needs to get first aid as soon as possible. While there are various first aid procedures for different injuries, some first aid devices are suitable for physical injuries such as sprained ankles or broken bones. Among such devices are soft splints.
A soft splint is essential in ensuring that the affected area is comfortable and supported and that the victim is relieved of unbearable pain while waiting to get the medical help they need. Nevertheless, you should understand how to use one to avoid causing complications. Keep this guide handy at all times in case an emergency situation arises.
Need a version to run on a USB flash drive? Try Pen Drive Linux, or search the websites of larger distributions for do-it-yourself instructions. Want to coax every ounce of speed from your hardware? Try Gentoo, in which every program is compiled for the hardware it's on. Need to maintain different versions of the same software? Then try rPath Linux or any of the other distributions based on the Conary packaging system.
Chances are, a web search on your requirements plus “linux” will reveal at least one distribution with a ready-made or easily customizable solution.
Conclusion: Auxiliary Apps
When you’re ready, you may be able to download a Live CD for your preliminary tests. A Live CD is a version of a distribution that boots from a CD, allowing you to test the software without making any permanent changes to your system. Just remember that even the latest DVD drive is slow compared to a hard drive, so you can’t judge performance from a Live CD.
These days, the differences between distributions are narrowing. Most distributions that you test will have much the same choice of software: KDE and GNOME for desktops, Mozilla Firefox for web browsing, and OpenOffice.org for office productivity. By definition, a distribution is a collection of software made by other projects and, no matter how specialized, its unique or selective features are only a small percentage of the total package.
However, that small percentage can often greatly affect the user experience. A distribution designed for older hardware, for example, might use the less familiar Ice Window Manager for a desktop, or AbiWord for word processing.
More importantly, just as with any software, the policies and procedures and structures behind a distribution can be as important to your adoption as the contents. Do your due diligence, and you’ll have a better chance of finding a distribution that fits your needs.
This article first appeared on Datamation in March 2007.
Course Summary
Every day, millions of email messages are exchanged among people within and between organizations. Email has a ubiquitous presence in the lives of many, and it's likely that email technologies will continue to evolve with the changing needs of workplaces. After all, email communication has not been replaced, or its growth slowed, as many predicted with the rise of social media and the widespread adoption of mobile technologies. Many organizations have implemented mail management systems that combine the back-end power of Microsoft Exchange Server and the front-end intuitive user interface of Microsoft Outlook.
In this course, students will customize command sets, configure mail accounts, set global options, perform advanced searches, apply filters to intercept mail and control spam, create rules to automate mail management tasks, work with calendars and contacts, manage tasks, preserve data with archives and data files, as well as share and delegate access to their Outlook items. In short, students will work with a wide range of features and options and, in so doing, understand why Outlook is a leading personal management system.
Note: Most Office 365 users perform the majority of their daily tasks using the desktop version of the Office software, so that is the focus of this training. The course material will also include helpful notes throughout the material to alert students to cases where the online version of the application may function differently from the primary, desktop version.
This course builds upon the foundational knowledge presented in the Microsoft Outlook for Office 365™ (Desktop or Online): Part 1 course and will help students customize a communication system well-suited to their work style. This course covers the Microsoft Office Specialist Program exam objectives to help students prepare for the Outlook Associate (Office 365 and Office 2019): Exam MO-400 certification exam.
Who Should Attend This Course
This course is intended for those with a basic understanding of Microsoft Outlook and who need to know how to use its advanced features to manage their email communications, calendar events, contact information, search functions, and other communication tasks.
To ensure success, students should have end-user skills with any current version of Windows, including being able to start and close applications, navigate basic file structures, manage files and folders, and access websites using a web browser. Additionally, it will benefit students to have basic Outlook skills.
In this course, students will use Outlook’s advanced features to customize and manage their email communications, including: using advanced features to organize emails; managing calendar settings and options; managing contact information; scheduling tasks; and managing Outlook archives and data file settings. After completing this course, students will be able to:
- Insert objects in messages, and modify properties and global options.
- Organize, search, and manage messages.
- Protect your mailbox and manage its size.
- Use rules and Quick Steps to automate message management.
- Work with advanced calendar settings.
- Import and forward contacts.
- Assign delegate permissions and share Outlook items with others.
- Archive and back up Outlook items using data files.
Outline: Microsoft Outlook for Office 365 (Desktop or Online): Part 2 (91140)
Module 1: Modifying Message Properties and Customizing Outlook
- Insert Hyperlinks and Symbols
- Modify Message Properties
- Add Email Accounts to Outlook
- Customize Outlook Options
Module 2: Organizing, Searching, and Managing Messages
- Group and Sort Messages
- Filter and Manage Messages
- Search Outlook Items
Module 3: Managing Your Mailbox
- Manage Junk Email Options
- Manage Your Mailbox Size
Module 4: Automating Message Management
- Use Automatic Replies
- Use Rules to Organize Messages
- Create and Use Quick Steps
Module 5: Working with Calendar Settings
- Set Advanced Calendar Options
- Create and Manage Additional Calendars
- Manage Meeting Responses
Module 6: Managing Contacts
- Import and Export Contacts
- Use Electronic Business Cards]
- Forward Contacts
Module 7: Sharing Outlook Items
- Assign and Manage Tasks
- Share Your Calendar
- Share Your Contacts
Module 8: Managing Outlook Data Files
- Use Archiving to Manage Mailbox Size
- Work with Outlook Data Files
Mapping Course Content to Outlook Associate (Office 365 and Office 2019): Exam MO-400
Configuring Email Message Security Settings
Inserting Objects into Messages
Microsoft Outlook Common Keyboard Shortcuts | <urn:uuid:83ff2fee-84a4-4032-b4c4-7d2ddb2219ec> | CC-MAIN-2022-40 | https://www.fastlanetraining.ca/fr/course/microsoft-91140 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00377.warc.gz | en | 0.849502 | 962 | 2.65625 | 3 |
AI systems for problem solving, including Cognitive Computing systems, require a base collection of knowledge or corpus. The corpus is a digital representation of all that is known about a particular domain, such as all the works of Shakespeare, or all of the defining characteristics of disorders that are codified in the Diagnostic and Statistical Manual of the American Psychiatric Association. This knowledge must be represented in a consistent form to allow the system to use it to draw inferences and make decisions, and to be able to update the corpus when appropriate.
The data required for corpora in some domains, such as medical diagnostics, insurance claim codes, and regulatory filings, are already available in text form from government and professional association sources. Packaging this data for use by AI/cognitive systems—with or without additional metadata—is a natural extension to the conventional content publishing model and is in progress for several domains.
Common knowledge—the data that helps us interpret natural language in context—has utility across industries and is generally more difficult to codify. The Cyc knowledge base, which contains over 630,000 concepts with 38,000 types of relationships, has been in development for decades and is now commercially available. Open source projects like WordNet, which catalogs words, synsets, and senses in English, can give application developers a jumpstart on building robust solutions with natural language capabilities.
Representative Vendors and Projects: Cognitive Scale, CyCorp, and WordNet. | <urn:uuid:499915e1-12e4-47cf-b0da-f483ee3f234b> | CC-MAIN-2022-40 | https://aragonresearch.com/glossary-knowledge-libraries/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00377.warc.gz | en | 0.926959 | 302 | 3.046875 | 3 |
We heard a lot of people talking and running after centralized cloud storage frameworks. Centralized cloud storage frameworks are usually not well-received due to their various drawbacks, such as centralized operational, hacking incidents, server failures, and frequent data leaks. Developers are getting cautious about these failures and looking for ways to store data safely, stable, and cost-efficiently.
And guess what? We found a way to save it by switching to a decentralized storage protocol.
What is decentralized storage protocol?
Decentralized storage protocols are designed using blockchain technology. It provides an incentive system to enable people to store data on multiple network nodes. These protocols introduce a tokenomics model to realize distributed and piecewise data storage on multiple network nodes. The two major categories of decentralized storage protocols are:
1. One category storage as arithmetic power and users mine by providing hard disk space. The consensus mechanism is benchmarked against Bitcoin-type projects, and the representative project is called Siacoin.
2. The other is storage as a service, enabling users to store data as arithmetic power.
In simple words,
One of the key components of a decentralized web is its storage system. The data is distributed in different chunks among various peer-to-peer network nodes. Several distributed file-sharing systems such as Napster and Bit torrent have been tried and tested before. However, these were not designed to be built-in infrastructure. The advantages of storing data in a decentralized environment are similar to those of a decentralized web.
Why get decentralized storage protocols within the organization?
It is recommended to take decentralized storage protocols for some of the valid reasons mentioned below:
1. Security –
Each copy of data is encrypted and stored in its own unique and secure way. Only the private key can decrypt it and view it in digital currency. Data is stored in a decentralized manner, which means that only the parts of the data are affected by hackers.
2. Reliability –
Blockchain is a digital ledger that enables transactions to be distributed without intermediaries. Its decentralized nature allows storage protocols to be used for storing data. A cloud storage protocol is a way to store data that uses decentralized storage and decentralization characteristics.
In the decentralized storage protocol, all the blocks are verified. Workers should verify any deposited data, that is to prevent data tampering.
A distributed storage system achieves load balancing, preventing traffic from overwhelming a single location. Since users store multiple copies of their data in different areas, they do not suffer data loss if their machines fail or stop working.
Since there will be so many decentralized nodes, the market will be more open and competitive. The price of data storage can vary widely depending on the storage platform used and the complexity of the storage requirements.
For the next thing to do, you must look up some popular decentralized storage systems within your organization.
Known some popular decentralized storage systems
1. InterPlanetary File System (IPFS)
IPFS is a distributed file system that can replace HTTP. It is similar to HTTP in that it sends and receives files over HTTP. IPFS is a vital component of Skeps’ decentralized architecture. It helps secure the transfer of files between nodes.
The Storj network is a distributed object store that enables users to store and distribute data globally. In short, it is a private, decentralized, affordable, and secure cloud object storage for developers. It enables any computer running its software to rent unused hard drive space to individuals looking to store data. Storj is an subsitute to cloud storage platforms like those offered by Google and Amazon. However, Storj depends on software and the network of computers to manage its data storage. Significant components of Storj are Satellites, Storage Nodes, and Uplinks.
OrbitDB is a distributed, serverless, and peer-to-to-peer database. It uses IPFS to store and distribute its data and IFPS Pubsub to sync the database with peers automatically. The data in OrbitDB should be stored in various ways, such as sharded or kept in its original state. It supports multiple data models, such as keyvalue, log, feed, counter, and documents. OrbitDB databases are consistent, achieved with Conflict-Free Replicated Data Type (CRDTs) for conflict-free database, making it an excellent choice for decentralized Apps (dApps), offline-first web applications, and blockchain applications.
At Skeps, they build a distributed database called ChainWolfDB, which OrbitDB inspires. Its goal is to provide a distributed database with consistent and securely stored data. It uses encryption to transfer data and supports transactions that help keep all the nodes involved in the data safe.
GUN is a real-time, decentralized, offline-first graph database engine that is lightweight and powerful. The GUN is a distributed database system that can sync data between connected nodes. Its goal is to provide consistent and real-time updates.
The flexible data storage model is used for storing data in various configurations. It can store relational data (MSSQL or MySQL), tree-structured document orientation (MongoDB), or a graph.
In the Web 3.0 era, where data ownership and security are more critical, reliability, safety, and scalability will profoundly impact companies.
The curtain has just turned on the Web 3.0 revolution, and decentralized storage still has a long way to go in infrastructure, talent, infrastructure, and funding.
Although decentralized storage is still in its early stages, the national policies will also influence the development of the industry. Despite the various national policies and market trends, centralized storage platforms are still the mainstream method for storage.
We are on the verge of rapid technological advancements and are still waiting for decentralized storage protocols to exist.
For more such content download our latest whitepapers on storage here. | <urn:uuid:724e9ecc-684c-450f-8fa5-a71d851b6333> | CC-MAIN-2022-40 | https://www.fintechdemand.com/news/it-infra-news/storage-news/decentralized-storage-new-cool-in-the-cloud-storage-market/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00377.warc.gz | en | 0.914756 | 1,224 | 3 | 3 |
The technique of data masking provides an organization with a lot of flexibility with how its data can be utilized. Data masking basically switches the authentic, sensitive business data in documents with dummy values that make little sense to allow those with restricted access to still view the remaining contents of the document.
As Mordor Intelligence highlights, organizations exceedingly prefer this method, and the market for data masking is predicted to grow at a 13.69% CAGR between 2021 and 2026.
Let’s understand data masking in more detail.
What is Data Masking?
Data masking is a method that obscures, scrambles, or completely jumbles up certain portions of a dataset such that they cannot be comprehended or deciphered by the readers. This technique is adopted predominantly to restrain the breach of sensitive information within an organization.
Essentially, data masking helps keep the business data functional and circulation-friendly by obscuring selective information that the administration deems available only to authorized users. This technique is being increasingly deployed in training, testing, and demonstration drives at organizations. Today, data masking is so secure that it cannot be reverse-engineered – ensuring the absolute safety of restricted information.
Reasons to Practice Data Masking
Data masking provides a great way to restrict data access; it also has certain other benefits, as mentioned below:
- Data masking is a great barrier that checks for data exfiltration, snooping, and information compromise.
- It is a great option to ensure data security when Clouds are involved.
- In case a breach does happen, the leaked/compromised data would be unusable to the attacker due to the dummy values contained in it by masking.
- It gives organizations control over data sharing and exposure.
- It works better than data deletion – a process in which data can be recovered – because data masking obscures the information without reversal.
Types of Data Masking
Data masking can be accomplished using various techniques. Let’s see what some of these are.
Static Data Masking
The process of static data masking involves creating a clone of the existing database to create a sanitized version that can be shared. It works by first creating a copy of the database and then deleting the unnecessary information that need not be shared. The masking process is performed, and this newly sanitized database is then sent to the destination.
Deterministic Data Masking
This technique involves determining the same types of datasets and using the same dummy value across all locations for this identified dataset. For example, for masking the name “Jill Johnson,” the technique will employ using the dummy name “Jane Doe” everywhere “Jill Johnson” appears. This method isn’t very secure, though.
On-the-Fly Data Masking
This method of data masking essentially deals with smaller pockets of data. It is an “as and when required” approach. Data that must be circulated is masked right then and there before being broadcast to the target location. This is especially helpful where there is no time to create database backups first – like in software deployments.
Dynamic Data Masking
This technique is the same as on-the-fly masking; however, while on-the-fly stores a database copy on a production base, dynamic data masking does not. The information is constantly streamed across systems.
Techniques of Data Masking
Different types of data masking apply different techniques. Some of them are as follows:
- Data Encryption. The most popular type of data masking is to encrypt it so that the actual data is replaced by meaningless values. The encrypted data can’t be deciphered unless the user has the key.
- Data Scrambling. Scrambling does exactly what one might think – it completely jumbles existing characters in such a manner that they make no sense. However, it is less secure and limiting.
- Nulling Out. Wherever sensitive data is not intended to be seen by unauthorized people, the fields are populated with “Null” or “Missing,” thus making the data useless unless it is for simulation.
- Value Variance. In situations where some data is required to run tests successfully, the actual values are replaced by the maximum or minimum differences of the same so that operations can still continue with the data.
- Data Substitution. This is a spin-off of the value variance technique, where the data in question is replaced by another random value of the same nature so that operations can be carried out with the new value, which is a dummy.
- Data Shuffling. Like the word suggests, this method of masking shuffles the similar data amongst itself, changing the order and replacing it with a randomized sequence of the same values, which can’t be misused.
- Pseudonymization. This is the latest technique prescribed by the GDPR, where citizens’ personal identifiers are replaced by pseudonyms or dummy values that can’t be linked to the owner of the data. This protects user privacy.
Best Practices for Data Masking
Data masking can be used most effectively when certain best practices are involved. By ensuring the following checks are in place, you can make the most of data masking.
- It is important to determine the scope of masking in terms of authorization of use, values to be masked, permitted applications, storage, and transfer of the data.
- Integrity between departments needs to be maintained when employing different masking tools. Synchronization that allows effective communication must exist.
- Data masking makes the target data safe; however, the algorithm that masks the data also needs to be secure for this method to be truly effective. Ensure that data masking algorithms are safe, protected, and secure.
Protection of data can be performed in many ways. Data masking allows for the safe and uncompromised usage of business data in testing, training, and demo programs which help improve consumer experience and business bottomline. It also helps keep prying eyes at bay.
Data intelligence is emerging as a necessity in the business world. Copious amounts... | <urn:uuid:d234aeeb-60a1-447f-ad67-1aa80d9143af> | CC-MAIN-2022-40 | https://secuvy.ai/blog/data-masking-101-everything-you-need-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00377.warc.gz | en | 0.909195 | 1,293 | 3.03125 | 3 |
If SATA is short for Serial ATA, and SAN is the acronym for a Storage Area Network, does it follow that a SATA Network is a ‘SATAN’? Certainly high-end SCSI-based storage hardware vendors can be forgiven for labeling SATA as the work of the devil.
As recently as January 2002, however, with a second roll-out forecast delayed, it looked like SATA would end up as yet another technology promise without support or substance. Since then, though, the adoption of SATA standards has been described by some as unprecedented. SATA has rapidly established itself as a serious rival to parallel technology in enterprise storage environments.
“More and more we are seeing that ATA is taking over from SCSI,” says John McArthur, Group Vice President Storage Research at IDC. “We are seeing greater usage of slower ATA and SATA drives for various uses due to its low cost and high capacity. SCSI on the other hand tends to be faster and more reliable, but more expensive.”
So what exactly is this disruptive technology? Serial ATA, short for Serial Advanced Technology Attachment, is a standard for connecting disk drives and systems. SATA is based on serial signaling technology, whereas traditional ATA (also called Parallel ATA or PATA) uses parallel signaling.
SATA has several practical advantages over parallel signaling. SATA cables are thinner and more flexible than the ribbon cables required for conventional PATA drives, which results in easier installation and a reduction in space required for SATA-based hardware. SATA cables can also be considerably longer than PATA ribbon cables, another factor that enables greater leeway in the physical arrangement of the storage network.
Because there are fewer conductors in serial signaling technology, crosstalk and electromagnetic interference (EMI) are less liable to cause problems. The signal voltage is much lower as well (250 mV for SATA as compared with 5 V for PATA). Together these technical advantages ensure far greater efficiency in integration for SATA over PATA.
Page 2: A Rapid Ride to Acceptance
A Rapid Ride to Acceptance
The reasons for SATA’s rapid progress are simple — SATA technology is improving in performance and capacity while continuing to benefit from a reduced cost. This has created an enticing sweet spot of price/performance that is particularly attractive in delivering storage solutions for non-mission-critical data.
SATA applications are wide ranging as well. The technology can be used, for example, as a near-term repository for data that will eventually be archived to tape. Additionally, SATA can be used in snapshot repositories, in remote volume mirroring destinations, or for electronic vaulting. SATA is well suited to tiered storage uses as well, especially for applications that require storage with varying performance, availability, and cost characteristics.
SATA is also gaining acceptance in low cost, entry-level SANs targeted to small and medium-sized businesses (SMBs), especially when coupled with iSCSI. According to some analysts, SATA storage solutions could cut costs by as much as 60%.
Even without this cost benefit, from a technology viewpoint there are compelling drivers for utilizing SATA. As well as bandwidth and flexibility demands putting increasing pressure on parallel systems, there are inherent problems with traditional SCSI and ATA, including incompatible cables and connectors, different software, simple physical space problems with bulky SCSI cables, and restrictions on the lengths of cables due to the need for eliminating the possibility of signal errors. In solving these issues, SATA improves integration efficiency and creates long-term scalability and cost benefits that are vital for forward progress.
Where SCSI has excelled to date has been in speed and reliability. ATA and SATA drives typically operate at speeds of 5,000 to 10,000 rpm (revolutions per minute), whereas SCSI generally operates around the 15,000 rpm mark. And MTBF (Mean Time Between Failure) for ATA/SATA desktop drives has usually been pegged in the range of a few hundred thousand hours, while SCSI drives generally are rated at well over a million hours. The tradeoff, of course, is the higher cost.
Perhaps the most interesting development, though, is the melding together of the best of both worlds through the integration of SATA and SCSI via Serial Attached SCSI (SAS). This next-generation evolution of SCSI leverages proven technology while enabling integration with SATA and all of its inherent benefits.
One of the crucial features is the enablement of one of more SAS host controllers to connect to a large number of drives. Using an expander, a controller can connect to other host connections and expanders. This architecture enables massive storage network topologies, as well as the balance of lower-cost/lower-performance SATA drives where they are appropriate with higher-cost/more reliable SCSI devices in areas where they are needed.
According to IDC’s McArthur, “SATA and SAS is a marriage made in heaven.”
This story originally appeared on Enterprise IT Planet. | <urn:uuid:79a791d6-2390-4382-9595-777200231803> | CC-MAIN-2022-40 | https://www.enterprisestorageforum.com/hardware/scsi-failing-to-drive-out-satan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00377.warc.gz | en | 0.951277 | 1,035 | 2.78125 | 3 |
Composable infrastructure is an approach to data center architecture that decouples high-performance applications and workloads from the underlying hardware. A composable data center infrastructure creates pools for data center resources based on where they will run most effectively at the moment.
Storage, computing, and networking resources traditionally ran on the servers that had been specifically configured for them in data centers. Multiple architecture revisions have been made to data centers, including converged and hyper-converged infrastructures, intended to make IT resources run more efficiently and quickly. Composable infrastructure has a similar purpose.
Benefits of Composable Infrastructure
Composable infrastructure, also referred to as composability, is a software-defined method of disaggregation. Disaggregation in a data center abstracts resources from hardware so that they’re connected to each other through the network fabric instead. Software-defined infrastructures allow resources and applications to be managed through programs rather than by the hardware on which they sit.
A designated API or software platform, called a composer, prepares hardware so that IT teams don’t have to manually configure and provision every aspect of the architecture. The composer distributes resources into pools depending on the need at the moment.
That responsive redistribution—which also enables flexibility and scalability—is one of the reasons composability is becoming popular: data centers need applications to move to a new environment if they aren’t running well in their existing one. This might mean that, when one server goes down, the storage pool is immediately moved to another server.
The software-defined pools created through composability are able to run on all hardware within the infrastructure because the composer software manages how workloads run on the available servers. Composable infrastructure is designed so that applications and workloads can run on bare-metal servers.
Composable infrastructure, then, makes all network, storage, and compute resources available over the data center network. Instead of only being accessible through one computer, server, or silo, they are virtualized. Composability reduces silos within IT infrastructures.
Composable systems typically run on premises, which make them cheaper than standard public clouds. They’re advantageous for businesses that still need legacy applications to run well on the existing servers in their office.
Difference Between Composable, Hyper-Converged, and Converged Infrastructures
These three modern data center infrastructures have some similar goals—bringing resources together while also making them more flexible—but achieve those with varying levels of success.
Converged infrastructure, designed as a more flexible alternative to traditional IT infrastructure, was intended to bring all necessary IT resources into one system. That includes computing, networking, and storage components. It doesn’t entirely do away with silos, but it’s convenient for businesses that want pre-configured or pre-installed systems.
In converged infrastructure, the vendor’s configuration recommendations typically recommend where resources are placed within the data center. The entire converged system is often purchased as both hardware and software—servers and all resources come together. This means that all aspects of the infrastructure work well, but it also means risking vendor lock-in.
Hyper-converged infrastructure is similarly intended to combine compute, network, and storage components within a data center, but the infrastructure is virtualized. A hyper-converged system is abstracted from the underlying servers, which requires hypervisors.
Hyper-converged infrastructure typically uses x86 servers. It requires some configuration, and to scale out the infrastructure, administrators must install more nodes within it. Hyper-converged infrastructure is intended to be highly scalable.
Composable infrastructure differs from both in its disaggregation. Instead of being bound to hardware or a hypervisor, composable computing, networking, and storage resources connect to the network fabric rather than one server. The composer software moves resources to pools dependent on the need at the moment.
Converged infrastructure helps data centers manage their IT resources in one infrequently changing system. Hyper-converged infrastructure is a virtualization solution for businesses that need to scale their computing resources. Composability is an ideal solution for data centers that have fast-changing application and workload needs. | <urn:uuid:5878b500-30fb-4cd5-8b24-30670dbdbce1> | CC-MAIN-2022-40 | https://www.enterprisestorageforum.com/networking/composable-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00377.warc.gz | en | 0.935016 | 876 | 2.828125 | 3 |
Distributed antenna systems are a network of antennas distributed throughout an area to enhance network performance. This antenna network is robust compared to a single, large antenna that covers a wide area. Read on for more information about the importance of distributed antenna systems and how it allows contact with forces responding to buildings in the process of being damaged.
What is a Distributed Antenna System?
A distributed antenna system (DAS) is designed for indoor or outdoor use and provides wireless coverage in subways, hospitals, hotels, businesses, roadway tunnels, etc. In addition, these wireless services are generally provided by a DAS consisting of cellular, Wi-Fi, fire, police, or emergency services.
In addition, distributed antenna systems assist responders in keeping constant radio contact with others during emergencies. These systems are critical for emergency response situations because clear communication affects rescue efforts.
What Are the Types of Distribution Systems?
Furthermore, there are four distribution systems: active (it uses fiber optic or ethernet cable), passive, hybrid, and digital.
- A passive distributed antenna system uses passive components like splitters, tapers, and coaxial cables to distribute signals inside a building. In addition, this system manages the wireless signals through “leaky” feeder cables that perform as antennas within the building.
- An active DAS system transforms the analog RF signal into a digital signal for distribution. After the conversion, the digital signal is transmitted through fiber optic or Ethernet cables to the antenna systems.
- Another type of distribution system is a hybrid DAS system. This system uses coaxial and fiber optic cables to spread the signal around a building.
- Lastly, the digital DAS communicates directly with the DAS master unit and around the remote units without any transformation to an analog RF interface.
Pros and Cons of Using Distributed Antenna Systems
Moreover, some common advantages of using distributed antenna systems include an enhanced defined coverage, fewer coverage holes, same coverage using less power, and the individual antennas do not require to be high as a single antenna to receive the equivalent coverage.
In contrast, the common disadvantages of using distributed antenna systems are a potential visual effect in some applications due to a high number of antennas and higher cost due to necessary infrastructure.
Get in Touch with FiberPlus
FiberPlus has been providing data communication solutions for over 25 years in the Mid-Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
- UL2050 Certifications and installations for Secure Spaces
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry. | <urn:uuid:fc4751f4-ed05-42e0-9884-8fd903853aca> | CC-MAIN-2022-40 | https://www.fiberplusinc.com/services-offered/the-importance-of-distributed-antenna-systems-for-public-safety/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00377.warc.gz | en | 0.911088 | 651 | 2.859375 | 3 |
What is DID?
DID stands for Direct Inward Dialing. DID is a feature offered by telephone companies for use with their customers PBX (private branch exchange) system. By using DID, a company can offer its customers individual phone numbers for each person or workstation within the company
What is VoIP?
VoIP stands for Voice over Internet Protocol. VoIP technology converts voice into digital signal which then travels over Internet Protocol (IP) networks, such as Internet.
How is VoIP used with DID?
For many years to acquire a phone number for an individual phone the only medium was an operator physically allocating phone lines to the PBX system. In today’s world all of that can be achieved with the internal networks and VoIP. It is fast, cheap and relevantly easy.
For a company to use VoIP technology along with DID all they need to do is:-
- Purchase a PBX software package that best meets your needs.
- Find a company that provides the DID numbers
- Download and install the software on a server..
- Test your system and go live 🙂
Local Phone Numbers for your Business
We sell Local Phone Numbers AKA DID, you can get DID from anywhere and we will send call to your PBX. | <urn:uuid:8c024656-9541-4028-b4e1-55dae28d5fe2> | CC-MAIN-2022-40 | https://www.didforsale.com/what-is-did-and-how-it-works-with-voip | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00377.warc.gz | en | 0.929756 | 264 | 2.671875 | 3 |
1 - Getting Started with Word
Topic A: Navigate in Microsoft WordTopic B: Create and Save Word DocumentsTopic C: Manage Your WorkspaceTopic D: Edit DocumentsTopic E: Preview and Print DocumentsTopic F: Customize the Word Environment
2 - Formatting Text and Paragraphs
Topic A: Apply Character FormattingTopic B: Control Paragraph LayoutTopic C: Align Text Using TabsTopic D: Display Text in Bulleted or Numbered ListsTopic E: Apply Borders and Shading
3 - Working More Efficiently
Topic A: Make Repetitive EditsTopic B: Apply Repetitive FormattingTopic C: Use Styles to Streamline Repetitive Formatting Tasks
4 - Managing Lists
Topic A: Sort a ListTopic B: Format a List
5 - Adding Tables
Topic A: Insert a TableTopic B: Modify a TableTopic C: Format a TableTopic D: Convert Text to a Table
6 - Inserting Graphic Objects
Topic A: Insert Symbols and Special CharactersTopic B: Add Images to a Document
7 - Controlling Page Appearance
Topic A: Apply a Page Border and ColorTopic B: Add Headers and FootersTopic C: Control Page LayoutTopic D: Add a Watermark
8 - Preparing to Publish a Document
Topic A: Check Spelling, Grammar, and ReadabilityTopic B: Use Research ToolsTopic C: Check AccessibilityTopic D: Save a Document to Other Formats
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
This course is intended for students who want to learn basic Word 2016 skills, such as creating, editing, and formatting documents; inserting simple tables and creating lists; and employing a variety of techniques for improving the appearance and accuracy of document content.
To ensure your success in this course, you should have end-user skills with any current version of Windows®, including being able to start programs, switch between programs, locate saved files, close programs, and access websites using a web browser. | <urn:uuid:378a0017-7ef5-48b1-9559-78938e666629> | CC-MAIN-2022-40 | https://charleston.newhorizons.com/training-and-certifications/course-outline/id/1002452564/c/word-2016-part-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00577.warc.gz | en | 0.759278 | 447 | 3.109375 | 3 |
More organizations are taking data security very seriously as they realize they may not be as safe as they thought. Small and large organizations implement perimeter security systems, like firewalls, intrusion detection systems, data loss prevention and more. They try to lock down devices and the network. They put complex passwords on important systems and try to control viruses and malware.
Unfortunately with all this security, data breaches still occur. Attackers steal payment information, account credentials, intellectual property and trade secrets. Sometimes an organization appears to have everything covered, but the human element gets in the way.
Breaches fall into two categories: accidental and malicious.
Accidental data breaches are caused by misplaced laptops, external hard drives, or USB flash drives. Employees or contractors may share sensitive data through email because they selected the wrong email address. Another problem is leaving sensitive data on retired desktops, severs or mobile devices.
Malicious or deliberate data breaches are either done by hackers looking to mine sensitive employee and customer information or by an employee who wants to make some money or otherwise hurt the organization. Sometimes it’s difficult to determine the person’s motivation, but the end state is still the same. Your organization is compromised.
The answer is to control the information when you create it by encrypting it and assigning policies that determine who can access the information and what they can do with it. You need to encrypt files and apply a persistent security policy on each document to prevent unauthorized users from accessing them. You can decide who can view, edit and print the document. If a hacker gets hold of the file, it is useless to them. If a malicious employee gives the information away, you can kill its access. T he same is true for accidental breaches. If an employee sends a document outside the organization, the external party will not have authorization to use it. If they somehow do, you can remove their access immediately, no matter where the document resides.
Perimeter security is important, but it’s not enough to stop the bad guys. Implementing a is a file-based security solution is your best bet.
Photo credit Frank Hebbert | <urn:uuid:c141fb2a-4d4b-43c3-98e8-035fbf987251> | CC-MAIN-2022-40 | https://en.fasoo.com/blog/are-you-asleep-at-the-switch/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00577.warc.gz | en | 0.929259 | 435 | 2.65625 | 3 |
Find Replace Tool
One Tool Example
Find Replace has a One Tool Example. Visit Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.
Use Find Replace to find a string in one column of a dataset and look up and replace it with the specified value from another dataset. You can also use Find Replace to append columns to a row.
The Find Replace tool has 3 anchors:
- Input anchors:
- F input anchor: This input is the initial input table ("F" for "Find"). This is the table that is updated in the tool's results.
- R input anchor: This input is the lookup table ("R" for "Replace"). This is the table that contains data used to replace data in (or append data to) the initial input.
- Output anchor: The output anchor displays the results of the Find Replace tool.
Configure the Tool
The Find Replace tool configuration is comprised of 2 sections: Find and Replace.
- Choose the radio button that best describes the part of the field that contains the value to find:
- Beginning of Field: Searches for the instance of the field value at the beginning of the field. The entire field does not have to only contain what is being searched for.
- Any Part of Field: Searches for the instance of the field value in any part of the field. The entire field does not have to only contain what is being searched for.
- Entire Field: Searches for the instance of the field value contained within the entire field. The instance MUST be there in its entirety to be replaced with the new value.
- Find Within Field: Select the field in the table with data to be replaced (F input anchor) by data in the reference table (R input anchor).
- Find Value: Select the field from the reference table (R input anchor) that contains the same values as the Find within Field field in the original table (F input anchor).
- Select optional search conditions:
- Case Insensitive Find: This option will ignore the case in the search.
- Match Whole Word Only: Strings are only matched if there are leading and trailing spaces. For strings at the beginning or end of a cell, there must be a space at the other end.
You can choose to replace or append data in the table using these radio buttons:
- Replace Found Text With Value:
- Choose the field from the reference table (R input anchor) to use to update the original table (F input anchor) Find Within Field.
- Optionally select Replace Multiple Found Items (Find Any Part of Field only). This should only be used if you selected Any Part of Field from the first radio button.
- Append Field(s) to Record:
- Choose this option to append a column populated with the reference table (R input anchor) data whenever the selected Find Value field data is found within the selected Find Within Field.
- Select the fields to append. | <urn:uuid:121747d0-25b4-48a1-b3dc-33b955a0f03a> | CC-MAIN-2022-40 | https://help.alteryx.com/20221/designer/find-replace-tool | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00577.warc.gz | en | 0.769207 | 628 | 3.109375 | 3 |
QR Code (Quick Response Code)
A QR Code (Quick Response Code) is a two-dimensional barcode that, when read by a machine, facilitates the rapid transfer of information.
Matrix, or 2D, barcodes of this kind generally contain information such as the location of where they are printed, a description of the item on which they are printed or tracking information. Today many QR codes are used with mobile devices and point to a website or open an application. They are also used to assist other processes such as multi-factor authentication (MFA) as well as enrollment or registration in all kinds of services.
QR Codes were invented in Japan in 1994 to quickly and efficiently track auto components along the manufacturing life cycle of a new car. Since then they’ve grown in their applications as well as in their usage. In the US in 2020, according to Statista, an estimated 11 million households scanned a QR code. That is up from its 2018 national estimate of 9.76 million scans.
"Due to heightened vigilance around contagious diseases, many restaurants now post QR codes around their dining areas so customers can use their mobile phones to pull up an online menu. This reduces high-touch surfaces such as printed menus." | <urn:uuid:dfa047a1-4531-4121-adf7-03b558bf52ae> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/qr-code-quick-response-code | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00577.warc.gz | en | 0.960359 | 255 | 3.375 | 3 |
What is GDPR?
The EU General Data Protection Regulation (GDPR), (Regulation (EU) 2016/679), is an important data privacy regulation coming to the European Union (EU). GDPR was approved in April 2016 and replaces Data Protection Directive 95/46/EC. The change takes effect on May 25, 2018. GDPR is a regulation, not a directive.
The regulation defines data subjects, data controllers, data processors. “Data subjects” are natural persons residing within the EU. A data controller is an organization, like Facebook, that collects data from EU residents. The data controller determines the purposes, conditions, and methods of the data processing. A data processor, like Amazon Cloud Services, is a company that processes personal data on behalf of the controller. GDPR applies if the data controller, data processor, or the person from which data collected reside with the EU.
Any information related to a natural person or ‘Data Subject’, that can be used to directly or indirectly identify the person. It can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address.
Who does the GDPR affect?
GDPR applies to all companies processing and holding the personal data of a natural person residing in the European Union.
- GDPR applies to organizations located within the EU
- GDPR also applies to organizations outside of the EU if they offer goods or services to, or monitor the behavior of, EU people
According to the official EUGDPR website:
“Under GDPR organizations in breach of GDPR can be fined up to 4% of annual global turnover or €20 Million (whichever is greater). This is the maximum fine that can be imposed for the most serious infringements e.g.not having sufficient customer consent to process data or violating the core of Privacy by Design concepts. There is a tiered approach to fines e.g. a company can be fined 2% for not having their records in order (article 28), not notifying the supervising authority and data subject about a breach or not conducting an impact assessment. It is important to note that these rules apply to both controllers and processors — meaning ‘clouds’ will not be exempt from GDPR enforcement.” | <urn:uuid:b9c54e8c-911a-40a3-a74e-070819462e60> | CC-MAIN-2022-40 | https://www.askcybersecurity.com/what-is-gdpr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00577.warc.gz | en | 0.901951 | 560 | 3.03125 | 3 |
A research team from the École Polytechnique Fédérale de Lausanne (EPFL) has developed a 3D-printed robotic eel. It’s called Envirobot, and has been designed to swim through contaminated water to detect pollutants.
It’s not unusual for scientists and engineers to look to the natural world for inspiration. There are few better examples of that than Swiss university EPFL‘s eel-inspired Envirobot. From a distance, you might think that Lake Geneva’s newest resident has escaped from a nearby zoo. But it’s actually a sophisticated robot capable of gathering a range of data from the water’s surface.
Water pollution detected
The ambitious amphibious project has been backed by the Swiss NanoTera Program. The end goal is to develop a swimming robot that autonomously detects the source of water pollution. Envirobot’s component parts are modular, designed to be switched in and out depending on the task at hand. It can be equipped with chemical, physical and biological sensors.
EPFL’s robot propels itself through the water like an eel and measures nearly 1.5 meters long. Its fluid movement has been designed to help it trace a path through water that won’t disturb the bed of a river or lake, or other aquatic life. Its sensors can measure a range of data points and send back information to a linked computer in real-time.
Auke Ijspeert, Head of EPFL’s Biorobotics Laboratory, pointed out the benefits of using an eel-inspired robot, rather than conventional measuring systems that rely on a web of fixed sensors.
“There are many advantages to using swimming robots,” he said. “They can take measurements and send us data in real-time – much faster than if we had measurement stations set up around the lake. And compared with conventional propeller-driven underwater robots, they are less likely to get stuck in algae or branches as they move around. What’s more, they produce less of a wake, so they don’t disperse pollutants as much.”
“The Envirobot can follow a preprogrammed path, and has also the potential to make its own decisions and independently track down the source of pollution. That could be by, for example, steadily swimming in the direction of increasing toxicity.”
Pushing the boundaries
In testing so far, Envirobot has been generating maps of water conductivity and temperature in a small section of Lake Geneva.
But the team from EPFL has been working alongside the University of Lausanne, the University of Applied Sciences and Arts of Western Switzerland and the Swiss Federal Institute of Aquatic Science and Technology to explore more potential applications for the technology.
While some of Envirobot’s modules contain conductivity and temperature sensors, others highlight a much more innovative approach to marine conservation research. Inside are tiny chambers that fill up with water as the robot moves along. Contained within the chambers are miniaturized biological sensors that are home to bacteria, small crustaceans or fish cells.
The sensors observe the reaction that these sensitive organisms have to the water, giving an indication of its toxicity and the type of pollutants present.
“We developed bacteria that generate light when exposed to very low concentrations of mercury. We can detect those changes using luminometers and then transmit the data in the form of electrical signals,” says Jan Roelof van der Meer, project coordinator and head of the Department of Fundamental Microbiology at the University of Lausanne.
Another biosensor relies on Daphnia, also known as water fleas. These tiny crustaceans are always on the move, but the extent of that movement is impacted by water toxicity. “By comparing changes in their movement relative to the control group, we can get an idea of how toxic the water is,” said van der Meer. | <urn:uuid:f959d65e-f6d1-4a00-af8e-5777f8b896e9> | CC-MAIN-2022-40 | https://internetofbusiness.com/epfl-3d-printed-robotic-eel/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00577.warc.gz | en | 0.929818 | 836 | 3.625 | 4 |
Researchers at the University of Tokyo have developed a mesh-based substance known as nanomesh that could be used to make wearable health monitors.
The research could lead to the development of non-invasive ‘e-skin’ devices that can monitor a person’s health continuously over a long period.
The breathable electronic skin can be safely worn for a week without causing any skin complaints. The researchers claim that the device is so light and thin that users forget they even have it on.
Wearable devices to monitor health conditions have come some way in recent years, with many devices coming onto the market that use highly elastic materials attached directly onto the skin for more sensitive, precise measurements. But many of these come with problems in that while ultrathin films and rubber sheets used in these devices stick well to the skin, they aren’t that breathable, posing risks for long-term use, such as preventing sweating and blocking airflow around the skin, causing irritation and inflammation.
Nanomesh long-term use
“We learned that devices that can be worn for a week or longer for continuous monitoring were needed for practical use in medical and sports applications,” said Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering, whose research group previously developed an on-skin patch that measured blood oxygen.
The team has developed an electrode made from nanoscale meshes containing a water-soluble polymer, polyvinyl alcohol (PVA), and a gold layer – all materials that are considered safe. The device can be applied by spraying a tiny amount of water, which dissolves the PVA nanofibers and allows it to stick easily to the skin. It can even conform to the shape of sweat pores and ridges of the skin.
The team then carried out skin patch tests on 20 people and found that after a week, there were no signs of inflammation or irritation. It also tested the device’s mechanical durability through repeated bending and stretching of a conductor attached on the forefinger, in excess of 10,000 times. The researchers found that the mesh was as reliable as an electrode for electromyogram recordings, where its readings of the electrical activity of muscles were comparable to those obtained through conventional gel electrodes.
“It will become possible to monitor patients’ vital signs without causing any stress or discomfort,” said Someya.
As well as medical applications, the new device could allow continuous, precise monitoring of athletes’ physiological signals and bodily motion without hindering training. | <urn:uuid:4d82309e-b89c-42ab-beaf-4ca58def6c82> | CC-MAIN-2022-40 | https://internetofbusiness.com/nanomesh-wearables-health-monitoring/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00577.warc.gz | en | 0.960113 | 523 | 3.25 | 3 |
While the field of artificial intelligence (AI) has been around for some 60 years , it’s now finally a part of our daily lives — including how we work, bank, shop, interact, invest, drive and get insured.
The term AI means different things to different people, but at PwC we think about it on a continuum, moving from assisted to augmented and, finally, autonomous intelligence. Here, I am primarily focusing on assisted intelligence — applications that help us better perform tasks we’re already doing today. This includes things like email filtering, automated processing of insurance claims and customer service chatbots, just to name a few applications.
Will AI and Robotics Replace Our Jobs?
Of course when you’re talking about AI, the question of automation and its potential to replace human jobs isn’t far behind. There have been many sobering predictions, including one by PwC’s own economic analysts , which suggests that around 38 percent of U.S. jobs could potentially be at high risk of automation by the early 2030s, followed by Germany (35 percent), the U.K. (30 percent) and Japan (21 percent). The automation appears highest in the transportation (56 percent), manufacturing (46 percent) and wholesale/retail (44 percent) sectors, but lower in healthcare and social work (17 percent). But is this the entire story? No. In reality, not all of these jobs will actually be automated, for a variety of economic, legal and regulatory reasons. Furthermore, new automation technologies will create new types of jobs in the robotics, software and decision support domains. Additionally, productivity gains will generate added wealth and spending that will support an ever-increasing amount of service jobs. Similarly, history has proven the same alongside the automation of the agriculture and manufacturing industries over the last century.
Use AI to Work Without Barriers
The net long-term impact of AI and automation on total employment isn’t a given — it could be either positive or negative. It will be a balance between an evolving workforce and the pace of technology advancement. Average pre-tax incomes should rise due to the productivity gains, but these benefits may not be evenly spread across income groups. For instance, the U.S. high school graduation rate reaching 83.2 percent (as shared by the White House in 2016), up from around only 40 percent post-World War II, may put pressure on the availability of unskilled labor. | <urn:uuid:ef49b76c-34b6-4c6c-8faa-9e9676450b6e> | CC-MAIN-2022-40 | https://swisscognitive.ch/2017/06/03/artificial-intelligence-role-of-workers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00577.warc.gz | en | 0.939346 | 505 | 2.59375 | 3 |
When a person sees a call from an unknown number and picks up to hear a recorded voice on the other end, they've received a robocall. Some are helpful, such as reminders of upcoming doctor's appointments or school announcements.
However, the vast majority are from unsolicited parties trying to convince people to purchase products or services, or to disclose personal information.
Robocalls are undoubtedly annoying, especially when they disrupt meetings, meals, or quality time with loved ones. But these intrusive calls pose serious threats to data privacy, too. And they're on the rise.
How common are robocalls in the US?
The problem with increasing numbers of robocalls in the United States is well documented. The Federal Communications Commission (FCC) receives over 200,000 complaints about robocalls each year, representing about 60 percent of its total complaint volume.
According to the YouMail Robocall Index, which measures robocalls placed and received nationwide, 43.3 billion robocalls have been placed so far in 2019, with an average of 131.9 calls received per person. For comparison, YouMail's data shows more than 48 billion robocalls for 2018, about 18 billion more than the 2017 total. If 2019 numbers hold, we'll likely see at least 10 billion more robocalls this year than we did last year.
The YouMail Index also shows that each US person received an average of about 14 robocalls last month. However, the calls come much more frequently in some area codes. Households in the 404 area code of Atlanta, Georgia, and its surrounding suburbs, for example, received more than 60 calls in September 2019.
Robocalls are particularly unceasing for some high-profile people. One opinion writer for The Washington Post stated that she received more than 14 robocalls in a single day—by 10 a.m. Not surprisingly, 52 percent of people who responded to a survey carried out by B2B research firm Clutch said they received at least one robocall per day, and 40 percent got multiple calls.
Court rulings and formal complaints
Some people find their lives so disrupted by robocalls that they file formal complaints or take legal action. In 1991, the Telephone Consumer Protection Act (TCPA) was signed into law, prohibiting pre-recorded or auto-dialed calls and texts to cell phones without explicit consent. The National Do Not Call Registry (DNC) followed in 2003, allowing users to explicitly opt out of telemarketing calls.
Since 2017, the Federal Trade Commission (FTC) has found that 66.8 percent of complaints filed with the DNC registry relate to robocalls, totaling a little more than 12 million. Of all complaints filed, the most popular call topic was debt reduction, while "imposters" ranked second.
While the TCPA states that consumers may receive a monetary payout for individual violations, including robocalls, court cases haven't always supported this literal interpretation. An August 2019 ruling in Salcedo v. Hanna, a TCPA-related case, stated that a single unsolicited text message was not injurious enough to proceed with a lawsuit.
Nuisance calls vs. high-risk
Users might be tempted to conclude that they needn't worry about data privacy when it comes to robocalls, but the high number of imposter schemes, fraud, scams, and spoofing activities associated with robocalls indicates otherwise.
Transaction Network Services looked at robocalls in a 2019 report and split them into two categories: nuisance and high-risk. Nuisance calls are not considered malicious and are often based on non-compliance, while high-risk calls center on fraudulent activity, such as scams delivered to collect money or personal details.
The report concluded that nuisance calls increased by 38 percent over the last year, while high-risk calls rose by 28 percent in the same timeframe. While nuisance calls are increasing at a higher rate than high-risk calls, continuing malicious robocall activity demonstrates the need for constant user awareness, as criminals are becoming more clever with their scamming techniques.
For example, robocalls don't just arrive as unknown numbers. One in 1,700 mobile numbers is hijacked by robocall spoofers every month, more than double last year's rate of one in 4,000 mobile numbers. As a result, 2.5 percent of people who have had their number hijacked have disconnected their phone. In addition, spoofed numbers easily trick users into picking up the phone, believing they'll hear a recognizable voice on the other end.
Robocalls collect PII
A startling statistic from the Clutch survey revealed 21 percent of people accidentally or intentionally gave information to a robocaller. Various factors may compel them to do so. For example, the Clutch data showed health topics were a common subject for robocalls. Similarly, most of the FTC's DNC call complaint data related to debt relief calls.
Scammers of all types focus on urgency. They convince people that if they don't act quickly, they'll face dire consequences. When a victim hears about something related to their health or money, they may offer personal details without taking the time to investigate. Also, a phone call requires in-the-moment communication, and many people instinctually respond politely to avoid conflict.
The time of day robocalls happen could also make individuals more likely to disclose their data in haste. Insider scrutinized five years of FTC call data and determined that unwanted calls most likely occurred on weekdays between 10 a.m. and 11 a.m.
That's when many people are at work, or at least trying to be productive. If they answer the phone and hear a robocall recording, they may think the quickest way to get relief from the annoyance is to give what's requested, especially if the robocall seems legitimate.
Scammers use real data
Another way robocalls threaten data privacy is the growing trend of scammers using genuine data to make their calls seem realistic. First Orion conducted a study of scam calls—not restricted to the robocall variety—and described a tactic called enterprise spoofing.
It involves scammers using actual data—often obtained from large-scale breaches—to impersonate real businesses and convince victims to give up personal details and money. The company's statistics showed three-quarters of people reported scam callers had accurate information about them and used those tidbits to put the squeeze on victims.
Indeed, most robocalls feature automated voices on the other end of the line, and people may never talk to humans. But, it's not hard to imagine how scammers could create a robocall message applying to a large segment of users, then snatch up individuals fooled by the scheme in follow-up real-time conversations.
How to protect against robocalls
The robocall problem opened an opportunity in the marketplace to develop apps that could block robocalls, or at least identify them. Many security vendors, including Malwarebytes, offer programs that flag or block scam calls and filter unwanted texts. These programs work in part by blacklisting numbers of known scammers, but also by using algorithms that recognize spoofing techniques or block numbers by the sheer volume of calls they place.
However, research indicates some scam call-blocking apps send user data to third-party companies without users' knowledge, or as specified deep within a multi-page EULA document. So we recommend users be critical about which apps they use to block unwanted calls.
Other ways to protect against robocalls include the following:
- Add your phone number(s) to the FTC's Do Not Call registry.
- Manually add numbers from robocallers into your phone's block list, located in "settings" for most devices.
- Don't pick up the phone if you don't recognize the number.
- Sign up for your carrier's call blocking service.
Data is king
If the last year of privacy scandals and data breaches from social media giants, educational institutions, cities, and local governments hasn't demonstrated this fact enough, the growing rate of robocalls further confirms that personal data is a valuable asset worth protecting from cybercriminals' greedy clutches.
Besides causing immense frustration for users, robocalls threaten user privacy by exposing victims to data-stealing scams. That reality gives users yet another reason to err on the side of caution when giving out personal information, even if the source seems authentic. | <urn:uuid:d50f6806-6668-40ad-8233-95f8a42d70e0> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2019/10/growing-rate-of-robocalls-threatens-user-privacy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00577.warc.gz | en | 0.952143 | 1,747 | 2.71875 | 3 |
Multiple servers are offline. Users are unable to utilize critical applications. No one seems to know where the applications reside, or who is affected by the issue, making it impossible to know how to get to the root of the problem or even begin troubleshooting. Pandemonium ensues. Productivity takes a downward spiral and eventually comes to a complete halt. A workplace disaster like this can easily be avoided with the help of a configuration management database (CMDB).
The accuracy that a CMDB provides is key to achieving environmental transparency, by recording the relationships and dependencies of devices, applications and workloads, either manually or automatically with the help of discovery tools. Having the ability to monitor, track and report on the complex relationships of environmental components provides a better understanding of how business systems and processes are impacted, as well as relevant personnel.
The beauty of a CMDB is that it provides both a holistic picture of environmental health and accurate nitty-gritty details. Individual items tracked in a CMDB, called configuration items, may refer to physical hardware (such as servers), logical entities (such as databases or virtual machines), or a conceptual entity like a service. Having the power to view dependencies makes for more efficient and accurate decision making and troubleshooting. Determine who may be affected by downtime due to a server outage before it even happens, so that risky guesswork can be avoided and productivity won’t suffer.
The larger an enterprise’s IT infrastructure grows, the more there is to monitor to ensure compliance and improve risk management. Being able to run an internal audit of your environment spares you from any unpleasant surprises during external audits. Visibility across your environment is a fundamental requirement for enforcing a robust cybersecurity posture. Being that bring your own device (BYOD) is on the upswing, the idea of a network perimeter is arguably nearing obsolescence. A CMDB provides the ability to log devices that move in and out of an environment, which facilitates easier targeting and patching of any potential security vulnerabilities.
While a CMDB’s accuracy is infinitely better than keeping a manual ledger of all the moving parts in your environment, it is only as good as the data that is entered. If a CMDB contains low quality, inaccurate or poorly maintained information, it cannot be helpful. If incidents are incorrectly classified, it is not possible to maintain correct historical data, which makes it harder to draw accurate conclusions about technical snafus. Additionally, without using asset management or discovery tools in conjunction with the CMDB, it becomes increasingly difficult to manage the financial impact of things such as licensing changes, equipment refreshes or losses. It is also possible to have too many assets or configuration item categories, which may render information useless if it requires users to sift through the data. The value of the CMDB is also evident when a technician can immediately identify what device is offline and how to provide a workaround as soon as possible, so proper classification and organization are imperative.
A CMDB serves as a company's central repository of information and empowers informed decision making through its ability to display views of data. However, experiencing the numerous business benefits of CMDB accuracy is impossible without conducting a proper Current State Assessment. Determining current state is the foundation for achieving CMDB accuracy. Having a CMDB as your map helps to navigate technology change events. Without this process, organizations are left with countless, interconnected disparate elements within a complex environment.
A group of security experts have exposed a loophole that allows hackers to successfully identify an individual, accurately track their location, as well as keep a constant eye on their file sharing habits.
Researchers have documented the vulnerability in a paper called, "I Know Where You are and What You are Sharing: Exploiting P2P Communications to Invade Users' Privacy." In the report, the researchers have tried to draw attention to potential vulnerabilities present in real time online communication platforms such as Skype. Skype happens to be a massively popular VoIP (Voice over Internet Protocol) platform, acquired by the Windows maker Microsoft Corporation earlier in the year.
“We have shown that it is possible for an attacker, with modest resources, to determine the current IP address of identified and targeted Skype user[s] (if the user is currently active),” the 14-page paper states, according to a report by The Register.
“In the case of Skype, even if the targeted user is behind a NAT, the attacker can determine the user's public IP addresses,” it added.
The attack, according to researchers, could be carried out for a wide variety of purposes, including tracking an individual’s mobility, as well as for monitoring his or her Internet usage. | <urn:uuid:ea079dff-6f07-49f6-af70-157d3cf2384c> | CC-MAIN-2022-40 | https://www.itproportal.com/2011/10/21/internet-security-report-exposes-vulnerabilities-hackers-exploit-find-victims/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00577.warc.gz | en | 0.955511 | 265 | 2.59375 | 3 |
As with many of our former societal norms, the educational institution with which we are familiar is changing and evolving as a result of the COVID-19 pandemic. Students missed out on graduations, proms, and the last few months of the current academic year—what is even more disheartening is that we are faced with the possibility of schools remaining closed for the next academic year. We have seen drive-by birthday parties taking the place of large celebrations, video conferencing platforms replacing social gatherings, online classrooms, and remote work taking over the typical brick and mortar school or office setting. These adjustments have shown us that we are resilient and able to evolve and adapt in ways we would have never foreseen possible.
Parents have been tasked with doing it all, from upholding their careers to homeschooling while being stay-at-home parents, which, despite all the difficulties, has encouraged some of them to consider homeschooling their children in the next academic year.
It is important to decipher the difference between homeschooling and virtual schooling. Homeschooling involves the student’s parent as the primary educator, diving into unique instruction and often focusing on familial values and integrating this into academic instruction. Virtual schooling mirrors traditional face-to-face instruction, with certified teaching leading the virtual instruction and utilizing a plethora of platforms and delivery methods.
Still, we are faced with the question of what this new normal can teach us. Where will we be, and more importantly, where will the future leaders, today's children and students, be after experiencing firsthand the evolution of the academic institution? While the future is unclear and the path of the pandemic is uncharted, parents, educators, administrators, and in some cases, government officials are faced with the challenge of preparing for the unknown. The possibilities for the educational system range from putting simple adaptations in place, such as smaller class sizes, to larger, more involved changes like moving to an entirely remote or virtual platform. However, one thing is known: education and classrooms have already evolved since we encountered COVID-19, and the educational system as we know it will look different.
How can artificial intelligence and technology improve student performance?
According to a 2016 Stanford report on the trends of AI, we are making a “move from intelligent systems to systems that are human-aware and trustworthy,” with trustworthiness being an integral part of that shift (Stone et al., 2016, p.15). Experts do believe that education will most likely always require a human interaction component that cannot be replaced with computer-based learning systems; however, the projection surmises that gradual AI integration will ideally assist instructors and thus improve student retention and performance (Stone et al., 2016).
A promising hope for AI in academia is to drive self-paced, individual learning, incorporating the possibility of blended classrooms, and student-driven education (Stone et al., 2016). Naturally, this will require a level of parent involvement to ensure appropriate motivation in younger learners, but will ultimately benefit all levels of education in the United States as it supports learning at the individual student’s pace as well as matching the way they learn, ultimately personalizing education.
Virtual experiences, and eventually virtual reality, are taking students on explorations far outside the scope and sequence of a usual field trip outing and far beyond the classroom itself. Students are able to explore museums, parks, zoos, aquariums, and more through online streaming, and higher-level students (university level) are experiencing virtual reality in the form of archaeological digs (Stone et al., 2016). The possibilities for career and job training are truly endless when we consider where this technology is heading. Think about the medical field and surgical training already occurring with robotics—big things are coming with AI advancements. Now, it is time for educators and students to explore new learning tools and opportunities. As a companion robot, Roybi Robot is continuously enhanced to be more than an educational tool; it also provides emotional support, especially while we're all tackling the mental toll of social distancing and isolation. Machines can never take a teacher's place, but an AI-powered robot can assist teachers and parents in providing individualized attention to each child's skills and needs.
How can families integrate technology at home?
As we consider the future of America’s education system and how the global pandemic has impacted our academic structure, we would be remiss if we did not consider the implications technology will have on the home-school environment.
While the financial investment often required to implement advanced technologies places a barrier to education, especially homeschooling, there are ways to take advantage of technology during these unprecedented times as we watch schooling evolve. As mentioned, virtual experiences have filtered into the home over recent months, and most of these are free. What makes this shift from school to home effective is that children are willing and excited to engage with technology and find it interesting and fun. Sure, technology is not perfect and still has room for improvement before it can be fully integrated within our educational systems and as assistance at home. Yet, even this is a lesson, or perhaps a source of inspiration, for our future leaders and inventors. Discussions about the benefits, problems, and limitations of the technology are essential conversations that should be held regularly at home to make sure the youngest amongst us are smart users of different technological tools.
In conclusion, it is inevitable that we will witness shifts in the way we currently implement education in the United States. As we adapt our traditional methods and approaches to education, technology integration will be inherent. The future is promising and although uncertainties lie in exactly how this new system will look, we know that we are headed towards great things, combining the best that AI has to offer with educator expertise. Individualized, student-driven instruction has long since been a goal of our education system and as advancements progress, we are closer to meeting the needs of all students, giving each and every learner the chance to succeed.
Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller. “Artificial Intelligence and Life in 2030.” One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016. Doc: http://ai100.stanford.edu/2016-report. Accessed: September 6, 2016. | <urn:uuid:30bb29f8-996d-422c-8fca-62f88b259b68> | CC-MAIN-2022-40 | https://coruzant.com/ai/technology-and-the-future-of-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00777.warc.gz | en | 0.954836 | 1,373 | 3.125 | 3 |
What Is Phishing? How to Spot It and Stop It
Phishing is a social engineering attack where criminals send fraudulent messages—usually by email—purporting to be a legitimate business, organization, or person. The goal: trick a user into sharing sensitive data like login credentials or deploying malware.
Phishing is a cyberattack where criminals send fraudulent messages to trick a person into revealing sensitive information or downloading malware. It’s primarily conducted via email, though attackers can also use phone calls and text messages. Regardless of the delivery method, a phishing attack is usually disguised as legitimate communication from a known organization or individual.
Phishing is one of the most common, effective, and devastating cyberattacks in play. 44% of cybercrime losses come from business email compromise and phishing. It uses social engineering and link manipulations to trick humans instead of network systems. With increasing sophistication, phishing is a challenge to detect on a company-wide scale. Learn how phishing works, how to spot phishing attacks, and how Abnormal Security can stop it.
What is a Phishing Attack?
A phishing attack is a fraudulent communication that is designed to trick a person into giving up private information (like passwords or credit card numbers), paying money, or downloading malicious software. The end goal of most phishing attacks is the same: get paid, either by stealing banking credentials, committing invoice fraud, or holding data and systems ransom.
Phishing attacks use social engineering to target both individuals and businesses.
On an individual level, criminals use phishing to steal personal information.
On a business level, criminals use phishing to install ransomware and steal data.
It’s a social engineering attack because it relies on human error and gullibility instead of bypassing security systems. A victim may get a fake email impersonating their bank, for example, that says their account is suspended for fraudulent activity. The email contains a link to the bank’s website and urgently asks the user to log in to prevent account suspension.
The reputable-looking website is actually a phishing site masquerading as the bank's real login page. Once the user tries to log in, attackers have access to their banking credentials.
This is just one example of phishing. There are a variety of delivery methods and payout strategies.
Types of Phishing Attacks
Phishing is an umbrella term for various types of phishing attacks. While email phishing is the most common, there are several other phishing examples, including:
Email Phishing: Emails that trick you into revealing sensitive information or downloading malware.
Spear Phishing: A targeted form of email phishing that focuses on a single specific victim rather than a large group.
Vishing: Voice phishing, usually done via phone call or voice message.
Smishing: Phishing attacks delivered through text messages.
Pharming: Maliciously redirecting users from a legitimate website to a fake version, by malware or DNS spoofing.
Whaling and CEO Fraud: Phishing attacks that specifically target or impersonate high-ranking executives.
Angler Phishing: Phishing attacks targeting social media users, usually by impersonating brand accounts.
URL Phishing: Directing users to spoofed websites with fake URLs.
One of the more dangerous phishing attacks is credential phishing. This is a more targeted attack compared to the mass sending involved in other email phishing attempts. Attackers leverage prior knowledge of a target, such as their job title, responsibilities, and even close business contacts. They’ll use this data to dupe the victim into revealing sensitive information.
Common Signs of Phishing
There are several telltale signs to help you spot a phishing email:
Urgency: A cornerstone of phishing is manufactured urgency. Attackers frighten targets with urgent messages about impending account closures, legal trouble, or a time-sensitive invoice.
Typos: Phishing attempts often come with bad grammar and misspellings. Authentic communication from a bank, for example, usually doesn't have grammatical errors.
Suspicious Links: Phishing emails may include a link with anchor text that appears legitimate. Upon closer inspection (like hovering over the link), it’s a spoofed URL.
Unfamiliar Attachments: Phishing attacks include “important” attachments like invoices, which are just viruses.
Email Address Domain: The sender's email address looks similar to a company or organization's real domain name but is slightly altered. For example, @companya.com becomes @company-a.com (a simplified version of this check is sketched below).
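As a rough illustration of that last sign, a mail filter might compare a sender's domain against the domains an organization actually corresponds with and flag near misses. The sketch below is a simplified, hypothetical check in Java (the domain list, class name, and threshold are invented for illustration; real products combine far more signals):

```java
import java.util.List;

public class SenderDomainCheck {
    private static final List<String> KNOWN_DOMAINS = List.of("companya.com");

    // Flags a sender domain that is not an exact match for a known domain
    // but differs from one by only a character or two (a likely lookalike).
    static boolean looksSpoofed(String senderDomain) {
        for (String known : KNOWN_DOMAINS) {
            if (known.equals(senderDomain)) {
                return false;                      // exact match: not suspicious
            }
            if (editDistance(known, senderDomain) <= 2) {
                return true;                       // near miss: likely lookalike
            }
        }
        return false;
    }

    // Classic dynamic-programming edit distance.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(looksSpoofed("company-a.com")); // true  (one inserted character)
        System.out.println(looksSpoofed("companya.com"));  // false (exact match)
    }
}
```

Here, @company-a.com is flagged because it is only one edit away from the known @companya.com domain, while the exact domain passes.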
Detecting and Preventing Phishing Threats
Identifying a phishing email is tricky. There’s a reason it’s such an effective attack, so you can’t rely solely on end users to detect and prevent phishing attacks.
You need a strong, modern email security framework so phishing attacks don’t reach inboxes. Traditional products like secure email gateways and Microsoft and Google’s built-in security systems have trouble detecting certain sophisticated phishing attacks. These attacks rely on social engineering and don’t have some of the obvious phishing characteristics.
These are characteristics of an advanced phishing attack that can evade traditional security measures:
Passes a reputation check
Doesn’t contain suspicious links or attachments
Appears to come from a trusted contact
Abnormal Security can detect the phishing threats that outdated security misses.
Abnormal Security vs. Phishing
This email passes traditional security checkpoints. At first glance, it comes from an internal IT account and it’s a straightforward message: you need to update your VPN. Harmless, right?
Wrong. First, the email’s display name does not match the actual email address so it doesn’t pass DMARC. This is a sign of a spoofed domain. Second, the urgent request (“now required”) for login credentials sets off alarms. Third, the link appears legitimate but actually redirects to a lookalike site. Due to these factors, Abnormal flags it as a phishing attempt.
Since phishing attacks leverage urgency, any effective solution should account for that. Abnormal’s phishing security solution analyzes tone and language within the email. We can tell when an email is imposing unnecessary urgency or requesting a financial transaction, for example.
On the surface, many phishing emails come from trustworthy sources. But threat actors often send emails where the display name in the email header doesn’t match the actual sender address. Abnormal detects and flags email addresses that don’t pass sender authentication tests.
Phishing attacks usually leverage links or attachments to send a malicious payload. Abnormal inspects all links and attachments for suspicious content. For example, a link requiring login credentials, or a redirected URL that doesn’t match the anchor text.
Finally, Abnormal detects and locks the compromised accounts in your organization.
The more phishing emails Abnormal Security blocks, the smarter it gets. It's how the system can counter evolving phishing threats that also get smarter. Because the truth is that getting hit by a phishing attack is a matter of when and not if. And with attacks getting more sophisticated and prevalent, your protection must also match up.
Ready to evolve your phishing protection and enhance your email security? Get a demo to see how Abnormal Security can help protect you. | <urn:uuid:fb941f5a-103d-4166-8ba9-b08b0a84280b> | CC-MAIN-2022-40 | https://abnormalsecurity.com/glossary/phishing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00777.warc.gz | en | 0.904585 | 1,535 | 3.5625 | 4 |
As researchers find more security flaws in Oracle Java, the software continues to be used for exploitation and malware delivery. This year has been a shaky start for the cross-platform web technology, where it seems the number of documented vulnerabilities is hard to number.
If you recall, in January we saw a zero-day later found to be responsible for intrusions into companies like Microsoft, Apple, Facebook, and Twitter. Then in February, after seeing a Java patch with over 50 security fixes, reports surfaced that Bit9 was hacked using a separate Java zero-day. Then in March, an emergency patch was issued to address even more vulnerabilities.
Because we’re seeing java used more in malware, it’s important for researchers to know how to analyze and understand java code.Let’s take a look at one java archive (“jar”) we’ve seen in the wild that not only contains multiple exploits but also has an encrypted malware payload. This sample was provided by Malwarebytes researcher Jerome Segura and is called “sexy.jar”. The landing page, “sexy.html”, loads the jar as an applet and points to Q.class, a Java class file within the jar. To get more details on this, check out Segura’s blog entry on this here.
It’s important to understand that a jar is essentially just a zip archive, a file-format you’ve probably seen since you started using computers. Inside the archive are various things, most important of which are class files, or compiled java bytecode. This bytecode is executed within a Java Virtual Machine (JVM), part of the Java Runtime Environment (JRE), a term dubbed by Oracle describing Java’s execution environment. Many of you with Java installed on your computer use the JRE every day when you visit your favorite websites.
In order to streamline analysis of Java class files, we can use a popular tool known as a decompiler, which attempts to decompile programs into their original source code. The Java Decompiler project offers a graphical utility called "JD-GUI" for displaying Java sources, and it is my personal favorite and one of the best in the field. Another great tool for those who prefer the command line is JAD, which essentially does the same thing and can be found here. Both of these tools are available on Windows, Mac, and UNIX-based systems.
Analysis Let’s go ahead and take our jar and decompile it using JD-GUI. After that, we can view the code statically and attempt to understand what’s going on.
When we load sexy.jar into JD-GUI, we see a package called “game” and six class files, along with another file titled “sexy”. As I mentioned before, the “Q” class in the jar is loaded as an applet, which will reference other packaged class files throughout execution. The file labeled “sexy” contains an encrypted malware payload that will be dropped to the disk and executed. This is not a traditional approach as a jar usually doesn’t contain the malware itself.
You’ll instantly notice that all the strings are part of the “O” class. These are all encrypted using rot13, a simple substitution cipher that I talked about here. You’ll notice that every string declared in this class is first passed through the rot13 function at the bottom of the code.
Here are the decrypted strings used in this jar:
CVE-2012-0507
The CVE-2012-0507 exploit is attempted first, implemented in the C and Z classes. CVE-2012-0507 is a vulnerability in the JRE that occurs because the AtomicReferenceArray class does not check if an array is of an expected Object type (you can read more about this here).
The C class contains a long hex string (as seen above) that decodes to methods used for the exploit.
Eventually the “Z” class creates a new class during runtime (game.N) to drop the malware in %temp%\XrwfQ_w.exe
The new class first has to be decoded in the "W" class XorDecrypt function; this takes a large encrypted bytecode array called encoded and decrypts it as the "N" class.
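Single-pass XOR decoders like this are common in malicious jars. The routine below is a simplified illustration assuming a one-byte key; the sample's actual XorDecrypt implementation may use a longer key or additional transformations:

```java
public class XorDecode {
    // XORs every byte of the encoded array with the key, recovering the hidden class bytes.
    static byte[] xorDecrypt(byte[] encoded, byte key) {
        byte[] decoded = new byte[encoded.length];
        for (int i = 0; i < encoded.length; i++) {
            decoded[i] = (byte) (encoded[i] ^ key);
        }
        return decoded;
    }
}
```

Once decoded, bytecode like this is typically handed to a ClassLoader's defineClass method so the hidden class (here, game.N) can be loaded and executed at runtime.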
Finally we can see the file is decrypted and dropped within the “N” class, using the dropFile function.
CVE-2013-0422
The second exploit, CVE-2013-0422, is called if you're running Java 7 and is implemented in the T class. The exploit uses a private mBeanInstantiator object and the findClass method to reference arbitrary classes, which in this case is also our embedded "N" class. If the jar takes this exploit route, the payload is dropped in %temp%\GWiL2S.exe
Debugging an applet
In some situations you might want to see things dynamically as they execute instead of the plain static view. This can be accomplished with our jar by debugging it as an applet.
Debugging a jar isn’t as straightforward as a native system binary, like an EXE. One of the best methods I’ve found is using the Eclipse IDE for Java Developers to step through the code. However, if you’re going to take this route, you’re going to need to do a little prep work.
First we’ll need to overwrite library files in the JRE install directory with those from the Java Development Kit (JDK), a tool used to assist Java developers. We need to do this because the library files in the JDK are compiled with debugging information that you’ll need to step into core java classes. Here are the steps to do this:
- Backup the .jar files from JRE_HOME/lib
- Download and install a JDK for the SAME VERSION as your JRE.
- Copy the .jar files from JDK_HOME/jre/lib to JRE_HOME/lib
Once you’ve completed this step, you can launch Eclipse and create a new project. You’ll want to set it up in a similar way to the jar you’re analyzing (in this case, a package called game and all the java sources inside). Here is what mine looked like below.
Next you’ll need to build a Debug Configuration for the applet. Make sure that you pay attention to any parameters the applet might need to execute properly (in the case of this jar, there are 3).
Now you need to set a breakpoint in your code and you can start debugging. Also, you may need to add java source files to your project’s build path if you want to step into java system libraries and observe that code.
Notice how I’ve taken a few steps in the code and already retrieved the OS name, Java version, and some parameters. I can continue to step through the code and terminate the applet when desired.
Conclusion
I hope this article gave you a better understanding of the Java exploitation landscape.
Understanding how to analyze Java code is necessary as the web technology from Oracle continues to be exploited; there's no doubt we'll continue to see jars used in malware, as well as new techniques like embedded class files and encrypted malware payloads within the jar to keep researchers on their toes.
With some practice and prior programming knowledge, most java code can be understood when viewing decompiled source code. Debugging is always an option too, but the setup time can be lengthy, so it may not be worth the effort in some cases. If you do end up choosing this route, remember to do so in a secure, isolated environment, like a Virtual Machine, to prevent malware infections. When you analyze and execute malware, you do so at your own risk, so take plenty of precautions.
Joshua Cannell is a Malware Intelligence Analyst at Malwarebytes where he performs research and in-depth analysis on current malware threats. He has over 5 years of experience working with US defense intelligence agencies where he analyzed malware and developed defense strategies through reverse engineering techniques. His articles on the Unpacked blog feature the latest news in malware as well as full-length technical analysis. Follow him on Twitter @joshcannell | <urn:uuid:268daa78-f189-4c66-8aea-3764d5bdcf40> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2013/04/malware-in-a-jar | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00777.warc.gz | en | 0.890034 | 2,064 | 3.109375 | 3 |
May 10, 2022
VMware VXLAN Explained: Advantages and Implementation
Networking requirements continue to grow every year. Modern networks are expected to deliver high speed, low latency, and high scalability. Another common requirement is secure isolation of network segments. Virtualization in datacenters also increases demands on the physical network infrastructure, and traditional networking approaches struggle to keep up, leading to potential network issues.
Network virtualization is used to abstract from the underlying physical networks and create scalable and logical networks. Network virtualization works similarly to virtualization of computing resources (such as processor, memory, and storage), which makes it possible to work with these resources on an abstracted layer.
This blog post covers:
What is VXLAN?
Virtual Extensible Local Area Network (VXLAN) is an overlay network technology. It is an encapsulation protocol that provides tunneling of layer 2 (L2) connections over an underlying layer 3 (L3) network (below you can find a table with seven OSI layers). The overlay network is a network created on top of any existing network. The underlay network is the physical infrastructure used for an existing network above which the overlay network is built.
The components of the underlay physical network include physical hardware, cables, and network protocols. Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF) are widely used protocols for routing on L3. Common examples of overlay networks are different types of virtual private networks (VPNs), IPSec tunnels, and peer-to-peer networks.
VXLAN is defined by the RFC 7348 standard of the Internet Engineering Task Force (IETF). The standardized specification of the VXLAN protocol was developed in collaboration between Cisco, VMware, and Arista, though the standard is not vendor-locked. VXLAN is supported by solutions like VMware’s virtualization software and hardware devices such as routers from various vendors.
VXLAN allows you to create highly scalable logical networks with the support of multi-tenant broadcast domains and span physical network boundaries. These logical networks are overlay networks. When you decouple the virtual network from a physical network, you simplify the management of large networks, despite the complex initial configuration. When VXLAN is used, you can re-design the overlay network without needing to reconfigure the underlay (physical) network. It is possible to use two or more underlay L3 networks to deploy a virtual overlay L2 network domain. The Leaf-Spine network topology is a good solution for the underlay network to configure VXLAN overlay networks in large datacenters.
Where Can VXLAN Be Used?
The most common use of VXLAN is for creating virtual networks over existing physical and logical network infrastructure when deploying a software-defined datacenter. Abstraction from the underlying physical infrastructure is done for virtualization in software-defined datacenters (SDDC). VXLAN and VMware virtualization solutions allow you to configure a fully virtualized datacenter, in which networks and computing resources are virtualized. The two software products for this purpose are VMware vSphere and VMware NSX. There are two editions of the VMware network virtualization solution: NSX-V and NSX-T.
With VXLAN, virtual machines (VMs) running in VMware vSphere can connect to the needed logical network and communicate with each other even if they are located on different ESXi hosts in different clusters or even in different datacenters. VXLAN logical networks are abstracted from the underlying physical networks, and VMs are abstracted from the underlying hardware.
Without VXLAN, there are higher demands on operating with Media Access Control (MAC) addresses on physical network equipment in datacenters where VMs are running and connected to the network. Many modern datacenters (including datacenters that have virtualization servers) use the leaf-spine network topology and the top-of-rack (ToR) connection scheme. When VMs use a physical network, even with virtual network (VLAN) isolation of network segments on the second layer, the ToR switches (to which rack servers are connected) must operate with MAC addresses of physical network devices and VM network adapters to provide the L2 connectivity (instead of learning one MAC address per link). MAC address tables become too large, which causes switch overload and significantly higher capacity demands of MAC address tables compared to non-virtualized environments. When table overflow happens, a switch cannot learn new MAC addresses, and network issues occur.
Traditional VLAN, spanning tree protocol (STP), and Equal-Cost Multipath (ECMP) cannot perfectly resolve all network issues in a virtualized datacenter. Using overlay networks with VXLAN helps resolve this issue. VM MAC addresses operate only in a virtual overlay network (VXLAN network) and are not sent to physical switches of an underlay network. Moreover, VXLAN networks that are used for network isolation of L2 domains and in multi-tenant environments provide higher limits than VLANs. Let's compare VXLAN vs VLAN to see the main differences between the two.
VXLAN vs VLAN Comparison
The main difference between these network protocols is that VLAN uses a layer 2 underlay network for frame encapsulation while VXLAN uses layer 3 for this purpose. The maximum number of overlay networks is higher for VXLAN.
VLAN is documented in the IEEE 802.1Q standard. The maximum number of VLANs supported is 4094 due to the 12-bit segment ID: 2^12 = 4096 possible VLAN IDs (0–4095), of which two (0 and 4095) are reserved. These days, 4094 is not enough for large cloud service providers. When using VLAN tagging, the size of an Ethernet frame is increased from 1518 to 1522 bytes. When using VLAN, networks are logically isolated on L2 by using 802.1Q tags. Configuration of physical network equipment is performed for network segmentation.
VXLAN is an extended analog of VLAN. Some of the main differences between VLAN and VXLAN include:
- The maximum number of virtual networks supported by VXLAN is more than 16 million (2^24= 16,777,216) due to the 24-bit length of the network identifier.
- VXLAN and VLAN use different encapsulation techniques. VXLAN doesn’t require trunking, unlike VLAN, and, STP is not required. It’s not necessary to use VLAN tags when VXLAN network identifiers are used.
- Reconfiguring physical network equipment is not required for a VXLAN configuration.
- Managing large L2 networks becomes difficult in large distributed physical infrastructures. Managing L3 networks is more convenient. VXLAN overlay networks that work over existing L3 networks allow administrators to avoid the usual disadvantages of traditional L2 networks when L2 networks are virtualized by using VXLAN and are not dependent on the physical boundaries of real networks.
Let’s recall the 7-layer OSI model and explore the working principle of VXLAN networks in the next section of this blog post.
The 7-layer Open System Interconnection (OSI) model:
|Layer|Name|Protocol Data Unit|Examples of protocols|
|---|---|---|---|
|7|Application|Data|HTTP, FTP, SSH, DNS|
|6|Presentation|Data|TLS/SSL, MIME|
|5|Session|Data|Various APIs, sockets|
|4|Transport|Segment, Datagram|TCP, UDP|
|3|Network|Packet|IP, IPSec, ICMP, IGMP|
|2|Data link|Frame|Ethernet, PPP, FDDI|
|1|Physical|Bits|Wires, Fiber, Wireless|
How Does VXLAN Work?
VXLAN encapsulates inner L2 Ethernet frames into L3 IP packets by using UDP datagrams and transmits them over an existing IP network. The VXLAN encapsulation type is known as MAC-in-UDP, which is a more precise term for the technology.
Why is UDP used? Why isn’t encapsulation of VXLAN frames done directly into outer IP packets? L3 networks are convenient for administration, and, as we mentioned earlier, the L3 network is the underlay network for the VXLAN network (which is the overlay network).
The VXLAN header, which is 8 bytes in length, is added to the original Ethernet frame (the inner frame). This VXLAN header is needed to allow a switch on the other side to identify the VXLAN Network Identifier (VNI) to which the frame belongs. Most of us probably would like to package the original frame with the VXLAN header into an IP packet, similar to the Generic Routing Encapsulation (GRE) protocol that is the L3 tunneling protocol.
There is a Protocol field in the IP header (see image below) used to define the data of the higher layer protocol (see the table with the OSI model above) that is packaged into the current IP packet. GRE has protocol number 47, which is defined in the Protocol field of the outer IP packet. VXLAN doesn’t have any associated protocol number, and such packaging directly to an outer IP packet would cause issues. For this reason, VXLAN is packaged by using UDP and, after that, is encapsulated into IP packets. GPRS Tunneling Protocol (GTP) uses a similar approach. The VXLAN UDP port number is 4789. This VXLAN port number should be used as the destination UDP port by default.
You may be thinking: TCP is more reliable. Why is UDP used, not TCP? TCP has a mechanism for checking whether data was received and transmitted successfully without loss. If the data was lost, the data is sent again. UDP doesn’t have these mechanisms. If data is lost due to connection issues, this data is never resent. UDP doesn’t use sessions and timeouts like TCP.
If we used TCP over TCP and packets were lost in the underlay session, they would also be lost in the overlay session. Packet retransmission would then be initiated in both the underlay and overlay TCP sessions, which causes network performance degradation. The fact that UDP doesn't initiate a point-to-point (P2P) session is an advantage in the case of VXLAN encapsulation. Note that point-to-multipoint (P2MP) sessions are not available for TCP connections.
VNI or VNID is the VXLAN network identifier. The 24-bit VXLAN network identifier (also called a segment ID) is used, and it defines the maximum supported number of VXLAN networks.
The VXLAN Tunnel Endpoint (VTEP) is an object responsible for encapsulation and decapsulation of L2 frames. VTEP is the analog of a Provider Edge (PE) Router, which is a node for service aggregation. The VTEP can be implemented as a hardware gateway or virtualized solution like VMware NSX (the software VTEP). VXLAN tunnels begin and end on VXLAN Tunnel Endpoints.
VMs connected to the same VXLAN segment can communicate with each other. If host 1 (VM1) is located behind VTEP A and host 2 (VM2) is located behind VTEP B, both hosts (VMs) must have a network interface connected to the same VNI (similar to how hosts must use the same VLAN ID in their network configuration when using VLAN).
VXLAN Frame Encapsulation
Now it’s time to explore the structure of a VXLAN frame encapsulation in detail. In the image below, you see the structure of a VXLAN encapsulated frame. The outer Ethernet header, outer IP header, UDP header, VXLAN header, and inner Ethernet frame used in a VXLAN network are shown.
Outer Ethernet (MAC) header
- Outer Destination MAC is the MAC address of the destination VTEP if that VTEP is on the local segment, or the MAC address of the next-hop router if the VTEP is located behind a router.
- Outer source MAC is the MAC address of a source VTEP.
- VLAN Type (optional) is an optional field; the value 0x8100 indicates that the frame is VLAN tagged.
- Outer 802.1 VLAN Tag is the optional field to define a VLAN tag (not required for VXLAN networks).
- Ether type defines the type of packet carried by this frame; 0x0800 indicates an IPv4 packet.
Outer IP header
- IP Header misc. data contains version, header length, type of service, and other data.
- IP protocol. This field defines the protocol carried in the packet's payload; the value 0x11 indicates UDP.
- Header check sum is used to ensure data integrity for the IP header only.
- Outer source IP is the IP address of a source VTEP.
- Outer destination IP is the IP address of a target VTEP.
UDP header
- UDP source port is a port set by the VTEP that is transmitting data.
- UDP destination port is the port assigned by IANA for VXLAN (4789).
- UDP length is the length of a UDP header plus UDP data.
- UDP checksum should be set to 0x0000 for VXLAN. In this case, the receiving VTEP avoids checksum verification and avoids dropping a frame in case of an incorrect checksum (if a frame is dropped, packaged data is not decapsulated).
VXLAN header
- VXLAN flags is an 8-bit field. The I flag is set to 1; the other 7 bits are currently reserved and must be set to 0.
- Reserved – reserved fields that are not used yet and are set to 0.
- VNI is the 24-bit field to define the VNI.
- Frame Check Sequence (FCS) is the 4-byte field to detect and control errors.
- Let’s calculate the overhead when using VXLAN:
8 bytes (VXLAN header) + 8 bytes (UDP header) + 20 bytes (IPv4 header) + 14 bytes (outer L2 header) = 50 bytes (if VLAN tagging is not used in the inner frames that are encapsulated). If clients use VLAN tagging, 4 bytes must be added, and the result is 54 bytes.
- Let’s calculate the entire size of outer frames in the physical network:
1514 (inner frame) + 4 (inner VLAN tag) + 50 (VXLAN) + 4 (VXLAN Transport VLAN Tag) = 1572 bytes
- If IPv6 is used, the IP header size is increased by 20 bytes:
1514 (inner frame) + 4 (inner VLAN tag) + 70 (IPv6 VXLAN) + 4 (VXLAN Transport VLAN Tag) = 1592 bytes
- An extra 8 bytes can be optionally added for IPv6. In this case, the outer frame size is 1600 bytes.
- You can change Maximum Transmission Unit (MTU) values in switch configuration accordingly (for example by 50, 54, 70, or 74 bytes). Support of Jumbo frames (frames with a size higher than the standard 1518 bytes) is required in this case.
It is recommended that you increase the frame size when using virtual VXLAN networks in a real network. VMware recommends that you set MTU to 1600 bytes or more on distributed virtual switches.
Note: The Ethernet frame size and MTU are important characteristics of the frame. MTU points to the maximum size of a payload encapsulated into the Ethernet frame (the IP packet size, which has a default value of 1500 bytes when Jumbo frames are not used). The Ethernet frame size consists of the payload size, Ethernet header size, and the FCS.
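To make the header layout above concrete, here is a minimal Java sketch (illustrative only, not taken from any VTEP implementation) that packs the 8-byte VXLAN header for a given VNI: one flags byte with the I bit set, three reserved bytes, the 24-bit VNI, and a final reserved byte.

```java
import java.nio.ByteBuffer;

public class VxlanHeader {
    // Packs the 8-byte VXLAN header: flags (I bit set), 24 reserved bits,
    // the 24-bit VNI, and a trailing reserved byte.
    static byte[] build(int vni) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.put((byte) 0x08);                  // flags: I = 1, other bits 0
        buf.put(new byte[3]);                  // reserved (24 bits)
        buf.put((byte) ((vni >> 16) & 0xFF));  // VNI, high byte
        buf.put((byte) ((vni >> 8) & 0xFF));   // VNI, middle byte
        buf.put((byte) (vni & 0xFF));          // VNI, low byte
        buf.put((byte) 0x00);                  // reserved (8 bits)
        return buf.array();
    }

    public static void main(String[] args) {
        for (byte b : build(121)) {            // VNI 121, as in the example below
            System.out.printf("%02x ", b & 0xFF); // prints: 08 00 00 00 00 00 79 00
        }
    }
}
```

In a real VTEP, this header would be prepended to the original Ethernet frame and then wrapped in the UDP (destination port 4789), outer IP, and outer Ethernet headers described above.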
Example of Data Transferring in VXLAN
Let’s consider an example of transferring data in a network with VMware VXLAN to understand the VXLAN configuration and working principle better.
Imagine that we have two ESXi hosts in a VMware vSphere environment with VMware NSX configured. VM1 is running on the first ESXi host, and VM2 is running on the second ESXi host. The virtual network adapters of both VMs are connected to the same VXLAN network with VNI 121. The ESXi hosts are connected to different subnets of the physical network.
VM1 wants to send a packet to VM2. Let’s explore what happens in this situation.
- VM1 sends the ARP packet to request the MAC address of the host with the IP address 192.168.5.22.
- VTEP1, located on the first ESXi host, encapsulates the ARP packet into the multicast packet associated with the virtual network with VNI 121.
- Other VTEPs receiving the multicast packet add the association VTEP1-VM1 to their VXLAN tables.
- VTEP2 receives the packet, decapsulates this packet, and sends a broadcast on port groups of virtual switches that are associated with VNI 121 and the appropriate VXLAN network.
- VM2, located on one of these port groups, receives the ARP packet and sends a reply with its MAC address (MAC address of VM2).
- VTEP2, on the second ESXi host, creates a unicast packet, encapsulates the ARP reply of the VM2 into this packet, and sends the packet by using IP routing back to VTEP1.
- VTEP1 decapsulates the received packet and passes the decapsulated data to VM1.
Now VM1 knows the MAC address of VM2 and can send packets to VM2, as displayed in the scheme above for VM-to-VM communication.
- VM1 sends the IP packet from its IP address (192.168.5.21) to the IP address of VM2 (192.168.5.22).
- VTEP1 encapsulates this packet and adds the headers:
- A VXLAN header with VNI=121
- A standard UDP header with the VXLAN port (UDP 4789)
- A standard IP header that contains the destination IP address of VTEP and the 0x011 value to define the UDP protocol used for encapsulation
- A standard MAC header with the MAC address of the next L2 device (the next hop). In this example, this is the router interface that has the MAC address 00:10:11:AE:33:A1. Routing is performed by this router to transfer packets from VTEP1 to VTEP2.
- VTEP2 receives the packet because the MAC address of VTEP2 is defined as the destination address.
- VTEP2 decapsulates the packet and detects that there is VXLAN data (VTEP2 identifies the UDP port 4789 and then identifies the carried VXLAN headers).
- VTEP verifies that VM2 as the target is allowed to receive frames from VNI 121 and is connected to the correct port group.
- After decapsulation, the inner IP packet is transmitted to the virtual NIC of VM2 connected to the port group with VNI 121.
- VM2 receives the inner packet and handles this packet as any usual IP packet.
- Packets are transferred from VM2 to VM1 in the same way.
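The address learning in the walkthrough above amounts to each VTEP maintaining a per-VNI table that maps remote VM MAC addresses to the IP address of the VTEP behind which they reside. A simplified sketch of that logic is shown below (class and method names are illustrative, not taken from any particular implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class VtepTable {
    // Maps VNI -> (remote VM MAC -> remote VTEP IP).
    private final Map<Integer, Map<String, String>> table = new HashMap<>();

    // Called when a frame arrives from a remote VTEP: remember where the source MAC lives.
    void learn(int vni, String vmMac, String vtepIp) {
        table.computeIfAbsent(vni, k -> new HashMap<>()).put(vmMac, vtepIp);
    }

    // Called when a local VM sends a frame: find the VTEP to tunnel it to,
    // or null if unknown (in which case the frame is flooded to the multicast group).
    String lookup(int vni, String vmMac) {
        return table.getOrDefault(vni, Map.of()).get(vmMac);
    }
}
```

When the VTEPs "add the association VTEP1-VM1 to their VXLAN tables" in step 3 of the ARP exchange, they are performing the equivalent of learn(); when VTEP1 later encapsulates the unicast packet toward VTEP2, it performs the equivalent of lookup() to find the destination VTEP IP.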
VXLAN overlay networks support unicast, broadcast, and multicast communication modes in the network.
- Unicast communication is used to transfer data between two hosts in the network. Remote VTEPs are usually defined statically.
- Broadcast communication is the mode in which one host sends data to all hosts in the network.
- Multicast communication is another one-to-many communication type. Data is sent to selected hosts in the network, not to all hosts. A common example of using multicast is online video streaming. Internet Group Management Protocol (IGMP) is used for multicast communication. IGMP snooping on L2 switches and IGMP Querier on routers (L3) must be enabled.
Note that the ability to use VXLAN for multicast traffic results from the MAC-in-UDP encapsulation method (explained above), which allows establishing P2MP connections. In multicast mode, remote VTEPs can be found automatically without the need to manually define all neighbors. You can define a multicast group associated with a VNI, then VTEP starts to listen to this group. The behavior of other VTEPs is similar, and they start to listen to the group if VNIs are set correctly.
VMware VXLAN Components
VMware vSphere, with ESXi hosts, vCenter, and NSX, is the software suite needed to configure network virtualization with the support of VXLAN. Let’s explain VMware VXLAN components and their role in deploying VXLAN networks.
NSX-V is a solution to build virtual networks in a datacenter with VMware vSphere.
VXLAN encapsulation is performed between a VM's virtual network interface controller (NIC) and the logical port of a distributed vSwitch, which provides transparency for the underlying L3 network and VMs.
NSX Edge services gateway appliance acts as a gateway between VXLAN hosts (VMs) and non-VXLAN hosts. Examples of non-VXLAN hosts are an internet router, a physical server connected to a physical network, etc. The edge gateway can translate VXLAN IDs of VXLAN network segments to allow non-VXLAN hosts to communicate with hosts or VMs in VXLAN networks.
NSX Manager must be installed on an ESXi host managed by vCenter in the vSphere environment. NSX Manager is a virtual appliance used to configure and manage VMware NSX components including controllers, edge services gateways, and logical switches. NSX Manager provides a graphical user interface (a web interface) for a better user experience. After installing NSX Manager, a plugin is injected into VMware vSphere Client. It is recommended that you deploy NSX Manager in a cluster with HA and DRS features enabled. One instance of NSX Manager is used to serve a single vCenter environment.
NSX Controller, called a central control plane, is a distributed state management system to control overlay transport tunnels and virtual networks, providing routing and logical switching capabilities. NSX Controller is required to configure VXLAN networks and must be deployed as a cluster of highly available virtual appliances.
VXLAN VIB packages must be installed on ESXi hosts to support VXLAN capabilities including VTEP functionality.
vmknic virtual adapter carries control traffic, responses to DHCP requests, ARP requests, and multicast join requests. The unique IP address is used for VTEP on each ESXi host to carry VXLAN traffic in created host-to-host tunnels.
VXLAN port groups on virtual switches are configured to define how input and output VXLAN traffic is transferred through VTEP and physical network adapters of ESXi hosts.
VTEP configuration on each ESXi host is managed in vCloud Networking and Security Manager, which is a central place for managing virtualized networks.
It is recommended that you plan the NIC teaming policy, failover settings, and load balancing on a distributed virtual switch in VMware vSphere when you deploy VMware NSX with VMware VXLAN support.
Summary of VXLAN Advantages and Disadvantages
With the working principles of VXLAN configuration and VMware VXLAN implementation covered, let’s look at the advantages and disadvantages of VXLAN.
- Highly scalable networks: a high number of L2 domains that can be stretched between multiple datacenters.
- Support of multicast, multi-tenancy, and network segmentation.
- Flexibility: STP is not needed. L3 networks are used as the underlying network.
- No overload of physical networks at layer 2: MAC address table overflow on physical switches is avoided when connecting VMs to the networks.
- Centralized network management. Convenient management after deployment and configuration.
- Deployment and initial VXLAN configuration is complicated.
- It may be difficult to scale a centralized controller used for managing overlay networks.
- There is an overhead for headers due to encapsulation techniques.
- The underlay network must support multicast for broadcast, unknown-unicast, and multicast (BUM) traffic.
VXLAN is a network encapsulation protocol that is adopted for virtualization environments where a high number of VMs must be connected to a network. VXLAN allows you to build a virtual L2 network over an existing L3 physical network by using the MAC-in-UDP encapsulation technique. VXLAN network virtualization is the next step after the virtualization of computing resources to deploy a software-defined datacenter. VMware NSX VXLAN support for VMware network virtualization coupled with VMware vSphere is the right solution for this purpose. This combination is widely used by cloud service providers, especially in large datacenters.
If you use VMware vSphere VMs in your server room or datacenter, opt for comprehensive VMware ESX backup solutions like NAKIVO Backup & Replication. NAKIVO's solution offers powerful features, including incremental, app-aware backups. As you recall, VXLAN networks are adopted for use in multi-tenant environments that require network isolation. NAKIVO Backup & Replication can be installed in multi-tenant mode to offer backup as a service and disaster recovery as a service. MSP clients can then back up their data securely without impacting other clients. Download the Free Edition of NAKIVO Backup & Replication from the official website and try the solution. | <urn:uuid:a5607ba0-8c86-405a-beb9-f463d105ed05> | CC-MAIN-2022-40 | https://www.nakivo.com/blog/vxlan-vmware-basics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00777.warc.gz | en | 0.875195 | 5,497 | 3.609375 | 4 |
Nour Rteil | Feb. 8, 2021, 1:09 p.m.
Power Usage Effectiveness (PUE) is a metric widely used by data centres as a proxy for energy efficiency. Though PUE can indicate the power and cooling infrastructure overhead compared to IT, it does not provide a reliable overall picture of efficiency, as it fails to capture IT efficiency and energy proportionality. This report by Uptime Institute perfectly explains the downside of relying solely on this metric as an efficiency indicator. Instead of focusing on a single metric, it is important to assess the environmental sustainability of a data centre holistically. The Climate Neutral Data Centre Pact in the EU highlights the importance of Governance and these 5 key areas when measuring the sustainability of a data centre:
• Area 1: Energy efficiency, which can be divided into
- Efficiencies in the data centre’s infrastructure (power and cooling)
- Efficiencies in the data centre’s servers
• Area 2: Clean energy
• Area 3: Water efficiency
• Area 4: Circular economy
• Area 5: Circular energy system
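As a reminder of what PUE does and does not capture, the calculation itself is trivial. The figures below are made-up illustration values, not measurements from any real site (Python):

total_facility_kwh = 1_200_000   # everything the site draws: IT plus cooling and power losses
it_equipment_kwh = 1_000_000     # energy delivered to servers, storage and network gear
pue = total_facility_kwh / it_equipment_kwh
print(round(pue, 2))             # 1.2, i.e. a 20% infrastructure overhead

Note that the result says nothing about how much useful work the IT load actually performs per watt, which is exactly the blind spot discussed above.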
Some Hyperscale data centres are making remarkable commitments to sustainable objectives. Here are some of the recent targets and advancements grouped by the sustainability areas mentioned above.
• Efficiencies in the data centre’s infrastructure (power and cooling)
Hyperscale data centres are noticeably minimizing the inefficiencies in their power and cooling facilities. Today, Google’s average PUE is 1.11 across all their large-scale data centres. Google was able to achieve this low overhead by following these 5 best practices: continuously measuring PUE, effectively managing the air flow, adjusting the thermostat, utilising free cooling, and optimizing power distribution (UPSs and batteries).
• Efficiencies in the data centre’s servers
The results of a study by 451 Research show that AWS’s infrastructure is 3.6 times more energy efficient than the average U.S. enterprise data centre. AWS claims that more than two-thirds of this advantage is attributed to using more energy efficient servers and much higher server utilisation. Google also stated that they consume 50% less energy than average data centres and this is attributed to building their custom servers that produce more operations per Watt. Facebook has also made considerable advancements in server efficiency by using custom built Open Compute Project (OCP) servers.
• Clean energy
Google has a long record of using clean energy. In 2007, Google became carbon neutral and matched 100% of its electricity consumption with renewable energy. Recently, Google made a commitment to operate on Carbon-free energy, 24/7, in all regions, by 2030. Google isn’t the only hyperscaler to make pledges in this area; Microsoft has also committed to achieve 100% renewable energy by 2025, and AWS is working on achieving 100% renewable energy for their global infrastructure, as stated in their timeline here.
• Water efficiency
AWS has multiple initiatives to improve their water use efficiency and reduce the use of potable water for cooling data centres. Moreover, Microsoft is committing to be water-positive by 2030. However, there is little information provided by Google on this matter.
• Circular economy
Google has been working on reducing e-waste by following a circular economy approach. By 2017, 18% of Google’s newly deployed servers were remanufactured, and 11% of the components used for machine upgrades were refurbished. In 2017, Google resold over 2.1 million units into the secondary market and diverted 91% of waste from landfills, as reported here. They are committed to achieving zero waste by reducing the amount of waste generated and finding better disposal options. Microsoft has also committed to be zero-waste by 2030.
• Circular energy system
New heat reuse projects and initiatives are being developed by some data centres, where applicable. AWS is supporting Ireland in meeting its 2030 renewable energy targets through the new District Heating Scheme in Tallaght, South Dublin. Facebook also showcased a new heat-recovery system in its Odense, Denmark data centre, as mentioned in their 2019 sustainability report.
KPIs targeting inefficiencies in the data centre’s infrastructure are being widely adopted by the industry, whilst KPIs targeting other areas are still lagging. The low-hanging fruit has been picked in terms of infrastructure efficiency as measured through PUE, due to the law of diminishing returns, as explained here. It is crucial for the industry today to consider investing in other areas to improve their overall sustainability. Targeting inefficiencies in servers, for example, will improve the overall data centre’s energy efficiency tremendously, especially since servers are the main energy consumers in data centres. Data centres are already beginning to target inefficiencies in this area by powering off idle servers and consolidating servers to increase their utilisation rate, at the expense of reducing availability and redundancy. But there are more ways to target server inefficiencies, like labelling and upgrading inefficient servers that execute low operations per Watt and optimizing the software that these servers are running and operating on.
Interact can help data centres focus on the inefficiencies of their servers by:
• Analysing and identifying the least efficient servers in their site based on operations per Watt. This figure is estimated based on the server’s hardware configuration including RAM, CPU specs, and server’s release year.
• Suggesting hardware refresh scenarios that will reduce cost while improving the site’s overall energy consumption and reducing its Carbon footprint, taking into consideration scope 3 Carbon emissions. This promotes server reuse, reduces e-waste, and eliminates the embodied environmental cost of manufacturing and transporting new servers.
To save time and deliver better outcomes in emergencies, agencies must use solutions that allow them to better coordinate incident response and share real-time data and communications among multiple responder teams or agencies.
Last year was unlike anything many in government have ever experienced. From the pandemic and civil unrest to a marked increase in natural disasters, local and state governments had more than their fill of crises to address.
Now, as they face budget cuts heading into a new year, local governments are looking for ways to keep their constituents safe, while also making the most of their resources. To do this effectively, agencies should invest in solutions that encourage collaboration across different departments and stakeholders and ensure resources and information are shared.
The problem of communication silos
This past year, more than most, has shown that public safety, emergency management and health departments must all be able to collaborate to handle emergencies, disseminate the right information to residents and ensure the safety of their communities. This collaboration has not only been important for sharing data between agencies, but also for keeping residents updated on the state of the coronavirus, storm or civil unrest in their towns and ensuring that everyone is on the same page.
However, this information sharing is not always taking place. In fact, when it comes to everyday emergencies, such as fires, medical incidents or acts of violence, information and communications are often siloed inside departments. This can lead to slower response times, a waste of resources and delayed or inconsistent communications among responding agencies.
A prime example of the consequences of not having collaboration tools in place is the 2018 Parkland, Fla., school shooting. According to information we know now, multiple agencies responded but could not communicate with each other effectively -- or, in some cases, at all. Officials were slow to respond to the 911 calls made from the school. It took more than 20 minutes for law enforcement to access school video to see what the shooter looked like, giving him time to flee the school and pose a threat to the greater community. However, Parkland is not unique; many emergency incidents involve the same obstacles when it comes to collaborating and communicating across stakeholders.
The solution of collaborative safety ecosystems
To better prepare, not only for what’s to come during the remainder of the pandemic, but also to better manage day-to-day emergencies, public safety, emergency management and other stakeholders must turn to technology that makes collaborating streamlined and efficient and enables agencies to share resources. This will allow all stakeholders in the community to coordinate preparedness and response for both planned activities and unplanned emergencies, as well as bring order and clarity to the critical early minutes and hours of an event.
The reality is emergencies are often chaotic and fast moving. For all agencies involved, events can often unfold quickly, so the ability to understand and easily coordinate the role every stakeholder plays in a response can save time and lead to better outcomes. To do this, agencies must use solutions that allow them to better coordinate incident response and share real-time data and communications among multiple responder teams or agencies. The right technology will not only coordinate incident response with task management, activity status, reminders and reference resources, but it will also accelerate the response, allowing for those involved to return to safety quicker.
When evaluating technologies for collaboration, communities should think through the capabilities they need and ask how these can be shared across departments. For example, does the technology integrate with existing communications tools and templates and allow multiple departments access to those tools? Can notifications to key stakeholders be automated or quickly updated on the fly?
Additionally, public safety departments should think of the compliance and audit requirements needed after an emergency has been resolved. Does the tool record all actions on a timeline for audits and after-action reporting? Are there task lists and protocols outlined in the tool that everyone can follow?
From regulatory compliance and daily COVID-19 protocols to severe weather response, the ability to immediately notify stakeholders, establish clear responsibilities and deliver direction for decision-making is key to providing and restoring a safe and secure environment. For example, think about if a fire occurs at a school. Not only do fire, police and emergency medical teams need to be aware, but so does the local department of education. With COVID restrictions in place, it’s likely the department of health may need to be notified as well. These different players will each have tasks they need accomplish or information to communicate, and having one collaboration platform can ensure that everyone knows that parents have been contacted when all students are safe and when the all-clear has been given so that the school day can resume.
While our lives have been dominated by the pandemic this past year, it’s important to remember that there are many other types of emergencies that require collaboration in the minutes and hours immediately after an incident has been triggered. To better prepare for these events, departments should work together to share information and resources to ensure the best response and protection for their communities. Technologies like tactical incident management that can guide actions, support on-the-fly changes and escalate past-due tasks to the appropriate personnel will become essential in ensuring safe, effective emergency response in 2021 and beyond.
NEXT STORY: All-hands efforts to marshal vaccine data | <urn:uuid:7daf7b6d-639a-456b-83cc-3f3c8db44987> | CC-MAIN-2022-40 | https://gcn.com/data-analytics/2021/02/creating-a-culture-of-collaboration-in-public-safety/315891/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00177.warc.gz | en | 0.957801 | 1,061 | 2.84375 | 3 |
Defense Advanced Research Projects Agency wants a COVID sensor that can detect the virus in the air accurately and quickly enough to stop its spread in office buildings or schools.
With COVID-19 primarily spreading through the air in enclosed spaces, the Defense Advanced Research Projects Agency wants a way to identify SARS-CoV-2 signatures indoors and use that data to build a prototype sensor that can accurately detect the virus in the air quickly enough to stop its spread in office buildings or schools.
The small and variable characteristics of the virus combined with complex indoor environments make using a single detection and measurement technique extraordinarily difficult. Current COVID detection requires capturing a sample and sending it off to a lab for genetic analysis – a process that can take days. Optical environmental sensors, which can offer fast detection times, are not always able to discriminate between benign and pathogenic material.
“Current methods are not suitable for room-sized, indoor environmental monitoring,” DARPA said in a presolicitation. They “lack practical combinations of sensitivity, specificity (precision and recall), acceptable false positive rates, and speed and/or have substantial barriers to scaling due to cost or size, weight, and/or power requirements.”
The SenSARS program aims to overcome these existing challenges to environmental monitoring. DARPA suggested that recent developments in radiofrequency vibrometry, sensors, mass spectrometric techniques, immunosensing, electrochemical detection and machine-learning-powered signal analysis might help detect low concentrations of the virus.
DARPA is primarily interested in three use cases: detecting the virus in a 50-cubic-meter office, similar detection in a 300-cubic-meter conference room or classroom, and central monitoring of HVAC systems in buildings up to 10 stories. Solutions must have size, weight and power requirements suitable to the use case, and they must also be inexpensive and easy to use and maintain.
Of secondary importance, DARPA said, is the ability to detect other pathogens within a month of discovery and the capacity to save samples for additional analysis.
The two-phase program is expected to produce three working prototype sensors in 18 months.
Proposals are due Dec. 1. | <urn:uuid:04df9125-d6e2-4418-b105-7fd7a1add65b> | CC-MAIN-2022-40 | https://gcn.com/emerging-tech/2020/11/darpas-plan-for-an-airborne-covid-detector/315438/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00177.warc.gz | en | 0.923966 | 451 | 3.078125 | 3 |
In addition to simple color indexes, all drawing routines can also take a color style, which is a special string value that allows for more complex fills and shapes. The valid types are:
"styled" - Use the style specified with GD setStyle(). A style is a sequence of colors to be used when drawing lines. It is only valid for line-drawing routines, and is used to make dashed lines.
"brushed" - Use the brush specified with GD setBrush(). A brush is another GD image which is drawn instead of a regular pixel. Using transparent colors, it is possible to create a brush of any size.
"styledBrushed" - A combination of both "styled" and "brushed". The brush is used, but is only drawn when non-transparent pixels are encountered in the style.
"tiled" - Use the tile specified with GD setTile(). This style can only be used with fill routines. It uses the current tile, which can be any GD image, and fills the region with that tile, laying the images side-by-side sequentially.
GD object instance methods | <urn:uuid:ca92ee66-4f2e-4a05-a899-549b948a96c9> | CC-MAIN-2022-40 | http://brent-noorda.com/nombas/us/devspace/manual/c/html/TH_1021.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00177.warc.gz | en | 0.801899 | 239 | 2.71875 | 3 |
"What is caching data? and "how does a cache work? are common questions in computing. Caching Data is a process that stores multiple copies of data or files in a temporary storage location—or cache—so they can be accessed faster. It saves data for software applications, servers, and web browsers, which ensures users need not download information every time they access a website or application to speed up site loading.
Cached data typically includes multimedia such as images, files, and scripts, which are automatically stored on a device the first time a user opens an application or visits a website. This stored data is used to quickly load the application or website’s information every time the user subsequently opens or visits it. Caching is also a good answer to the von Neumann bottleneck (the limited transfer rate between processor and memory), because it keeps frequently used data where it can be reached faster.
How Does Caching Work?
Cached data works by storing data for re-access in a device’s memory. The data is stored high up in a computer’s memory, just below the central processing unit (CPU). It is stored in a few layers, with the primary cache level built into a device’s microprocessor chip, followed by two more secondary levels that feed the primary level. This data is kept until its time to live (TTL), which indicates how long the content should be cached, expires, or until the device’s disk or hard drive cache fills up.
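The TTL idea can be illustrated with a minimal in-memory cache in Python (purely a sketch; real browser and CPU caches are far more sophisticated):

import time

cache = {}  # key -> (value, expiry timestamp)

def put(key, value, ttl_seconds):
    cache[key] = (value, time.time() + ttl_seconds)

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None              # never cached
    value, expires_at = entry
    if time.time() > expires_at:
        del cache[key]           # TTL expired, drop the stale copy
        return None
    return value                 # served from the cache, no re-download needed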
Data is typically cached in two ways, through browser or memory caching or through CDNs.
- Browser and memory caching: Memory caches store data locally on the computer that an application or browser runs on. When the browser is active, the resources it retrieves are stored in its random access memory (RAM) or its hard drive. The next time the resources are needed to load a webpage, the browser pulls them from the cache rather than a remote server, which makes it quicker to retrieve resources and load the page.
- CDNs: Caching is one job of a CDN, which stores data in geographically distributed locations to reduce load times, handle vast amounts of traffic, and protect against cyberattacks. Browser requests get routed to a local CDN, which shortens the distance that response data travels and transfers resources faster.
The Benefits of Caching: How Do Caches Work in a Browser?
When a user visits a new website, their browser needs to download data to load and display the content on the page. To speed up this process for a user's future visit, browsers cache the content on the page and save a copy of it on the device hard drive. As a result, the next time the user goes to that website, the content is already stored on their device and the page will load faster.
Cache memory offers extremely low latency, which means it can be accessed quickly. As a result, it speeds up loading the second time a user accesses an application or website. However, a cache cannot hold much data, so it only stores small files like images and web text.
Data can be cached in many ways, but it is typically reliant on the website’s owner to set a "header," which tells a device that data can be cached and for how long. This instructs a user’s browser what information to download and where to store the temporary files. The user can then create policies and preferences around what data they cache and even clear their whole cache to reduce the amount of data stored on their device.
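For example, a web server can send an HTTP response header such as the following (the value is purely illustrative) to tell the browser that a resource may be cached and reused, with max-age expressed in seconds (86400 is one day):

Cache-Control: public, max-age=86400

When the header instead specifies no-store, the browser is told not to keep a local copy of the response at all.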
What is Caching Data Useful For?
Caching data is important because it helps speed up application performance and increase efficiency. It stores data locally, which means browsers and websites will load faster because access elements such as homepage images have previously been downloaded.
Internet users will typically leave a website that loads too slowly, which makes caching vital for website owners to improve user experience and encourage people to use their site. It is also important for online tools like Google Docs, which enable users to access and save their documents online.
However, there are downsides to caching data. Caching can improve browser performance, but it also risks users’ confidential or sensitive information being exposed to cyber criminals. Caching data could result in authentication data, browsing history, or session tokens being vulnerable, especially if a browser is left open or if another user has access to it.
How to Clear Cached Data
A cache can be cleared in different ways depending on the device being used. Cached data can be cleared across all web browsers using the below processes:
- Apple Safari: Open Safari and select the "History" option, then "Clear History" to remove all the data saved on the browser. It will then load a drop-down menu that enables a choice of data from the last hour, last day, last two days, or the user’s entire history, which will delete their entire browsing history, all their cookies, and their entire browser cache. Safari users can also select individual sites on their history, right-click them, and delete.
- Google Chrome: Open Chrome and select the Settings icon, represented by the three vertical dots in the top-right corner of the browser. Select the "More tools" option, then "Clear browsing data." On the next page, select the checkboxes for cached images and files, cookies, and site and plug-in data. Then use the options to choose how much data to delete, from the past day through to "the beginning of time." With that done, select "Clear browsing data." This process works for Chrome on computers and on Android and iOS devices.
- Internet Explorer: Open Explorer and select "Tools" in the gear Icon, then select "Safety" followed by "Delete browsing history." Select the data to be cleared by ensuring all of the relevant boxes, including Cookies and Temporary Internet Files, are checked, then select "Delete." The browsing history menu can also be opened by holding Ctrl, Shift, and Delete at the same time.
- Mozilla Firefox: Open Firefox and click the Library button, then select "History" followed by "Clear Recent History." Select the time range of cached data to clear, then click the arrow positioned next to "Details" to select the information that will be cleared. Select "Clear Now" to clear the cache.
Cached data can also be cleared on mobile devices by deleting the data stored by apps. Mobile cached data can be cleared using these processes:
- Android devices: Android users can clear the cache on their device to free up storage space. Open the Settings menu, then open "Apps" or "Applications," find the application to clear the cache or data of and select "Storage." This will show the amount of storage being used by the application and provide the option to clear the data.
- Apple iOS devices: Apple users can also clear the cache on their device to delete data that eats up storage space. Open the Settings menu, then the "General" option. Within that, go to "Storage & iCloud Usage," then open "Storage" and select "Manage Storage." Select an application in the list, then go to "Documents & Data." If the app is using more than 500 MB of space, then reinstall it to clear space.
Should you Clear Your Cache?
Clearing cached data deletes information stored in the CPU cache. This can be helpful if a user is running low on storage on their device or if they have information stored for websites they no longer use.
Clearing the cache can also correct an incorrectly loading page but slow down page load times of previously visited websites. It results in every website loading as if the user has never visited it before and could delete stored data, website logins, and more, so users must be careful about what they delete before going ahead with clearing cached data.
How Fortinet Can Help
The FortiGate next-generation firewall (NGFW) can help identify and block attacks that occur on a network. This can be useful for protecting the cached data stored on users’ devices by blocking attackers from gaining access.
The Fortinet NGFW solutions update as the threat landscape evolves, which ensures that businesses are always protected against the latest attack vectors and malware strains. It also integrates with other Fortinet solutions like FortiGuard and FortiSandbox, which keeps businesses safe from known and zero-day threats.
What is cache?
A simple cache definition is a temporary storage location that stores data, files, and login details for applications and websites on a device’s memory.
What does it mean to clear your cache?
Clearing the cache is the process of a user deleting data and files stored within their cache folder.
Is it okay to delete the cache?
Yes, deleting the cache frequently can help users clear up storage space on their device. However, users must be careful about the data they delete. Clearing the cache may slow down page load times and remove important data and required website logins.
What is caching used for?
A cache stores data in a local folder on a device. This can increase application or website performance and efficiency by speeding up load times the next time a user opens or visits an application or site. | <urn:uuid:ab428a40-75dd-45e9-92b5-e595a6074846> | CC-MAIN-2022-40 | https://www.fortinet.com/kr/resources/cyberglossary/what-is-caching | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00177.warc.gz | en | 0.912452 | 1,976 | 4.3125 | 4 |
Continuing our exploration of the Internet of Things (IoT) ecosystem, we will now move from discussing short range communications to long range communications – where longer distances must be covered, from dense urban areas to isolated rural areas, and an increased number of devices can be static but also in-motion.
Low Power Wide Area Network (LPWAN) is a type of wide-area network wireless communication designed to allow long-range communications that can be used to connect low-power IoT devices onto either a private or commercial wireless network or a third-party cloud-based infrastructure.
When it comes to connecting IoT devices for long range communications, we can distinguish between two primary technologies: non-cellular and cellular.
1. Non-Cellular Networks
Long Range (LoRa) is a non-cellular, unlicensed radio technology used to deploy private/public LPWAN networks. LoRa refers to the modulation technique (physical layer), while LoRaWAN is one of several protocols used to manage network communications (network layer). Devices require a LPWAN gateway to communicate.
Sigfox is a global public wireless network available in more than 70 countries, allowing the deployment of IoT devices without the need to build complex private LPWAN infrastructures. Devices require a compatible radio module (not compatible with LoRa) and a subscription plan, but do not require a gateway to communicate.
2. Cellular Networks
With 5G networks and 5G technology becoming more and more prevalent, legacy 3G/4G networks are starting to be phased out in most countries as we speak. New mobile technologies, also referred to as Mobile IoT, have been developed to address the specific needs of IoT and offer global connectivity: NB-IoT and LTE-M.
2.1 Narrowband Internet of Things (NB-IoT)
NB-IoT is an LPWAN radio technology deployed over mobile networks, offering great indoor coverage for a high number of devices, using low-cost radio modules and ensuring long battery life. It is especially suited for areas where GSM is the standard cellular technology (Europe, Asia), where 4G/LTE coverage is poorer, and/or where only a small amount of data needs to be sent.
It is recommended for static devices and does not require the use of a gateway for them to communicate with their back-end server.
2.2 Long Term Evolution for Machines (LTE-M)
LTE-M is another LPWAN radio technology reusing existing 4G/LTE infrastructures, offering high transmission rates (up to 1 Mbps) with low latency as well as voice transmission capacity (using VoLTE). It is especially recommended for mission-critical applications where “real-time” data transfer is required, such as automation (e.g., self-driving cars). LTE-M is a better alternative for moving devices, as no data transmission is lost while in motion, as well as for roaming devices, as subscriptions can be purchased from wireless operators just as for smartphones.
In some cases, IoT devices might even use a combination of several technologies, like the BlackBerry Radar asset tracking device. It uses a 2G/3G/4G modem on one hand to connect globally, while static or in motion, anywhere in the world, and to send data back to its application server; but on the other hand, BlackBerry Radar also has an embedded LoRaWAN gateway to connect locally attached devices (e.g., door sensors, humidity/temperature sensors, etc.) and forward their data back to their application servers.
When it comes to IoT communications, there is no “one size fits all” solution. However, you can select technology that covers most of your use-cases. At times you may be operating two IoT networks to cover all your use-cases and an IoT aggregator like ISEC7 SPHERE IoT can help. Please feel free to contact us if you have any questions or would like assistance in reviewing your IoT communications options.
(C) Rémi Frédéric Keusseyan, Global Head of Training, ISEC7 Group | <urn:uuid:4a17c10f-e284-422a-8dd7-1383d54eb000> | CC-MAIN-2022-40 | https://www.isec7.com/2021/03/23/demystifying-technology-iot-communications-long-range/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00378.warc.gz | en | 0.911923 | 882 | 3.390625 | 3 |
Machine learning security
Machine learning security: a hot new topic, but the same old story for many threats. Let's explore the attack surface.
Machine learning (and artificial intelligence in general) is a hot topic, with a very diverse set of use cases. Among other things, it can be used for data mining, natural language processing or self-driving vehicles; it also has cyber security applications such as intrusion detection. In machine learning today we basically do structured deep learning. We apply the good old backpropagation technique and use artificial neural networks (ANNs), just as we did decades ago. But this time with much (much!) more processing power.
Despite the large variance of technologies and use cases, all machine learning solutions have one thing in common. Just as any software system, they may be vulnerable in various ways, and so represent a potential target for attackers. Some recent demonstrations of potentially malicious interference include researchers making themselves invisible to surveillance systems, tricking the Tesla autopilot to accelerate past the speed limit, or giving imperceptible commands to speech recognition systems used by personal AI assistants.
Things do not look good: machine learning security is becoming a critical topic. However, many experts and practitioners are not even aware of the attack techniques. Not even those that have been known to the software security community for a long time. Neither do they know about the corresponding best practices. This should change.
An essential cyber security prerequisite is: ‘Know your enemy!’. So, for starters, it’s worthwhile to take a look at what the attackers are going to target in machine learning!
It all starts with the attack surface
The Garbage In, Garbage Out (GIGO) problem is well known in the machine learning world. Since all algorithms use training data to establish and refine their behavior, bad data will result in unexpected behavior. This can happen due to the neural network overfitting or underfitting the model, or due to problems with the dataset. Biased, faulty, or ambiguous training data are of course accidental problems, and there are ways to deal with them. For instance, by using appropriate testing and validation datasets. However, an adversary feeding in such bad input intentionally is a completely different scenario for which we also need special protection approaches.
Simply, we must assume that there will be malicious users: attackers. In our model they don’t have any particular privileges within the system, but they can provide raw input as training data, and can see the system’s output, typically the classification value. This already means that they can send purposefully bad or malicious data to trigger inadvertent machine learning errors (forcing GIGO).
But that’s just the tip of the iceberg…
First of all, attackers are always working towards a goal. To that end, they will target specific aspects of the machine learning solution. By choosing the right input, they can actually do a lot of potential damage to the model, the generated prediction, and even the various bits of code that process this input. Attackers are also smart. They are not restricted to sending static inputs – they can learn how the model works and refine their inputs to adapt the attack. They can even use their own ML system for this! In many scenarios, they can keep doing this over and over until the attack is successful.
All parts of the system where attackers can have direct influence on the input and/or can have access to the output form the attack surface. In case of supervised learning, it encompasses all three major steps of the machine learning workflow:
- For training, an attacker may be able to provide input data.
- For classification, an attacker can provide input data and read the classification result.
- If the ML system has feedback functionality, an attacker may also be able to give false feedback (‘wrong’ for a good classification and ‘correct’ for a bad classification) to confuse the system.
To better understand what an attacker can accomplish at these particular steps, let’s build a threat model.
Machine learning security: (mostly) the same old threats
In all cases, the attacker will want to damage a particular security requirement (the famous CIA triad, namely Confidentiality, Integrity and Availability) of an important system asset. Let’s take a look at these goals by using an ML-based facial recognition system for the examples:
- Disruption (Availability of the entire system): Make the AI/ML system useless for its original purpose and consequently destroy trust in the system – e.g. the system no longer recognizes employees.
- Poisoning (Integrity of the model): Repurpose the system for their own benefit – e.g. the system incorrectly recognizes the attacker as the CEO (you can think of this as spoofing).
- Evasion (Integrity of the entire system): Avoid their own data getting classified (correctly, or at all) – e.g. the system does not recognize the attacker at all.
- Disclosure (Confidentiality of private user data): Steal private data – e.g. the attacker obtains the reference photos of the users that were used to train the system.
- Industrial espionage (Confidentiality of the model): Steal the model itself – e.g. the attacker obtains the exact weights, bias values, and hyperparameters used in the neural network.
Just as in case of software security, in machine learning security we can use the attack tree modeling technique to plot the possible attacks that can be used to realize these goals. On the following figures we have marked the specific attacks with the following colors:
- Blue: AND connection (all child elements need to succeed for the attack to succeed)
- Green: OR connection (at least one of the child elements needs to succeed for the attack to succeed)
- Purple: Expanded in a different attack tree node, as indicated in the box
- Orange: An attack that exploits a weakness in the machine learning process
- Red: An attack that exploits a weakness in the underlying code
- Grey: An attack included for the sake of completeness, but out of scope for software security. Note that in this model we did not include any physical attacks (sensor blinding, for instance).
[Figures: Machine learning security – attack trees, including Integrity (Software) and Integrity (Data)]
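The AND/OR structure of such trees is easy to capture in code. The sketch below (Python; the goal and leaf attacks are simplified illustrations, not the exact trees from the figures) evaluates whether a goal is reachable given the leaf attacks an adversary has already achieved:

def node(kind, children):
    return {"kind": kind, "children": children}

def succeeded(tree, achieved_leaves):
    if isinstance(tree, str):                     # a leaf attack step
        return tree in achieved_leaves
    results = [succeeded(child, achieved_leaves) for child in tree["children"]]
    return all(results) if tree["kind"] == "AND" else any(results)

# Poisoning needs BOTH crafted samples AND a way to get them into the training set.
poisoning = node("AND", [
    "craft adversarial samples",
    node("OR", ["abuse the feedback function", "compromise the data pipeline"]),
])
print(succeeded(poisoning, {"craft adversarial samples", "abuse the feedback function"}))  # True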
Adversarial learning: one dumb cat, lots of smart mice
Many of the attacks described in the previous section make use of so-called adversarial examples. These crafted inputs either exploit the implicit trust an ML system puts in the training data received from the user to damage its security (poisoning) or trick the system into mis-categorizing its input (evasion). No foolproof method exists currently that can automatically detect and filter these examples; even the best solution (adversarial training) is limited in scope. On one hand, ML systems are pretty much like newborn babies that rely entirely on their parents to learn how the world works (including ‘backdoors’ such as fairy tales, or Santa Claus). On the other hand, ML systems are also like old cats with poor eyesight – when a mouse learns how the cat hunts, it can easily avoid being seen and caught.
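The best-known recipe for crafting such inputs is the Fast Gradient Sign Method (FGSM). The sketch below shows the core idea in PyTorch-style Python; model, loss_fn, x and y are assumed to come from your own pipeline, and the epsilon value is illustrative:

import torch

def fgsm(model, loss_fn, x, y, epsilon=0.01):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)                    # how wrong is the current prediction?
    loss.backward()                                    # gradient of the loss w.r.t. the input
    perturbed = x_adv + epsilon * x_adv.grad.sign()    # step up the loss surface
    return perturbed.clamp(0, 1).detach()              # keep pixel values in a valid range

The perturbation is tiny (bounded by epsilon per pixel), which is why the result usually looks identical to a human while the model’s prediction flips.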
There are defenses for detecting or mitigating adversarial examples, of course. Many of them however just do some kind of obfuscation of the results to make the attacker’s job harder (some of them even relying on security by obscurity). An intelligent attacker can defeat all of these solutions by producing a set of adversarial examples in an adaptive way. This has been highlighted by several excellent papers over the years (Towards Evaluating the Robustness of Neural Networks, Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, On Evaluating Adversarial Robustness, On Adaptive Attacks to Adversarial Example Defenses). All in all, machine learning security research is still in its early stages. They mostly focused on image recognition thus far; however, some defense techniques that work well for images may not be effective with e.g. text or audio data.
That said, there are plenty of things you can still do to protect yourself in practice. Unfortunately, none of these techniques will protect you completely from malicious activities. All of them will however add to the protection, making the attacks harder. This is following the principle of defense in depth.
- Most importantly, think with the head of an attacker. Train the neural network with adversarial samples to make it explicitly recognize them (or variants of them) as incorrect. It is a good idea to create and use adversarial samples from all currently known attack techniques. A test framework can generate such samples to make the process easier. There are existing security testing tools that can help with this: ML fuzz testers can automatically generate invalid or unexpected input. Some examples are TensorFuzz and DeepTest.
- Limiting the attacker’s capabilities to send adversarial samples is always a good mitigation technique. One can easily achieve this by simply limiting the rate of inputs accepted from one user. Of course, detecting that the same user is behind a set of inputs might not be easy. This is the same challenge as in case of distributed denial of service attacks, and the same solutions might work as well.
- As always in software security, input validation can also help. Of course, it may not be trivial to automatically tell good inputs from bad ones; but it is definitely worth trying.
- As a ‘hair of the dog’ solution, we can use machine learning itself to identify anomalous patterns in input. In the simplest case, if data received from an untrusted user is consistently closer to the classification boundary than to the average, we can flag the data for manual review, or just omit it.
- Applying regular sanity checks with test data can also help. Running the same test dataset against the model upon each retraining cycle can uncover poisoning attack attempts. RONI (Reject On Negative Impact) is a typical defense here, detecting if the system’s capability to classify the test dataset degrades after the retraining.
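A minimal version of that RONI check can be expressed in a few lines (Python sketch; train, evaluate, the model and the datasets are placeholders for your own pipeline):

def roni_accept(base_model, candidate_batch, train, evaluate, test_set, tolerance=0.01):
    baseline_accuracy = evaluate(base_model, test_set)
    retrained_model = train(base_model, candidate_batch)   # retrain including the new data
    new_accuracy = evaluate(retrained_model, test_set)
    # Accept the batch only if accuracy on the held-out test set does not degrade
    # by more than the tolerance; otherwise flag it as a possible poisoning attempt.
    return new_accuracy >= baseline_accuracy - tolerance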
Machine learning security is software security
We often overlook the most obvious fact about machine learning security: that ML solutions are essentially software systems. We write them in a certain programming language (usually Python, or possibly C++), and thus they potentially carry all common security weaknesses that apply to those languages. Furthermore, do not forget about A9 from the OWASP Top Ten – Using components with known vulnerabilities: any vulnerability in a widespread ML framework such as TensorFlow (or one of its many dependencies) can have far-reaching consequences for all of the applications that use it.
The attackers interact with the ML system by feeding in data through the attack surface. As already mentioned, let’s start to think with the head of the attacker and ask some questions. How does the application process this data? What form does it take? Does the system accept many different types of inputs, such as image, audio and video files, or does it restrict the users to just one of these? If so, how does it check for the right file type? Does the program do any parsing, or does it delegate it entirely to a third-party media library? And after preprocessing the data, does the program have any assumptions (e.g. a certain field must not be empty, or a value in another field must be between 0 and 255)? Is there any (meta)data stored in XML, JSON, or a relational database? If so, what kind of operations does the code perform on this data when it gets processed? Where are the hyperparameters stored, and are they modifiable at runtime? Does the application use third-party libraries, frameworks, middleware, or web service APIs as part of the workflow that handles user input? If so, which ones?
Each of these questions can indicate potential attack targets. Each of them can hide vulnerabilities that attackers can exploit to achieve their original goals, as shown in the red boxes in the attack trees.
These vulnerability types are not related to ML as much as to the underlying technologies: the programming language itself (probably Python), the deployment environment (mobile, desktop, cloud), and the operating system. But the dangers they pose are just as critical as the adversarial examples – successful exploitation can lead to a full compromise of the ML system. This is not restricted to the code of the application itself, either; see Security Risks in Deep Learning Implementations and Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning for two recent papers that explore vulnerabilities in commonly-used platforms such as TensorFlow and PyTorch, for example.
Threats are real
The main message is: machine learning security covers many real threats. Not only it is a subset of cyber security, but also shares many traits of software security. We should be concerned about malicious samples and adversarial learning, but also about all the common software security weaknesses. Machine learning is software after all.
Machine learning security is a new discipline. Research has just begun, we are just starting to understand the threats, the possible weaknesses, and the vulnerabilities. Nevertheless, machine learning experts can learn a lot from software security. The last couple of decades have taught us lots of lessons there.
Let’s work together on this!
We cover all of the aspects of machine learning security – and much more – in our Machine learning security course. In addition to talking about all of the threats mentioned in this article, our course also discusses the various protection measures (adversarial training and provable defenses) as well as other technologies that can make machine learning more secure to use in a cloud environment – such as fully homomorphic encryption (FHE) and multi-party computation. | <urn:uuid:c2aad3da-a202-4938-9860-39fbd8cc69d7> | CC-MAIN-2022-40 | https://cydrill.com/cyber-security/machine-learning-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00378.warc.gz | en | 0.934363 | 2,883 | 2.859375 | 3 |
There's power in defining constants, even in PowerShell. Check out this week's tip.
- By Jeffery Hicks
In Windows PowerShell, we use variables as placeholders for a value or collection of values. Most of the time we can create and define a variable in one step:
PS C:\> $x=100
If we need to change the value of $x all we need to do is assign it a new value:
PS C:\> $x=200
But what if you were relying on the value of $x and didn't want it to be either intentionally or accidently modified? PowerShell offers a few options when defining your variable. You can make a variable read-only or a constant. In either situation, you need to use the New-Variable cmdlet and the -Option parameter:
PS C:\> New-Variable -Name M -Value "Jeff" -Option ReadOnly
PS C:\> $m
Jeff
PS C:\> $m="Don"
Cannot overwrite variable M because it is read-only or constant.
At line:1 char:3
+ $m <<<< ="Don"
+ CategoryInfo : WriteError: (M:String) , SessionStateUnauthorizedAccessException
+ FullyQualifiedErrorId : VariableNotWritable
When a variable is read-only, you cannot modify its value. As you've seen, PowerShell will complain. However, you are allowed to delete the variable and re-create it:
PS C:\> remove-variable m -force
PS C:\> $m="Don"
PS C:\> $m
Don
You must use -Force to delete the variable. To create a truly protected variable with New-Variable, use a value of Constant with -Option:
PS C:\> $wshell=new-object -com "wscript.shell"
PS C:\> new-variable vbExclaim 48 -Option Constant
PS C:\> $wshell.popup("That didn't go well.",15,"Oops",$vbExclaim)
It's unlikely that I'll overwrite a variable with a name like vbExclaim. But this value will never, ever change so I might as well define it as a constant.
Be aware that if you already have a variable defined with the same name, you need to delete it before you can declare a read-only or constant variable with the New-Variable cmdlet. Since VBScript is full of constants, I find it helpful to define them as such when using VBScript COM objects in Windows PowerShell.
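A constant is even stricter than a read-only variable: once defined, it exists for the life of the session and cannot be removed, not even with -Force (output abbreviated):

PS C:\> Remove-Variable vbExclaim -Force
Remove-Variable : Cannot remove variable vbExclaim because it is constant or read-only.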
By the way, you can also do the same thing with aliases:
PS C:\> Set-Alias np $env:WinDir\notepad.exe -option ReadOnly
PS C:\> New-Alias slo Select-Object -Option ReadOnly
If you find the need to use constants throughout the day in your PowerShell work, then defining them in your PowerShell profile is a good idea. As to when to make a variable ReadOnly and when a Constant depends on your environment, your script, who will be running it and the consequences if variables don't have the values you are expecting.
Jeffery Hicks is an IT veteran with over 25 years of experience, much of it spent as an IT infrastructure consultant specializing in Microsoft server technologies with an emphasis in automation and efficiency. He is a multi-year recipient of the Microsoft MVP Award in Windows PowerShell. He works today as an independent author, trainer and consultant. Jeff has written for numerous online sites and print publications, is a contributing editor at Petri.com, and a frequent speaker at technology conferences and user groups. | <urn:uuid:a8ffe148-63fe-4960-9ea2-cff2abb96438> | CC-MAIN-2022-40 | https://mcpmag.com/articles/2011/11/07/constant-companion.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00378.warc.gz | en | 0.85755 | 785 | 2.9375 | 3 |
The public Internet is a complex mesh of almost one hundred thousand different, but ‘open’ networks - linked together with an almost limitless inventory of network hardware and millions of kilometers of fiber-optic cable. In technical terms, these are known as Autonomous Systems (AS). An AS is essentially a unique collection of IP addresses/address blocks and network hardware within a common administrative domain. Autonomous Systems communicate route information and steer traffic to each other using a protocol known as the Border Gateway Protocol, or BGP.
The Internet backbone can be simply defined as the core of the Internet. Here, the largest and fastest networks are linked together with fiber-optic connections and high-performance routers. Internet networks are primarily owned and operated by commercial, educational, government or military entities. Collectively, they facilitate a stable foundation for the Internet Service Providers (ISPs), content and cloud providers who provide Internet access or online content, applications and services to end-users and businesses.
The largest providers are known as Tier 1 networks. Positioned at the top of the Internet ecosystem, these networks are sufficiently comprehensive that they don’t purchase IP Transit from anyone else. Tier 1 networks exchange Internet backbone traffic on the basis of privately negotiated interconnection agreements, usually on the principle of settlement-free IP peering. In general, networks lower down in the hierarchy pay for upstream IP Transit and networks of similar size and merit peer with each other. Arelion (AS1299) is one of the largest and best connected global Tier 1 providers today. As a Tier 1 network owner we can reach every other network on the Internet solely via settlement-free interconnection. Our AS1299 is currently ranked #1 and our IP customers account for nearly 65% of all Internet routes. Whether you're a startup or Fortune 500, our network is the backbone of your business - connecting you directly to everything and everyone that matters.
The Internet is massive and immense, so who is its owner? The answer is no one; there is no organization, company or person that owns the complete Internet. Even control of the Internet is not regulated at a common level: some governments have been trying to introduce a more controlled Internet within their own jurisdictions, but at the global level there is no organized control.
Internet infrastructure is on the other hand owned by some large communication companies.
Whilst Internet connectivity is often viewed as a commodity, performance can vary significantly between suppliers. When selecting an Internet backbone, there are a number of important things to consider:
Reach - a larger footprint generally means a service provider has greater control of network resources, and ultimately, quality.
Scalability - is a backbone built on leased capacity or own infrastructure? This will dictate the ability of a supplier to scale-up capacity, quickly and efficiently.
Proximity - How well connected is a backbone with the rest of the Internet and in what tier do they reside?
Connectivity – Does a backbone connect via third party transit networks and public exchanges or through a well-managed ecosystem of private peering connections with critical networks?
We use our own optical fiber backbone that spans thousands of kilometers across the world. We control, operate, and monitor all our routes to ensure cable stretches and exact fiber routings are thoroughly checked, documented, and maintained. We apply a rigorous process to all fiber routes, including fast, in-field action for fiber repairs to maintain high fiber availability.
Internet backbone maps depict the connections between different points of presence (PoP), making it easier for customer to see and define the shortest way to connect keeping the best possible quality and cost.
See Arelion’s Network maps as a good example.
Serving customers in 125 countries, our 70,000 km fiber backbone spans North America, Europe and Asia. Our PoPs give you a direct route to the world’s best content and billions of end-users. Fiber-up control of our network with cutting-edge optical and IP technology deliver the scalability you need, whenever you need it.
Direct access to your cloud with a dedicated connection to the major Cloud Service Providers, AWS, Google, Azure, Oracle, and IBM.
BGP Transit to Internet content and billions of end-users, with the world’s best-connected backbone, AS1299.
Expert knowledge and invaluable insights to help you navigate your digital journey.
Check out our expert hosted webinars diving deep into the latest topics within connectivity.
The world of networking has never been more exciting. Today, the Internet and network services play a critical role in our lives - individuals and businesses alike.
Our thoughts and deeds. From industry trends to geeky networks stuff. | <urn:uuid:462b82ae-53a9-4275-bbe3-59973cc0a7e2> | CC-MAIN-2022-40 | https://www.arelion.com/knowledge-hub/what-is-guides/what-is-the-internet-backbone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00378.warc.gz | en | 0.925472 | 968 | 3.015625 | 3 |
In units of time, we’re not far removed from the world of paper and ink. But if we measure the distance using terms of productivity, flexibility and value, today’s geospatial business environment is light-years away from its former location.
We regard electronic data and communications as an essential for modern business. But these technologies are still young and growing. Not so long ago, typical project deliverables consisted of hardcopy drawings and reports. Even when using computer-aided data collection, processing and drafting, results often came as 2D drawings accompanied by written analyses. Over the years, clients and geospatial professionals recognised that the information contained in a 2D drawing could be put to work more quickly if it arrived as a computer file. As a result, many deliverables moved to electronic formats. CAD files, spreadsheets and reports have become the norm in most enterprises.
Today many organisations employ information technology that enables stakeholders to easily share deliverables. Common data formats and visualisation tools enable downstream users such as engineers, architects and planners to utilise the information. For example, in the days of hardcopy it would be unusual for a surveyor or engineer to develop a physical 3D model of a building site. If such a model were needed, architects would generally build it as part of the building’s design and approval process. By using modern instruments and software, geospatial professionals can create digital 3D models in a fraction of the time and cost. And instead of physically moving drawings and computer storage media, deliverables now arrive via 1990s technology: The internet.
The value of cloud
The internet now plays a core function in the operation of most geospatial businesses. Organisations use the internet to communicate with their clients, contractors and employees via email, Intranets and social media. Proposals can be delivered, contracts negotiated and results conveyed electronically, shortening up-front processes and producing tighter feedback loops.
Cloud solutions build on the internet’s foundation of connectivity and interaction. In addition to moving information, remote servers can provide powerful computing capabilities. By tying handheld and desktop computers to cloud services, it’s possible to bring sophisticated data processing to more users and locations. As a result, cloud-based systems for geospatial information management and analysis are poised to provide new flexibility in enterprise operations.
While these trends have produced significant gains in productivity, the geospatial industry has yet to realise the full value of the cloud. Let’s look at some examples.
Real-time GNSS positioning—precision from the cloud
Many geospatial users gained their first experience with cloud solutions by using groups of interconnected global navigation satellite system (GNSS) receivers known as real-time GNSS networks (RTN). Using RTN, networks of GNSS reference stations streamed data to a powerful server where the information could be merged and analysed. Then customised data streams could be sent to individual GNSS rovers for use in RTK positioning. Freed from the need for a reference station, surveyors could work quickly and freely over large geographic areas.
The speed, ease and flexibility of the RTN technology helped fuel a dramatic increase in the use of real-time GNSS positioning. Today, cloud-based positioning services support applications in surveying and engineering, construction, agriculture and more.
For example, structural or geotechnical monitoring solutions utilise cloud positioning and Web interfaces to deliver critical real-time information to stakeholders in remote locations.
Cloud solutions for geospatial enterprises
There are now cloud-based platforms of software, data and services to serve the geospatial community. Focused on applications in surveying, engineering and GIS, the solution uses the cloud to support work in geospatial data management, field data collection and transfer, equipment management and spatial data catalogues. By combining cloud services with technologies for positioning, communications and data analysis, companies can leverage point-of-work delivery of information needed by geospatial professionals in the field and office.
As an illustration, a surveyor typically needs multiple types of data to plan and execute a project. This includes previous surveys, government data, maps and other information. Some of the data may be held in the surveyor’s own records, while other information may come from government agencies and private suppliers such as Digital Globe or Intermap. There are now tool providers that enable the surveyor to quickly discover and use the geospatial information specific to the project. Rather than manually searching through multiple information sources and formats, users can streamline the most common and time-consuming pre-survey tasks. The surveyor can quickly find the pertinent information and download it to his or her desktop.
On the job site, field crews can use cloud-based applications and hosting services to exchange information to simplify workflows and data management. Additional services include the ability to track and manage the location and status of field equipment, including warranty information, software and firmware. Software offerings now allow users to customise workflows to streamline data collection on multiple platforms including iOS, Android and Windows. The system can automatically sync field information from multiple crews to a central server.
The shared foundation
The accurate and feature-rich data that geospatial professionals collect can be very valuable for other purposes. The cloud offers an ideal platform for individuals and organisations to exchange or sell their data as they feel appropriate for their business.
Data is the cornerstone of any geospatial workflow. By enabling professionals to easily discover, access and utilise different types of data, the cloud will soon become an essential part of the daily processes of data collection, processing, modelling and analysis.
Ron Bisio, vice president, geospatial at Trimble
Image source: Shutterstock/Omelchenko
Identities are one of the most – if not the most – sought after digital assets today, with corporate identities being the most valuable. A hacker that gains access to the right credentials potentially has the keys to the kingdom, which is why the right corporate credentials are so valuable. The following number of techniques show how hackers steal these credentials.
Four Ways Hackers Can Steal Your Company’s Crown Jewels
Social Engineering: Sometimes attackers go after credentials directly by setting up phishing scams or through social engineering. Usually the people they target are aware of these threats, so it is not always easy. However, attackers have become more sophisticated and will go to great lengths to gain access to accounts and mimic the behavior of legitimate users, either to gain further access or to have third parties take action on their behalf.
Hacking Employee-Used Sites: Another method is to hack a different site that employees use that is easier to infiltrate. Once those usernames and passwords have been compromised, the hackers backtrack by using those same credentials for other services and even logins to corporate devices and applications. A number of companies have been compromised in this manner.
Compromising Partners and Vendors: Still another method that hackers use is to go through an organization’s partners and vendors. Often, an organization grants access to third parties. With more outsourcing and the network being a central part of an organization’s operations, more people have privileged access than ever before. The organization itself may be secure, but a compromise of its downstream partners or vendors could still leave it breached.
Traditional Techniques: Of course there are the old standby methods of brute force attacks, dictionary attacks, and automated attacks. All of these effectively knock on open ports and logins to devices to see if the attacker can get lucky with the right username and password combination. While usually unsuccessful, they do sometimes find networks, servers, or ports that are still using default or commonly used credentials.
The number of different attack vectors is only growing. The question becomes: how do organizations secure their identities? While it is never easy nor foolproof, there are some concrete steps organizations can take. First, set a significantly higher bar for passwords. Research has shown that longer passwords are better; getting fancy with all of the different characters is not nearly as important as length. The second step is to use SSH keys whenever possible. Keys are a much stronger form of credential than usernames and passwords. The third step is to enable multi-factor authentication wherever you can. If you use Google Apps (now known as G Suite), turn on MFA for all of your users. If you use AWS, turn it on for your root account. Adding MFA can be the difference between losing your business and keeping it. The fourth step is to monitor access to your various systems. If you see something suspicious, you need to follow up on it. Of course, auditing your user logins takes time and tools, but it is well worth it.
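To make the length-versus-complexity point concrete, here is a minimal sketch (not from the original article) that estimates brute-force entropy as length × log2(charset size); the password lengths and character-set sizes are illustrative assumptions.

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Approximate brute-force entropy of a randomly chosen password."""
    return length * math.log2(charset_size)

# A long, lowercase-only passphrase beats a short "complex" password.
print(f"16 chars, lowercase (26 symbols): {entropy_bits(16, 26):.1f} bits")  # ~75 bits
print(f" 8 chars, full set  (94 symbols): {entropy_bits(8, 94):.1f} bits")  # ~52 bits
```

Every extra bit doubles the brute-force search space, which is why a few additional characters outweigh swapping letters for symbols.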
Protect Your Corporate Identities
Your corporate identities are at risk. They are constantly being attacked and are the number one target within your organization. If you would like to discuss how to secure your identities, drop us a note. It is a core part of our thinking around why Directory-as-a-Service is the next generation model for directory services.
As we enter a world of machine learning and data science, are there any gotchas or negatives? It sounds as if it is all sunshine and rainbows, but, as the title to this post alludes, I believe there are. Here are some of the dangers I thought of or came across while researching this post:
- Normal does not equal good.
- Beware statistical bias.
- We stop thinking and just believe the model is right.
- Statistical models can suffer from the “boiling frog” syndrome.
Let’s take a closer look and see how each one could ultimately work against us.
Normal Does Not Equal Good.
Regardless of which statistical model is used for forecasting, all of them use a series of data (whatever you want to predict). But just because it is what we observe, it may not be what we want or determine to be “good.” For example, say management wants to be made aware of any anomalies in worker behavior. OK, we decide to model some data, which includes logging in and out. The employee who always comes in late to work and continues to do so will raise no alarms, but I’m fairly certain that’s not the desired behavior.
Beware Statistical Bias.
There are many types of statistical biases that can affect forecasting accuracy. Statistical bias simply means that the sampling or set of data you are using to forecast with is not representative of the general population of data. In other words, your datapoints are biased in some way. One of the more recent examples of sample bias in a survey was the 2016 Michigan Democratic primary. Hillary Clinton was supposed to run away with Michigan, with polls giving Bernie Sanders less than a 1% chance of winning [1]. The sampled data was incorrect in a big way, partially due to sampling bias—the surveys called landline phones, which the younger generation is much less likely to have.
We Trust Too Much in the Model.
Keeping things mathematical attempts to take personal bias out of the equation. However, just because something has always behaved one way doesn’t mean it always will, as models implicitly assume. In other words, we have intuition and reasoning skills that take into account predictive information beyond “what were the datapoints previously.” Ignore these skills and intuition at your peril. By the time most models take large swings or adjustments into account, it could be too late to react. A good example of this would be predicting storage needs for our databases. When we trend historical data growth, the model doesn’t take into account a huge data migration onto our platform scheduled for next week. When it happens, the model will let you know it was anomalous behavior, but that doesn’t remedy the downtime due to running out of capacity.
The ‘Boiling Frog’ Syndrome.
The anecdote goes: If you put a frog into a pot of boiling water, it will jump right out to save itself. On the other hand, if you put a frog into a pot of room temperature water and slowly turn up the heat, the frog will not notice until it is too late. I have not personally tried this experiment. Let’s use storage capacity as the example again. You monitor and forecast storage needs over time and want to get notified of anomalies (large increases or drops in storage usage in the next polling). If you have your warnings set to one standard deviation from the norm, the upward trend in storage consumption can reach the max storage capacity without ever raising the alarm for an anomaly due to a gradual increase.
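As a toy illustration of the effect (my own sketch, with deterministic growth for clarity), the detector below alerts only when a day’s change deviates from recent daily changes by more than one standard deviation. Perfectly steady growth never deviates, so the disk fills without a single alert:

```python
capacity_gb = 1000.0
usage_gb = 500.0
daily_growth = 4.0   # slow, steady creep: the water warming one degree at a time
deltas = []

for day in range(1, 366):
    usage_gb += daily_growth
    if deltas:
        mean = sum(deltas) / len(deltas)
        std = (sum((d - mean) ** 2 for d in deltas) / len(deltas)) ** 0.5
        if abs(daily_growth - mean) > std:   # the one-standard-deviation rule
            print(f"Day {day}: anomaly alert")
    deltas.append(daily_growth)
    if usage_gb >= capacity_gb:
        print(f"Day {day}: storage full at {usage_gb:.0f} GB -- and the alert never fired")
        break
```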
Insert Humanness Into the Equation
Machine learning and data science can offer a lot to businesses wanting to do more with the data they have. However, be careful of potential pitfalls when using statistical modeling for predictive purposes. If you are using predictions to alert you to anomalies, you might also want to consider some hard-stop limits as well. Surveys used to obtain datapoints can fall victim to statistical bias. Bias can come in many forms. Words matter. As always, don’t forget or hesitate to insert your humanness into the equation.

[1] “Why the Polls Missed Bernie Sanders’s Michigan Upset,” June 25, 2018; https://fivethirtyeight.com/features/why-the-polls-missed-bernie-sanders-michigan-upset
Business Risk Identification Methods & Techniques: Identifying The Root Cause
Risk identification can be defined as the process of determining which risks are relevant to your organization. By implementing best-practice risk identification techniques, your risk management plan can better prevent risks from materializing.
What is the Risk Identification Process?
You can begin to identify risks in many different ways, but the best way to begin the risk identification process is by taking a “root cause” approach.
Simply put, identifying risks and their root causes is essential in understanding the fundamental reason that an event occurs. Understanding the root cause, and not just the symptoms, allows you to design key risk mitigation strategies that neutralize risks and prevent them from re-emerging in the future.
Standardization is key when you’re identifying risks, and having a risk library allows different business units to communicate in a uniform fashion to facilitate your ability to identify risks and prioritize based on criticality.
Risk Identification: Techniques & Methods
When multiple business areas identify the same issue, systemic risks and their upstream and downstream dependencies can more easily be identified and mitigated.
The root cause method also identifies areas that would benefit from centralized controls, which eliminates the extra work of maintaining separate activity-level controls.
Technique #1: Identifying Root Cause
Centralized controls are extremely important from an efficiency standpoint; the more you can accomplish with a set number of controls (rather than designing a larger number of unique controls), the fewer tests and metrics you’ll need to run and collect, respectively. Identification of the root cause of a risk provides information about what triggers a loss and where an organization is vulnerable. Using root source categories provides meaningful feedback: What steps should be taken to most effectively mitigate risk in your GRC program? Risk identification based simply on the effect or outcome often leads to ineffective risk mitigation activities.
Risk mitigation activities should be aimed at the root cause and will differ depending on the source of the risk. For example, in order to prevent a headache, you must know why you have one; if illness is the cause, seeing a doctor for treatment or a medication prescription is the appropriate mitigation activity. However, if the headache is being caused by a lack of sleep, going to bed earlier is a much more efficient and effective mitigation strategy than visiting a doctor. You may also mitigate a headache by taking a painkiller. This will make the headache go away, but it will not prevent future headaches because it does not target the root of the problem.
Armed with the knowledge of the source of a risk, we can proactively manage risk and avoid future risk events. In this simple example, it’s easy to see why creating controls based on the risk event/outcome (not the root cause) can lead to ineffective mitigation activities.
Another great option for identifying risks is to create a systematic approach for completing assessments of potential risks within your business.
Create a risk management framework that you can use to identify, track and monitor risks all in one place. Knowing what a risk assessment matrix is, and using it, gives you a place to store both quantitative and qualitative risk analyses.
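As a simple illustration of the quantitative side, here is a hypothetical sketch (the risk names, ratings and banding thresholds are invented for the example) that scores each risk as likelihood × impact on a 1–5 scale and sorts by criticality:

```python
# Hypothetical risk register entries, each rated 1-5 for likelihood and impact.
risks = [
    {"name": "Vendor data breach",   "likelihood": 4, "impact": 5},
    {"name": "Key-person departure", "likelihood": 3, "impact": 3},
    {"name": "Office flood",         "likelihood": 1, "impact": 4},
]

def score(risk: dict) -> int:
    return risk["likelihood"] * risk["impact"]

def band(s: int) -> str:
    # Illustrative banding thresholds for a 5x5 matrix.
    return "high" if s >= 15 else "medium" if s >= 8 else "low"

for r in sorted(risks, key=score, reverse=True):
    print(f"{r['name']:<22} score={score(r):>2} ({band(score(r))})")
```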
If you are just getting started and need a quick method for identifying risks, then our free risk assessment template is a great place to start (before moving onto a more advanced platform such as LogicManager).
LogicManager provides organizations with a pre-built root cause risk library in our comprehensive risk assessment software. This library is entirely flexible, allowing organizations to use the risk identification techniques or risk identification methods best suited to their organization.
LogicManager’s complete root cause library also includes best practice compliance and performance-balanced scorecard indicators. You can add to your library over time while receiving updates on emerging risks or new standards.
To learn more about our risk library, including our identification and assessment tools, click here.
Manage Tomorrow’s Risks Today Using LogicManager’s Enterprise Risk Management Software
Book a free demo to see how our software can protect your business and reduce negative impacts.
Proving ownership of IPv6 addresses
Throughout the history of the internet, there has been a question of how to verify that the senders of messages are who they claim to be. Operating under a false IP address opens the way to a wide variety of attacks and can be very difficult to detect. Nowadays, widespread filtering makes this impossible across network boundaries, but there is seldom any such protection within a single network, making public networks especially a difficult space to enforce.
This is likely to become a much larger problem after the migration to IPv6, as the maximum size of each network is increased dramatically over what IPv4 can support. In this article, we examine the mechanisms that have been invented to allow IPv6 users to prove their rightful ownership of an address, preventing others from using it falsely, as well as showing some of the ways in which these measures are incomplete.
Learn more about the different challenges presented within ethical hacking.
A brief introduction to ethical hacking
Ethical hacking - also known as penetration testing or pen testing - is an activity that an individual or team performs in order to attempt to breach and compromise an organisation's assets. Through the application of deep technical experience and skills, the ethical hacking exercise works to identify cyber (or other) vulnerabilities which may lead to a breach of the asset. These assets can take the form of technology systems; physical locations; processes; and even staff members (whose behaviours are influenced via social engineering tactics).
To start an ethical hacking engagement, an organisation agrees a legal contract and rules of engagement with the ethical hacker, or with the penetration testing company for which the ethical hacker works. These authorise the tests to take place and waive legal repercussions and liability for the testers should the activity have business-impacting outcomes.
Ethical hacking is a form of testing the effectiveness of a target's defences: a target may have invested resources into developing and deploying defences to protect themselves from cyber-attacks and the loss or damage of their reputation, but regular testing is required to prove their effectiveness. Ethical hacking helps to validate these defences and identify weaknesses or gaps that can be remediated ahead of any genuine attacks occurring.
Why is ethical hacking important?
Ethical hacking is a critical security control and a necessary component of any organisation's cyber defence strategy. Organisations invest heavily to protect their intellectual property, and data and employ the skills of ethical hackers to provide clear evidence as to whether this investment provides the level of protection required to at least meet the risk appetite of the business.
A simple way to think about the need to perform ethical hacking is through a behaviour many of us already adopt in our private lives - when we leave our homes each day, we lock our front doors and likely 'test' to ensure the door is actually shut and locked. In exactly the same way ethical hacking can help to validate that an organisations proverbial 'front door' is secure and protecting the assets that reside behind it to an acceptable level of security.
What are the key concepts of ethical hacking?
Key concepts of ethical hacking include:
1. Remaining 'ethical'
The whole point of ethical hacking is that those delivering the tests work with, not against, the target organisation. Ethical hacking aims to raise the security bar and discover vulnerabilities before someone malicious can. It is critical that testers remain ethical in their approach and only hack the organisation(s) they are legally contracted to work with, ensuring they remain true to the contract's Statement of Work and Rules of Engagement and never wander into legal 'grey areas' or beyond.
2. Keep to the Scope
It is important to keep within the defined Scope of any ethical cyber attacks, and not to edge beyond it without further approval (i.e. written consent) from the client. Scope 'creep' - as it's often called - is where ethical hackers go outside of the confirmed scope. This can lead to unintended consequences and impacts for them and their clients - which at best would damage the relationship with the client, and potentially have legal and commercial ramifications.
3. Dive deep
As a testing regime, ethical hacking involves going much further than exploring only the surface of a technology stack, or simply running an automated Nessus-like vulnerability scan. Effective ethical hackers ensure that they dive deep to find vulnerabilities that scanners cannot. Through our Swarm's own engagements, CovertSwarm regularly finds that the most interesting and impactful cyber vulnerabilities are only found through manual - person-led - ethical hacking.
4. Communicate often
Ethical hacking involves significant levels of communication with your target's team members, sometimes as much time can be spent in discussion as is deployed on actual 'hands on' vulnerability hunting. Keeping the client appraised of your activities whilst asking pertinent questions to enrich your understanding of their organisation and its underlying technologies leads to the most effective and efficient ethical hacking engagements.
5. Report clearly

The output from any ethical hacking activity is as important as the ethical hacking itself. The vulnerability report should clearly detail the scope, activities undertaken, test plan and findings and, most importantly, include clear steps to evidence, replicate and remediate the issues found. The report must be written with the intended audience in mind and be actionable for the client, without requiring significant debriefing or explanation from the ethical hackers involved.
What are the main issues and disadvantages of ethical hacking?
The main issues and disadvantages with ethical hacking are:
1. Inconsistency of quality
Across the cyber industry, there are numerous ethical hackers and companies offering ethical hacking and penetration testing services. It can be challenging for businesses to cut through this noise and identify quality providers. The best place to start is to look at established businesses whose main focus is providing offensive security services. Ensure that you speak directly to their ethical hackers; review their accreditations; ask for client references; and review sanitised examples of previous work.
2. Ethical hackers causing system interruption
Less experienced ethical hackers are more likely to cause issues and business interruption when delivering their ethical hacking services. To mitigate this risk ensure that you always use experienced ethical hackers who understand how to limit the risks of any potential system impact during their pen test delivery. Furthermore, ask the pen test company to evidence and explain their policies, procedures and commercial insurance should an incident occur.
3. Over-reliance on automated tools
Ethical hacking should be manually led, with the specialist relying on experience and knowledge and only lightly assisted by automated software tools. If your ethical hacker relies heavily upon software tools such as vulnerability scanning engines, you are unlikely to gain significant value from the engagement.
What are some limitations of ethical hacking?
Some of the limitations of ethical hacking include:
1. Ethical hacking engagements are time-limited
By this, we mean that the coverage the ethical hackers can achieve (and so the assets they can effectively pen test) is limited by the time allocated to their snapshot engagement. The more time allocated, typically the more coverage will be obtained.
2. Scope limitations
Organisations normally limit the scope of their ethical hacking engagements. This can be because they have limited budgets, or because they are concerned the ethical hacker will find issues in areas beyond the provided scope. Unfortunately, this is a counterproductive posture to take: the purpose of any cyber testing is to identify risk and then take action to mitigate, reduce or accept it. As such, it is in the client's best interests that the ethical hacker can cover as much ground as possible to identify, evidence and prove where vulnerabilities exist and how they can be exploited.
3. Ethical hacking, which is not actually ethical hacking
We regularly see reports of organisations procuring ethical hacking and penetration testing engagements which turn out to be quite different from what they purported to be: all too frequently we see basic vulnerability assessments or vulnerability scans delivered instead, where the 'ethical hacker' simply runs one of the many popular vulnerability scanning tools and rehashes the tool's report, rather than applying their skillset or experience to unpicking and identifying vulnerabilities.
4. The BIG ONE - ethical hacking is only 'point in time'
Your organisation is constantly changing and one of the key limitations of ethical hacking is that it delivers only a snapshot of your cyber security health at that point in time. Due to the constant change incurred by successful organisations upon themselves, the ethical hacking reports they receive are out of date the moment they are published. Modern, constant cyber-attack offerings such as that delivered by CovertSwarm solve this problem by keeping pace with the target organisation's rate of change, and effectively close their cyber risk gap.
What are the ethical implications of hacking?
If you or your employed researchers are hacking legally, there are no ethical implications. Only ever hack devices that you own and have full control of, and where doing so breaches no terms of service (or use) if your estate is in a hosted environment. Always ensure that you hack organisations only where a contract is in place that provides legal permission and the associated liability waivers, and adhere to any relevant national communications laws.
Under no circumstances should you ever attempt to 'hack' any organisation where you do not have full and complete legally binding permission to do so. This includes exploring services outside of something you own and have complete right of control over. For example, if a device you own communicates back to an organisation via an API, you should not target that organisation unless explicit permission exists as part of its own bug bounty program, or other similar permission has been granted.
What is an ethical hacker and how are they different to malicious hackers?
An ethical hacker is an individual who acts legally and ethically in all aspects of their hacking. They will only hack organisations where they are authorised to do so and have all of the necessary legal contracts and permissions in place.
A malicious hacker is someone that operates to breach and gain access to an organisation's data, systems or services via illegal means and for illicit gain - often breaking multiple laws as part of their activity.
Can cyber-criminals become ethical hackers?
Yes, there have been a number of examples where previously malicious hackers and cybercriminals have changed their ways to become ethical hackers.
Is being an ethical hacker fun?
Absolutely. In fact, we think it's one of the most fun, creative and rewarding careers that exist.
How to learn ethical hacking and become a certified professional
Why Choose Ethical Hacking as a career?
When you become an ethical hacker you face a different and unique challenge every day - getting paid to learn, explore and exploit an array of technologies. What could be better than that?
Furthermore, the career offers outstanding progression opportunities and infinite areas for learning and development for its practitioners.
What are the basic requirements to learn Ethical Hacking?
In CovertSwarm's view, the basic requirements to learn ethical hacking are:
To have a willingness to learn and be curious;
To have at least a foundation-level technology knowledge (in any area) to build your hacking knowledge upon;
To want to help organisations protect their assets and intellectual property from genuine attack.
What skills or experience do I need to already have, before starting to learn ethical hacking?
You need very little initial experience to start your career as an ethical hacker. Begin by getting involved in the cyber community and speaking to other, more experienced ethical hackers. There is a wealth of excellent, free information available online to support you, backed by an open and engaging cybersecurity community. Hey, you are reading this blog, right? So you've already made your first step!
In terms of skills - none are specifically required but a foundation technical skillset will definitely help. Some of the very best ethical hackers have previously been developers or infrastructure engineers.
What methods of study are available for Ethical Hacking courses?
You can use a number of methods of study to learn ethical hacking. For example:
Numerous great books are available on Amazon;
There are numerous Audiobooks;
There are various online courses available, some are even free;
There are companies offering classroom-led training;
You can learn from peers in the community, just get involved!
For specific recommendations drop an email to us and we'd be happy to speak with you.
Why take an ethical hacking course?
A prescriptive course can help you to focus on a set syllabus of materials with key outcomes and takeaways forming a rewarding part of your knowledge capture.
What free Ethical Hacking courses are available for study?
Here are some great free Ethical Hacking courses that we recommend:
Why are virtual machines important for ethical hacking?
Virtual machines are important for ethical hacking for the following three reasons:
Virtualisation allows you to run multiple operating systems from within a single 'host' system. This enables tools that are designed for different operating systems to be used alongside one another as part of your ethical hacking activity and toolkit.
Virtualisation enables the ability to 'sandbox' your ethical hacking activities so that you can test part of your ethical hacking approaches - such as exploit and payload development - in isolated VM environments before performing them against your intended, real target.
Should an unexpected issue occur, for example with a tool or a system fault - you can quickly roll back to a previous machine 'state' or snapshot with ease.
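For example, with VirtualBox the snapshot workflow looks roughly like this (the VM name "KaliLab" is a placeholder; other hypervisors have equivalent commands):

```sh
# Capture a known-good state before running risky tooling
VBoxManage snapshot "KaliLab" take "clean-baseline" --description "pre-exploit state"

# ...test exploits and payloads inside the VM...

# Roll back instantly if something breaks (power the VM off first)
VBoxManage controlvm "KaliLab" poweroff
VBoxManage snapshot "KaliLab" restore "clean-baseline"
VBoxManage startvm "KaliLab"
```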
How to find a trusted professional ethical hacker
To find a trusted professional ethical hacker, start with a company that specialises in offensive security services, penetration testing (pen testing) and ethical hacking. These companies will have professional ethical hackers working for them directly as employees and will typically have procedures in place to validate their ethical hackers' skills and capabilities, while also performing the background checks necessary to ensure they are indeed ethical.
Organisations, where ethical hacking and penetration testing are the main business lines, are usually the best places to start.
Using organisations that themselves accredit and audit penetration testing companies is usually a great place to start, such as The Council of Registered Ethical Security Testers (CREST). CREST accredits penetration testing companies, and for excellent service delivery and quality assurance we recommend working with companies like CovertSwarm that are additionally CREST Simulated Targeted Attack and Response (STAR) accredited, having been through additional audits and quality/capability checks. As one of a very limited number of STAR-accredited companies, CovertSwarm also offers Intelligence-Led Penetration Testing services.
A final note: when appraising the ability of your ethical hacking vendor, be sure to ask for sanitised examples of their recent work and specific experience in your sector.
Vulnerability assessment is the process of identifying weaknesses in a system. Vulnerability itself means "weakness", and vulnerability assessment means finding the weaknesses in a system and reporting on them. Penetration testing, on the other hand, is used to identify an organisation's vulnerable areas and exploit them, performing multiple tests to expose loopholes and establish how to mitigate them.
Together, vulnerability assessment and penetration testing are known as VAPT.
Vulnerability assessment tools discover which weaknesses are present and where they are located. Penetration testing exploits a system's vulnerabilities through unauthorized access or malicious activity and tries to identify which flaws pose a real threat to the application.
Sometimes organizations suffer massive attacks or losses that could have been avoided had protective measures been taken at the right time. Incidents include attacks on network infrastructure caused by weak security configurations on devices, data loss, unauthenticated and unauthorised access, information leakage and many others. VAPT provides the analysis needed to proactively identify vulnerabilities in an organisation's systems and take appropriate action.
Moreover, it saves the time and money that would otherwise be spent dealing with the fallout of a cyber attack.
Let’s discuss the steps for Vulnerability Assessment.
Performing Vulnerability Assessment
Vulnerability assessment starts with a vulnerability scan. Scans can probe systems, network infrastructure, servers and web applications that may be exploited by a hacker or intruder. Scan tools run on a periodic basis and generate reports that show the differences between scans performed on different dates, giving the administrator an overview of risk in the environment.
Vulnerability scans are further divided into four categories:
- Network Discovery Scan
- Network Vulnerability Scan
- Web-application Vulnerability Scan
- Database Vulnerability Scan
1. Network Discovery Scan
As the name indicates, a network discovery scan probes the network for IP subnets, IP addresses and open network ports, along with the application ports associated with them.
Common techniques to identify open ports on a remote machine or system are as follows:
- TCP SYN Scanning: The tester sends a packet with the SYN flag set to the scanned port. If a SYN-ACK response comes back, the port is open and the system or network device may be exposed.
- TCP Connect: The tester attempts a full TCP connection on a specific port; if the handshake completes, the remote system is open to connections from unknown senders.
- TCP ACK: Sends a TCP ACK packet to check whether traffic is filtered by any in-between firewall.
- Xmas Scanning: Sends packets with the FIN, PSH and URG flags set; a closed port replies with an RST, while silence suggests the port is open or unfiltered, revealing gaps in the device's filtering.
The most common scanner is Nmap, which helps identify hosts and the connection status of ports on the network.
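The techniques above map onto standard Nmap scan types; a rough sketch of the invocations is below (the 192.0.2.x addresses are documentation placeholders; only ever scan hosts you are authorized to test):

```sh
# TCP SYN ("half-open") scan: a SYN-ACK reply marks the port open (needs root)
sudo nmap -sS 192.0.2.10

# TCP connect scan: completes the full three-way handshake
nmap -sT 192.0.2.10

# TCP ACK scan: maps which ports an in-path firewall filters
sudo nmap -sA 192.0.2.10

# Xmas scan: sends FIN/PSH/URG; an RST reply means closed, silence means open|filtered
sudo nmap -sX 192.0.2.10

# Network discovery: ping sweep of a subnet with no port scan
nmap -sn 192.0.2.0/24
```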
2. Network Vulnerability Scan
A network discovery scan only looks for open ports in the network, whereas a network vulnerability scan detects and investigates vulnerabilities more deeply. Network vulnerability tools carry a huge database of known vulnerabilities, which is used to identify whether a system is weak in terms of security.
The scanner matches what it finds against the vulnerabilities in its database, checks the criteria for each match, and generates an analysis based on the results.
Scans are further divided into low-risk and high-risk scans. Nessus is a vulnerability scanner commonly used in industry.
3. Web Vulnerability Scan
Web vulnerability scans focus on servers that host multiple applications. Firewalls and other network devices protect those servers and their applications from the outside world. Attackers often try to reach the applications through unauthorized routes, so web vulnerability scanners probe for the weaknesses that would let a malicious attack on the servers succeed.
When administrators run the scanner in the network, the tool examines the web application using automated techniques that manipulate inputs and other parameters to identify web vulnerabilities.
Tasks performed during a web vulnerability scan:
- Initially scan all known applications on the web server and create a report.
- Scan newly deployed applications and check them for vulnerabilities.
- Scan modified applications that the administrator has customised to meet code requirements.
- Scan unknown applications and look for bugs or malicious code on the server.
- Scan all applications together on a repeated basis and create comparison reports to analyse traffic-flow patterns for the servers and their applications.
4. Database Vulnerability Scan
Databases hold the most delicate and sensitive data and are worthwhile targets for hackers and attackers. Databases are shielded from direct external access by firewalls, local servers and routers. The most common attack performed against a database is SQL injection. Database vulnerability scanners are tools that perform professional scans of databases and their associated servers. sqlmap is a commonly used open-source scanner that lets a testing team probe database servers.
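As a rough sketch of how a testing team might drive sqlmap against an authorized target (the URL, parameter and database name here are placeholders):

```sh
# Probe a suspect parameter for SQL injection
sqlmap -u "http://192.0.2.10/item.php?id=1" --batch

# If injectable, enumerate databases, then the tables in one of them
sqlmap -u "http://192.0.2.10/item.php?id=1" --dbs --batch
sqlmap -u "http://192.0.2.10/item.php?id=1" -D shopdb --tables --batch
```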
Penetration testing is a practical exercise in exploiting the system, carried out with management approval. Vulnerability assessment focuses on reports and the identification of weaknesses, whereas a pentest (penetration test) emphasizes a practical approach: hijacking the system and trying to identify the root cause of each vulnerability. Penetration tests are performed by trained and experienced professionals.
- Planning: The most time-consuming phase, in which the testers and management agree the scope of the test.
- Information gathering and discovery: The tester or team performs reconnaissance to identify system functions and conducts network discovery scans to find open ports. This can involve both manual techniques and automated tools.
- Vulnerability scanning: Examination of system weaknesses using methods such as network vulnerability scans, web vulnerability scans and database vulnerability scans.
- Exploitation: Use of manual and automated exploit tools to test and verify system security.
- Reporting: The testing team and management evaluate the output of the penetration test and make recommendations for improving system security.
Penetration testers may be internal employees who perform these tests as part of their duties, or external consultants hired to perform them.
Categories of Penetration test
Penetration tests are generally categorized into three groups:
- White Box Penetration Test: The tester knows the background of the network and has detailed information about the system under test. Knowledge of the internal data structures, backend sources and architecture is shared with and analysed by the examiner.
- Black Box Penetration Test: No prior information about the network infrastructure is shared with the tester, who attacks as an external adversary with no prior knowledge of the target system.
- Gray Box Penetration Test: A combination of white box and black box testing that draws on the strengths and weaknesses of both techniques; the tester has partial knowledge of the target.
Comparison: Vulnerability Assessment vs Penetration Testing
Finally, let's illustrate the differences between vulnerability assessment and penetration testing:
| Aspect | Vulnerability Assessment | Penetration Testing |
|---|---|---|
| Definition | The process of identifying, quantifying and prioritizing the vulnerabilities in a system. | The intentional launch of simulated cyber attacks by white-hat penetration testers to gain access to a network. |
| Work | Identifies the weaknesses of a system and generates vulnerability scan reports. | Exploits the vulnerabilities of a system and reports findings to higher management. |
| Type of network | Targets non-critical environments. | Targets real networks and critical systems. |
| Scope | Performs scans using automated tools and reports based on the output of the analysis. | Documents requirements, reviews them, then performs tests on the live network. |
| Time consumed | A constant, largely automated process; comparatively quick. | More time consuming, as it relies on manual effort. |
VAPT encompasses different functions and methods, so the best approach depends on the organisation's infrastructure and network architecture. A vulnerability assessment tries to broaden and improve the security of a system and builds a well-documented path to a secure network. Penetration testing, on the other hand, gives a point-in-time picture of a security program's weaknesses and how to mitigate them.
But professionals suggest that, as part of a security management system, both techniques should be performed regularly to maintain a secure environment.
Thanks for reading!!!!
The Technologies of Trust: Protecting Against Email Fraud and Scams
Email has long been one of the primary methods of attack for cybercriminals. Fighting back against email threats requires human involvement—people need to know the signs of spam or phishing, avoid clicking links and downloading attachments, and treat unusual requests with suspicion.
But user training and awareness—while absolutely necessary—aren’t your only weapons against email threats. Email security protocols help combat many of the major vulnerabilities inherent in email. Today, we’ll discuss a few of these protocols and what they‘re meant to combat.
When it comes to email, most of the security frameworks are built to help establish the identity of the sender. Most scammers try to trick people via impersonation. Whether it’s forging a sender address, a sender display name, or masquerading as a legitimate third party like PayPal or a bank, scammers often pose as someone else to accomplish their attacks.
Most underlying email security technologies seek to prevent this by proving the trustworthiness of a sender or an email. Here are some of the policies:
Domain Keys Identified Mail (DKIM) is a protocol that uses cryptography to verify an email was sent from the domain it claims to be from. When an email gets sent, DKIM affixes a DKIM signature containing a hash generated from both the header and body of a message to an email. Once the email is received, the receiving server can look up the public key in the sender’s DNS records and use it to verify the signature. If the DKIM signature’s hash is valid, it helps the recipient verify the message came from the claimed domain and has not been altered in any way in transit.
DKIM also improves the deliverability of your emails. If using DKIM, you can reduce the likelihood your emails will be marked as spam by recipients. This helps keep your sender reputation high overall and helps improve the ability for your organization to continue sending emails without getting blocked.
Sender Policy Framework (SPF) also helps make sure an email comes from a legitimate source. SPF helps receiving email servers verify an incoming email comes from an IP address approved by the sender. The server simply looks up an SPF entry in the sender’s DNS records to ensure the domain is authorized. For instance, if you set up your domain as “example.com,” you would include IP addresses for your mail server as well as any cloud services that will send email on your behalf. This will help prevent unauthorized senders from delivering email claiming they’re from your domain. However, by itself, SPF isn’t amazingly powerful. It’s best when used in combination with DMARC and DKIM.
Domain-based Message Authentication, Reporting, and Conformance (DMARC) expands and works in concert with both DKIM and SPF. DMARC is also placed in the domain’s DNS records, and helps the sender specify which framework they’re using when sending email—SPF, DKIM, or both. It allows the sender to specify how receivers should treat emails that fail authentication, including quarantining or rejecting them. It also provides reporting back to the sender on the emails that failed authentication, so the sender knows the health of their email and can spot potential malicious activity using their domain. This allows senders to actively warn users that someone is attempting to phish them using the domain name. By paying attention to these reports, IT admins can actively protect a company’s email recipients.
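To make the three records concrete, here is a hedged sketch of what they might look like in a DNS zone for a hypothetical example.com (the DKIM key is truncated and every value is a placeholder to adapt, not copy):

```dns
; SPF: authorize the mail server and a cloud sender for example.com
example.com.               IN TXT "v=spf1 mx ip4:192.0.2.25 include:_spf.mailprovider.example -all"

; DKIM: publish the public key under the chosen selector ("s1")
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN...TRUNCATED"

; DMARC: quarantine failures and send aggregate reports to the listed mailbox
_dmarc.example.com.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```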
Fighting back against email attacks
These three frameworks help reduce email fraud. IT providers should use all three to help enhance their security postures and do their part in the fight against email scams.
The real challenge with providing good email security, however, is cybercriminals frequently change their tactics. While these three frameworks work to establish the authenticity of the sending domain, cybercriminals can use other tactics. As a few examples, domain misspellings are common in spam and phishing messages—or someone can sign up for a free cloud email address, pose as a friend, and claim it’s their new email address. Plus, most anti-spam technologies were historically set up around rule-based filtering and whitelists/blacklists. This often means filtering programs are a step behind some threats.
SolarWinds® Mail Assure was built using artificial intelligence and machine learning to help prevent email attacks. For example, the technology was built to help catch morphing viruses that may not have previously been discovered. Mail Assure’s proprietary email filtering technology incorporates input from processing large volumes of email data and combines it with both real-time pattern threat recognition and collective threat intelligence to help protect your users against emerging email-borne threats. Plus, Mail Assure’s technology fully supports SPF, DKIM, and DMARC, enabling customers to take every measure possible to help prevent impersonation attacks.
You’ve probably heard stories in the news on a fairly regular basis about an organization’s data being compromised in a cyber attack.
As efforts to steal data online have intensified, achieving data security compliance with the Securities and Exchange Commission (SEC) as well as other security standards has become a higher priority for businesses large and small. In this piece, we’ll look at what data security compliance means and why it is important for your organization.
What is data security?
To boil it down, data security is an organization’s ability to manage internal data (how it is stored, copied, shared, or destroyed) in a way that does not leave the data susceptible to theft or tampering. With businesses pouring tremendous resources into acquiring and utilizing data, any exploited weakness in the security of those data systems can mean an enormous loss, both of revenue in the form of lost data and of the trust of customers and investors. In order to protect the sensitive information of citizens as well as organizations from hardships due to data security breaches, many legal standards have been established to limit data security compromises.
How does data security compliance differ from just data security?
While data security is a very broad concept, data security compliance, in the sense of receiving a “passing grade” from governing bodies and other non-governmental standards, can be quite granular. The U.S. Securities and Exchange Commission (SEC), for example, has strict standards for the security requirements of those seeking SEC data security compliance. While a data security professional may note that there are few actual specific standards for aspects of, say, data encryption or firewall protection within the SEC regulations, high overall standards are still set. The burden of developing and redeveloping systems that meet these standards typically falls on the organizations themselves and their IT departments. Cybersecurity professionals are, in a way, tasked with the arduous task of staying one step ahead of hackers, phishers, and other thieves of sensitive data.
How can organizations keep up with SEC regulations?
So, how can businesses and other organizations keep up with SEC regulations? One approach to protecting sensitive data as well as remaining up to date with SEC data security compliant standards is seeking out the assistance of experienced cybersecurity professionals. Yet another approach is to “batten down the hatches” with an electronic content management system that has such security features built into its infrastructure. Perhaps the most prudent approach is to invest in a solution that includes both of the previously mentioned measures.
Intel is partnering with an Oregon-based university to develop computing technologies for managing volumes of biomedical data that scientists use to detect complex diseases and discover new treatments.
Oregon Health & Science University and Intel aim to create information tools and workflow offerings to help the scientific community understand genetics behind diseases like cancer at a patient level, the university said Monday.
Stephen Pawlowski, senior fellow and chief technology officer at Intel’s data center and connected systems group, said the collaboration aims to leverage Intel’s experience in developing computing technology with OHSU’s background in visualizing and understanding biological information.
A team comprising of OHSU biomedical experts and Intel engineers is building a research data facility equipped with an Intel supercomputing system for the project.
The multiyear project will have researchers initially focus on molecular profiling of patients’ tumors to study how a disease progresses, then use this information to monitor tumor response to treatment, OHSU says.
Computer scientists, biophysicists, genomicists, bio-informaticists, biologists and others will work with researchers at the OHSU Knight Cancer Institute for the project.
What makes Industrial Machines Smarter and Intelligent in the Future?
Artificial intelligence has emerged as one of the prime disruptive forces cutting across industries, well poised to influence the next industrial revolution. Hydraulic energy, electrical energy and computer-aided design were the primary forces behind the past three industrial revolutions; now the time has come for AI-driven cyber-physical systems to change the landscape of manufacturing. Machines and processes communicating with each other without human intervention is the vision of the fourth industrial revolution.
What is making this revolution closer to reality? Artificial intelligence for Industry 4.0 is closer to reality now because of the rapid advancement of a few technologies fueling this transformation: deeper intelligence at sensing nodes and actuators, deep-learning advancements in machine vision with embedded processors, 3D printing, distributed security using blockchain, ubiquitous industrial connectivity with low latency and high data rates, and greater storage capacity per square millimeter of silicon at the lowest power.
Thanks to rapid innovation in embedded sensing and MEMS, a wide variety of rugged industrial sensing that fuses multiple sensor types, such as magnetic, ultrasonic and optical sensors, is a reality today. We will see deeper intelligence evolve at the sensing nodes in the coming years, ensuring that the data available at the edge is good enough for decisions to be taken independently, without reaching the cloud. Think of recipe-driven manufacturing in the pharma industry: a bottle of syrup moving down a conveyor belt, where the system knows exactly how the liquids must be mixed and what labeling the bottle needs, without any manual intervention. Think of the raw materials needed for the syrup being automatically updated in backend systems so that robots can replace and deliver new liquids through efficient, predictive supply chain management. Embedded sensing, when enabled with smart analog front ends and tightly coupled embedded processing, is well poised to correct non-linearity and imperfections in measurement and positioning systems.
The control frequencies and sampling rates of industrial applications require very low-latency communication protocols; Ethernet switches in industrial networks, for example, need latencies of a few microseconds when controlling actuators such as stepper motors. Hybrid use of wired and wireless connectivity enables a highly distributed network of intelligent machines and parts on the factory floor or in warehouses. Assigning IP addresses directly to machines and transducers is highly desirable in intelligent factories. With the advancement of 5G networks, technologies like narrowband Internet of Things (NB-IoT) will be able to identify each piece of equipment in the factory digitally through a unique IP address and let it communicate independently. Robotic actuators with flexible sensing and arms will take warehouse automation to the next level. E-commerce fulfillment warehouses are under tremendous pressure to meet increasing demands for fast, accurate order fulfillment, even with the huge labor force available in countries like India. We see an increasing number of startups in this area of automation, optimizing inventory and delivery costs for e-commerce giants.
The biggest resistance and threat to the Industry 4.0 revolution is the data privacy of manufacturing houses and the security of their internal data. Blockchain, as a distributed security technology, is well poised to enable distributed asset management with the highest security for factory floors. Embedded security capabilities at each sensing node, with secure re-provisioning, will soon remove this barrier, and industries will adopt wireless networks in factory automation.
Transferring high-data-rate information and wideband signaling for measurement and control across isolation barriers is increasingly important in industrial systems. Robustness against high-voltage surges is indispensable for industrial interfaces, as cyber-physical systems tightly integrate high-voltage and low-voltage circuitry.
We see many technologies converging to let artificial intelligence make machines smarter: microcontrollers built for deeper intelligence, embedded processors with ever-higher compute power, intelligent sensing nodes with sensor fusion, non-volatile storage integrated into embedded processing at the lowest power, high-precision and ultra-low-latency communication systems to control robotic actuators, distributed security for factory equipment based on blockchain technology, and ubiquitous connectivity with unique digital identities. These are a few of the advances the semiconductor industry is driving at a rapid pace to make such a transformation a reality soon. Think of machines repairing themselves and calling upon an operator only when human intervention is truly needed: what a way to avoid night-shift operations by humans in factories.
Overall this era will see a fusion of digital, physical and biological systems together to form future industrial machines – which will drive real-time optimization of the entire manufacturing flow with very minimal human intervention, with the complete usage of fossil fuels and produces zero industrial wastes ensuring green and safe environment for us to live peacefully. Smarter Industrial machines are not a distant dream anymore as we see the transformations and adaptions in the industry happening faster. Will the Internet of bio-nano Things (IOBNT) add a biological advancement to the machines for human like flexibilities and movements in factories? Time will answer. | <urn:uuid:c7a5da25-e143-4945-b4c5-e9a95515b485> | CC-MAIN-2022-40 | https://point-of-sale.ciotechoutlook.com/cioviewpoint/what-makes-industrial-machines-smarter-and-intelligent-in-the-future-nid-3890-cid-74.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00778.warc.gz | en | 0.908147 | 1,005 | 2.90625 | 3 |
Most technology experts list cloud computing as one of the most influential IT trends of the twenty-first century. Over the past two decades, cloud computing has revolutionized enterprise IT to the point where most organizations now take a “cloud-first” approach to their technology needs.
According to the MIT Technology Review, the phrase “cloud computing” was probably first used in the mid-1990s at a now-defunct company called NetCentric. Then in 1999, Salesforce.com introduced the idea of accessing enterprise applications through a Web browser, a concept that grew into the software as a service (SaaS) model. In 2002, Amazon launched Amazon Web Services, and four years later it debuted its Elastic Cloud Compute (EC2) service, the first widely used infrastructure as a service (IaaS) offering.
Since then, cloud computing has become nearly ubiquitous among enterprises; several research reports have found that between 90 and 100 percent of companies use at least one cloud service. The boom in cloud has prompted great growth in related fields, particularly cloud analytics.
Most experts expect the cloud computing market to continue to grow as organizations migrate more applications and data to the cloud. IDC predicts, that public cloud spending will reach $122.5 billion in 2017, a 24.4 percent year-over-year increase. It anticipates that public cloud spending will grow seven times faster than overall IT spending through 2020, at which point the market will be worth $203.4 billion. Clearly, the advantages in cloud costs – the desired to trim data center costs – have driven cloud adoption.
Gartner, which includes more types of services in its public cloud forecasts, says, “The worldwide public cloud services market is projected to grow 18 percent in 2017 to total $246.8 billion, up from $209.2 billion in 2016.” And it adds, “Gartner predicts that through 2020, cloud adoption strategies will influence more than 50 percent of IT outsourcing deals.”
On the private cloud side, IDC forecasts that spending for on-premises private cloud infrastructure will increase 16.6 percent this year. Similarly, Gartner says, “The use of private cloud and hosted private cloud services is also expected to increase at least through 2017.”
Yet despite the massive growth in cloud usage, confusion persists about what exactly cloud computing is and the benefits it offers to enterprises.
What is Cloud Computing?
A lot of different organizations have put together definitions of cloud computing, but the definition that is probably accepted the most widely throughout the technology industry comes from the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce. First released in September 2011, the complete cloud computing definition publication runs for seven pages and is too long to include here. It includes the following five “essential characteristics” that all cloud computing environments share:
- On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
- Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
- Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
- Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
- Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. To codify the technical aspects of what a cloud vendor provides, every customer needs a Service Level Agreement.
Gartner summarizes those key points in its definition of cloud computing, which says cloud computing is “a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies.” Moreover, cloud leverages a number of key technologies that boost the efficiency of software development, including containers, a method of operating system virtualization that allows consistent app deployment across computing environments.
Cloud computing represents a major generational shift in enterprise IT.
Cloud Computing Services
A lot of different types of a cloud services fall under the overall category of cloud computing. The NIST cloud computing definition identifies three cloud service models: software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS).
Those three models continue to dominate cloud computing, but various vendors have also introduced other types of cloud services that they market with the “as a service” label. For example, vendors offer database as a service (DBaaS), disaster recovery as a service (DRaaS), function as a service (FaaS), storage as a service (SaaS), mobile backend as a service (MBaaS), security as a service (SECaaS), networking as a service (NaaS) and a host of others. Some people lump all these various cloud services together with the label “everything as a service” or “XaaS.”
However, most of these other types of cloud computing services fall under one of the three major categories:
In the SaaS model, users access applications via the Web. Their application data resides in the software vendor’s cloud infrastructure, and users can get to it from any Internet-connected device. Instead of paying a flat fee, as is typical with traditional software, users purchase a subscription, often on a monthly or yearly basis.
The world’s largest SaaS vendors include Salesforce.com, Microsoft, Intuit, Google, ADP, SAP, Oracle, IBM, Cisco and Adobe.
IaaS vendors provide access to computing, storage, networks and other infrastructure resources. Using an IaaS is very similar to using a server, storage appliance, networking device or other hardware, except that it is being managed as a cloud rather than as a traditional data center.
Amazon Web Services is the leading public IaaS vendor. Others include Microsoft Azure, Google Cloud, IBM SoftLayer and VMware vCloud Air. Organizations like HPE, Dell Technologies, Cisco, Lenovo, NetApp and others sell infrastructure that allows enterprises to set up their own private IaaS services.
You can think of PaaS as the middle ground between IaaS and SaaS. PaaS solutions don’t offer applications for end-users, the way SaaS vendors do, but they offer more than just the infrastructure provided by IaaS solutions. Typically, they bundle together the tools that developers will need to write, deploy and run their own applications. For example, a PaaS might include the operating system, database, Web server, content management system and development tools that an organization might need to set up a Web application. PaaSes are meant to be easier to use than IaaS offerings, but the line between what counts as IaaS and what counts as PaaS is sometimes blurry. Most PaaS offerings are designed for developers, and they are sometimes called “cloud development platforms.”
The list of leading public PaaS vendors is very similar to the list of IaaS vendors, and it includes Amazon Web Services, Microsoft Azure, IBM Bluemix, Google App Engine, Salesforce App Cloud, Red Hat OpenShift, Cloud Foundry and Heroku.
Private vs. Public vs. Hybrid
You can also categorize cloud computing services based on deployment model. In general, organizations have three different cloud deployment options: public cloud, private cloud and hybrid cloud. Each has its own strengths and weaknesses.
Just as the name suggests, a public cloud is available to businesses at large, for a wide variety of remote computing needs. These cloud services are managed by third-party vendors and hosted in the cloud vendors’ data centers. A key benefit here is that organizations don’t have to worry about buying, deploying, managing or maintaining hardware because the vendor takes care of that for them. On the other hand, public cloud users give up the ability to control the infrastructure, which can raise security and compliance concerns. Many vendors offer cloud cost calculators to help users understand the likely charges ahead of time.
The public cloud enables companies to tap into remote computing resources.
A private cloud is a cloud computing environment used only by a single organization. It can take two different forms: organizations can build their own private clouds in their own data centers or they can use a hosted private cloud service. Like a public cloud, a hosted private cloud is operated by a third party, but each customer gets dedicated infrastructure set aside for its needs rather than using shared servers and other resources.
A private cloud allows organizations to enjoy some of the scalability and agility benefits of cloud computing without some of the security and compliance concerns that can arise with a public cloud. However, a private cloud is generally more expensive and more difficult to maintain than a public cloud.
The private cloud allows a company the control and security needed for compliance and other sensitive data issues.
A hybrid cloud is a combination of a public cloud and a private cloud that are managed as a single environment. This sort of arrangement can be particularly beneficial when enterprises have some data and applications that are too sensitive to entrust to a public cloud but that need to be accessible to other applications that do run on public cloud services. Hybrid clouds are also helpful for “cloudbursting,” which involves using the public cloud during spikes in demand that overwhelm an organization’s private cloud. Managing a hybrid cloud can be very complex and requires special tools.
It’s important to note that a hybrid cloud is managed as a single environment. When organizations have more than one cloud— public, private and/or hybrid — that they manage independently, experts call it a “multi-cloud environment.” Already, the average enterprise is using more than one cloud, and most market researchers expect multi-cloud and hybrid cloud environments to dominate the enterprise for the foreseeable future.
The hybrid combines the various cloud models to enable great flexibility and scalability.
Cloud Computing Benefits
As already mentioned, each type of cloud computing has its own unique advantages and disadvantages. In general, however, all types of cloud computing offer the following benefits:
- Agility and Flexibility: Cloud environments enable end users to self-service their own needs. So whether it is a private cloud or a public cloud, users should be able to quickly provision the resources they need for new projects. In addition, because cloud environments are virtualized and pool resources, organizations can move workloads around to different servers and expand or contract the resources dedicated to a particular job as necessary.
- Scalability: The same virtualization and pooling features that make it easy to move workloads around also make it easy for organizations to scale up or down as usage of particular applications increases or decreases. It is somewhat easier to scale in a public cloud than a private cloud, but both offer scalability benefits in comparison to a traditional data center.
- Availability: Again, because resources are virtualized in a cloud environment, it’s easier to recover if a particular piece of infrastructure experiences an outage. In most cases, organizations can simply failover to another server or storage device within the cloud, and users don’t notice that a problem has occurred. And again, the availability benefits are even more pronounced for public clouds than private clouds.
- Location Independence: Users access all types of cloud environments via the Internet, which means that they can get to their applications and data from any Web-connected device, anywhere on the planet. For enterprises seeking to enable greater workforce mobility, this can be a powerful draw.
- Financial Benefits: In general, cloud computing services are less expensive traditional data centers. However, that isn’t always true in every case, and the financial benefit varies depending on the type of cloud service used. For all types of cloud, however, organizations have a greater ability to chargeback computing usage to the particular business unit that is utilizing the resources, which can be a big aid for budgeting.
Cloud Computing Drawbacks
Of course, cloud computing also has some drawbacks that offset its benefits. First of all, demand for knowledgeable IT workers remains high, and many organizations say it is difficult to find staff with the experience and skills they need to be successful with cloud computing. Experts say this problem will likely diminish over time as cloud computing becomes even more commonplace.
In addition, as organizations move toward multi-cloud and hybrid cloud environments, one of their biggest challenges is integrating and managing all the various services they use. Some organizations also experience problems related to cloud governance and control when end users begin using cloud services without the knowledge or approval of IT.
But the most commonly cited drawbacks of cloud computing center around cloud security and compliance.
Most of the security concerns around cloud computing relate primarily to public cloud services. Because enterprises don’t have control over the physical infrastructure hosting their data and applications in the public cloud, they need to make sure that the vendor is taking adequate measures to prevent attacks and meet compliance requirements. In addition, because public clouds are shared environments, organizations have concerns that another organization using the same service might be able to gain access to their data.
However, some security experts argue that public cloud services are actually more secure than traditional data centers. Most cloud vendors have large security teams, and they employ the latest technologies to prevent and mitigate attacks. Smaller enterprises simply don’t have as many resources to devote to securing their networks.
In any event, organizations should not just assume that their cloud vendors have appropriate safeguards in place. Vendors and users share responsibility for cloud security. Before using any cloud computing service, organizations should investigate the vendor’s security precautions. In addition, there are many steps cloud users can take on their end to make themselves more secure, and organizations need to make sure they have adequate cloud security policies and technologies in place.
Cloud Computing Companies
If you’re ready to start experimenting with cloud computing, you might want to try one of the following top cloud companies. Most offer free trials or free tiers for their services so that you can see the benefits they offer without making a financial commitment.
- Amazon Web Services — AWS is the leading IaaS and PaaS vendor, and it has a very extensive portfolio of services available. Many of those services include free tiers that allow users to try them out with no charge.
- Microsoft — Microsoft now offers much of its most popular software, including Office and its Dynamics enterprise software, on a SaaS basis. And Microsoft Azure is the second largest IaaS and PaaS vendor. Microsoft’s cloud computing platform is particularly popular with enterprises that have hybrid clouds.
- IBM — Like Microsoft, IBM also has SaaS, PaaS and IaaS capabilities. In addition, it sells software, hardware and services to organizations interested in setting up private clouds.
- Google — Google’s G Suite includes SaaS office productivity tools that compete with Microsoft Office. And the company also has very popular IaaS and PaaS offerings.
- Oracle — Like IBM, Oracle offers SaaS, PaaS, IaaS and private cloud solutions to its customers. It launched its cloud services later than the other leading technology vendors, but its cloud has been growing rapidly.
- Salesforce.com — Salesforce is best-known for its enterprise SaaS solutions, but it also has two different PaaS solutions: App Cloud and Heroku Enterprise.
Looking for a cloud provider? Read our cloud comparison guide. | <urn:uuid:49590084-1908-481c-9bab-a42c5f9157e6> | CC-MAIN-2022-40 | https://www.datamation.com/cloud/what-is-cloud-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00778.warc.gz | en | 0.940584 | 3,442 | 2.5625 | 3 |
As technology has evolved, buildings have become smarter. Smart buildings using Critical Event Management (CEM) systems are even more technologically advanced. With integrated dashboards, smart buildings help building managers link multiple building systems and devices together, increasing the comfort, safety, and security of their occupants.
Smart buildings equipped with CEM systems can prevent unauthorized visitors from entering the space, expose and mitigate the impact of cyber-attacks, monitor and limit the impact of natural disasters, and cap capacity in building spaces. In addition, smart building technology can also increase the productivity, engagement, and well-being of employees.
Today, the smart building market is large and growing larger, withstanding the trend of hybrid work environments. According to the Smart Buildings from Fortune Insights, the global smart building market is projected to grow from $68 billion in 2021 to $265 billion in 2028. Despite the size and credibility of the market, myths about smart buildings still flourish. In this article, we’ll explore five of the most common misconceptions about smart buildings and debunk them.
Myth One: Smart Buildings Only Reduce Energy Use
Despite modern smart buildings’ advanced capabilities, most U.S. employees associate smart buildings with energy efficiency and environmental monitoring. This view is surprising given the history of smart buildings.
Smart buildings first came to fruition between 1975 and 1985, during the energy crisis. At this point in time, energy prices increased at historically high levels: increasing more than 8% per year on average. To reduce costs, building owners invested in automated processes to control heating, air conditioning, and ventilation. These smart buildings of their time reduced energy consumption and decreased costs, but their systems still worked in silos.
With the creation of the Worldwide Web in the 1990s, building systems went online and became more integrated, but the focus continued to be energy efficiency and cost reduction. In the 2000s, true smart buildings evolved with wireless connectivity, Internet of Things (IoT) devices, and advanced analytics: giving building owners remote control capabilities and valuable insights into their buildings’ systems, almost in real-time.
Today, smart building capabilities go well beyond energy management and environmental monitoring. Some modern smart buildings include Critical Event Management (CEM) systems with advanced technological capabilities. With CEM, organizations can anticipate, detect, manage, and mitigate the impact of threats while providing building automation and control of the building’s systems from a single pane of glass.
Smart or intelligent buildings are adaptable and resilient due to their advanced, user-friendly, and robust technology: improving employee experiences. Once employees understand smart building capabilities, 93% agreed smart buildings need to be more than energy efficient; those polled also believe buildings need to be more secure — keeping the people in them safe by leveraging smart building technology.
Learn more about how Critical Event Management solutions can not only decrease costs and reduce energy use but also increase the safety, security, and comfort of building occupants in this report from independent research firm, Verdantix.
Myth Two: Employees Don’t Care about Smart Buildings
Many building owners believe employees don’t care about smart buildings or smart cities. They may be right given employees’ belief that smart buildings only reduce energy use. However, employees are very much concerned about safety. According to a 2021 survey, employees rated personal safety as one of their top requirements for the workplace.
Once employees learn about smart building safety capabilities, they have an overwhelmingly positive reaction. 89% of U.S. employees agreed that smart building technology can improve building safety and security, making them feel safer. 93% agreed that smart building technology can better respond to serious events like cyber and physical attacks.
Smart buildings also make employees feel better about their work. 92% of U.S. employees agreed that working in a smart building would make them feel better about their employer, and 81% would be more satisfied with their job.
Outside of work, smart buildings and smart cities make people feel more confident too. 90% of those polled said they would be more likely to enjoy leisure activities like concerts, sporting events, and shopping malls in smart buildings. 88% said they’d be more likely to travel if they knew the airport was a smart building.
Smart buildings are an important reassurance for employees anxious about returning to work during the COVID-19 pandemic. 89% of U.S. employees agreed that they would feel better returning to work if their employer was located in a smart building that monitored the environment in the building and capped capacity to prevent overcrowding.
So how can buildings deliver what employees want? By adding CEM technology to smart buildings, building owners can tie multiple legacy systems together: providing managers with a single integrated view of internal and external threats. They can also improve the building’s cost efficiency while providing a safe, secure, and comfortable environment for the building’s occupants.
Employers are paying attention, especially as they struggle to attract or hold on to talent during 2021’s Great Resignation. In a 2021 global survey by independent research firm Verdantix, 50% of firms said they were planning on investing in software solutions that support a better employee workplace experience in the next two years.
Providing an environment where people feel safe is crucial to the success of any organization. Smart buildings with CEM systems can provide the secure, comfortable experience that employees want, and employers need to attract them.
Learn more about how Critical Event Management solutions can improve employee experience in this report from independent research firm, Verdantix.
Myth Three: Hackers Love Smart Buildings
There is an element of truth to this myth as smart buildings are full of technology hackers like to take advantage of. But not all smart buildings and smart building technology are vulnerable to hackers. In fact, buildings that incorporate CEM systems into their cyber-security defenses are much less vulnerable to cyber-attacks.
Over the last ten years, businesses have experienced increased disruptions, particularly from severe weather, civil unrest, physical threats, and pandemics. However, the greatest increase in disruption is cyber-attacks – and the threat level continues to increase.
According to Forbes, cyber-attacks on companies, governments, and individuals broke all records in 2020. And, in a September 22nd, 2021 statement before Congress, the FBI warned about increases in almost every kind of threat, from cyber-attacks to domestic terrorism to international threats.
Shifting to remote work because of COVID-19 has played a part in this rise, but another significant factor has been the accelerated adoption of wireless Internet of Things (IoT) solutions in the workplace. These solutions have brought many benefits to facility managers, owners, and occupants, ranging from enhanced building automation, safety, and security. However, if not properly protected, IoT devices can be hacked and used as a gateway into more critical systems.
Just because a building uses smart devices doesn’t mean it’s smart. Adding multiple smart devices without securing them is the exact opposite of what a smart building should do, though it is not uncommon.
For example, a building with thousands of sensors managed according to comprehensive cyber security protocols is less vulnerable to hackers than a building with only a handful of unsecured sensors. Smart buildings with CEM achieve a higher level of integration and communication, helping them ensure resilience against cyber-attacks and other threats at a higher level. With a CEM system, smart buildings can quickly identify threats, protect their employees and assets, take immediate control of their systems, mitigate damage, and return to full operational capacity.
Companies that recognize the growing threat level are investing in protecting their people and assets. According to Ms. Trinquet, “Verdantix just interviewed nearly 300 organizations globally and we heard that enhancing cybersecurity risk management was a high priority. And so really the big key takeaway for companies is the need to have a holistic view into building security across both physical and cyber risks in one lens to really ensure business continuity.”
The number of cyber threats is undoubtedly on the rise, and hackers are finding ways to exploit new vulnerabilities every day. Buildings with multiple smart devices may be vulnerable, but smart buildings with CEM can help keep physical and digital assets secure and people safe.
Learn more about how Critical Event Management solutions can create safer, more resilient buildings in this report from independent research firm, Verdantix.
Myth Four: Smart Buildings are Just Larger Smart Homes
With the introduction of wireless connectivity, companies like Apple, Google, and Amazon, have launched sophisticated consumer platforms allowing homeowners to upgrade to a smart home by adding IoT devices. But is smart home technology the same as smart building technology?
In a smart home, homeowners use technology to control heat, light, and air conditioning while increasing energy efficiency. Advancements in smart home technology include security features like cameras, alarms, audio/video capabilities, and the ability to control physical features like window shutters and blinds.
As useful as these home devices are, smart building technology does much more. A smart building with CEM uses hundreds (if not thousands) of wireless devices from different vendors that hyper-connect and cooperate under a single platform.
Where a smart home may have a ‘smart lock’ or a ‘smart doorbell’ with one intercom, smart buildings are likely to have multiple physical access control systems, each with an intercom system, all integrated into a central building access control point.
While the technologies used in smart homes and smart buildings may be different, all systems used require integration for optimum performance. Smart buildings, particularly those with a Critical Event Management system, give building managers a common platform to monitor internal environments: creating a healthy space for their occupants by preventing or responding to cyber and physical attacks, reducing long-term building operating costs, and allowing tenants to interact with systems to get hyper-local information about their building.
With such vibrant features, occupants and employees appreciate the benefits of smart buildings. After learning more about intelligent building features and technology, 87% of U.S. employees surveyed said that it was important to work in a smart building.
Smart buildings and smart homes both have features that improve the comfort of their occupants. Smart buildings with CEM go far beyond those features to provide their occupants with a safe, secure, comfortable, and engaging place in which to work.
Myth Five: Old Buildings Can’t be Smart Buildings
Today’s modern smart buildings, particularly those with Critical Event Management, use advanced technology to create a safe and comfortable environment for companies and their employees. Before wireless networks became available, it was generally too expensive to upgrade an older building to a smart building. Older smart devices needed to be physically connected, which was costly and time-consuming.
But now, using wireless IoT solutions, historic buildings can be upgraded to smart buildings. And, because older buildings tend to be far less integrated and automated, installing smart building technology can substantially reduce short-term operating costs. In the long-term, upgrading an older building will reduce maintenance costs, improve occupant safety and security, and increase employee satisfaction.
By introducing a CEM platform that works in addition to individual smart building technology, managers can extend the useful life of existing systems. CEM can also help older buildings upgrade to digital by connecting existing devices, sensors, and systems to the cloud wirelessly.
Advanced CEM systems may include benefits such as:
- Single pane of glass views that integrate building safety and security systems together into one unified control system
- Software applications that allow managers to analyze and enhance legacy systems
- Remote monitoring and control capabilities for building systems
- Tools to manage risks across physical and digital environments
- Automated workflows and communication during critical events
With new IoT technology, building owners can cost-effectively upgrade older buildings to be more efficient, secure, and comfortable for their occupants without the financial and environmental cost of tearing them down.
Learn more about how Internet of Things devices can help older buildings to upgrade to smart buildings in this report from independent research firm, Verdantix.
Smart buildings have a long history of improving energy efficiency while reducing operating costs. However, smart buildings with a Critical Event Management system can do even more than their predecessors. With CEM, smart buildings can help organizations anticipate, detect, manage, and mitigate the impact of potentially devastating events, keeping people and assets safe, and companies operating effectively. | <urn:uuid:f7b0894e-7970-4734-b50c-8aaf83161aa0> | CC-MAIN-2022-40 | https://www.everbridge.com/blog/five-myths-about-smart-buildings/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00778.warc.gz | en | 0.957662 | 2,535 | 2.703125 | 3 |
Cyber security, computer security, or information technology security relates to the protection of computer systems. It deals with protecting digital devices from theft or any kind of damage to electronic data. Cyber security is becoming one of the major challenges in this contemporary world due to increasing reliance on smart devices that constitute the “Internet”, such as Bluetooth, Wi-Fi, and other internet wireless network standards. Hence, there comes a need for digital evidence collection in cyber security.
In today’s time, every person who is using the internet has to face alarming cyber risks. The risk is considered at a higher pace if there is no verified online security at your workstation or home. Life of the people is really not safe as they blindly rely on this internet world in different aspects such as shopping email, messaging, social media, etc.
What is Electronic Evidence in Cyber Forensics?
Digital evidence is the kind of information in binary form which is mainly associated with e-crimes. During cybercrimes, it is the information that is derived from digital devices to get the other pieces of evidence regarding the crime.
Since computers and mobile phones are mainly used to commit crimes. Hence, one’s mobile phone files or system data can reveal a lot about the intention and workflows of that person. So, the law enforcement agencies started to do a forensics investigation of the suspect’s digital devices to investigate the crime scene. Doing this will help them to implement digital evidence collection in cyber security. Furthermore, to carry out an in-depth investigation for the gathered crucial information, users can make the best use of computer forensics tools.
Digital forensics is the process of identifying the digital evidence which is further used by the court of law. It is a science of finding the digital evidence within a process to analyze, inspect, identify and preserve digital evidence associated with electronic devices. It provides the best techniques and tools for the forensic team to resolve complicated digital cases.
Digital Evidence Collection in Cyber Security – Challenges Faced
Here are some of the major challenges that could be faced by the forensics examiner while collecting the evidence:
- No. of PC and extensive use of internet access can increase the difficulty during the investigation process.
- Tools and software to trace the hacking are not easily available.
- Lack of physical evidence can make the prosecution process difficult.
- Large storage space in Terabytes can make the examination process vast and difficult.
- Must be adaptive to the present situation. For instance, any changes in the technology may lead to up-gradation of certain techniques.
Digital Forensics Process Model
The process of digital forensics entails the following steps to gather or handle digital evidence:
- Reporting and Documentation
Let’s Discuss Each One of Them in Detail for Digital Evidence Collection in Cyber Security
- Recognize the purpose of the investigation.
- Identify and collect the resources required in the investigation.
- Isolate the data files or devices for examination.
- Secure the files to inspect data.
- Preservation of data for investigation.
- Identify the techniques and tools to use to analyze digital evidence.
- Process data and interpret the possible results.
4. Reporting and Documentation
- Prepare documentation of the complete crime scene.
- Get a conclusion with the help of gathered facts.
- Summarization and explanation of the process.
Importance of Evidence Gained During the Investigation
In the digital forensic examination process, the most important competency for the one who conducts the investigation is to gather and examine different types of evidence. Several types of evidence can be gathered from digital devices that can help the investigator to make wise decisions during the case.
First Rule: If the evidence is not related to the case, then it is not relevant evidence. It must be appropriate to the investigation for the admissibility of court.
There are many types of evidence that are not admissible to court, but they are valuable for the investigation to reach a conclusion. Some artifacts are even not admissible in their own way, but they may be admissible in conjunction with other evidence.
Advantages of Digital Evidence Forensics in Cyber Security
Below mentioned are some advantages of Digital Evidence Collection in Cyber Security:
- It ensures the integrity of computer systems and other digital devices.
- When producing this evidence in court, the culprit will be punished.
- If systems & networks are compromised in an organization then it can be helpful in capturing important information.
- It helps to track down cybercriminals across the world, efficiently.
- Extract, process, and interpret the evidence in the court, so that it proves the action of the criminal.
For digital evidence collection in cyber security, the investigator needs to follow a proper procedure that helps to capture the perpetrator. Understanding this blog leads to efficiently recognizing the crime scenario by following the different stages which are incorporated in the digital forensics collection process. As a result, the gathered evidence is admissible in a court of law. | <urn:uuid:2e0adcb4-50bd-4962-943a-b02dfb1897dd> | CC-MAIN-2022-40 | https://www.mailxaminer.com/blog/digital-evidence-collection-in-cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00778.warc.gz | en | 0.906758 | 1,059 | 3.3125 | 3 |
Earlier this year Google started working with Autism advocacy group Autism Speaks on a project called Mssng (Pronounced “missing,” to symbolize the missing knowledge surrounding the disorder.) Previously known as The Autism Speaks Ten Thousand Genomes Program (AUT10K,) it is an open source research platform for autism that aims to collect and study the DNA of 10,000 families that have been affected by autism. The goal is to create the world’s largest database of sequenced genomic information of Autism run on Google’s cloud-based genome database, Google Genomics. Though the pair have been working together on this since June of this year, they have recently announced a launch that will allow worldwide access to autism research for scientists.
Over 1,000 genomes have already been sequenced with 2,000 more pending. The results from the first 100 have been published in the American Journal of Human Genetics in July 2013.
On the official press release it was stated, “Once completed, this historic program could lead to uncovering various forms of autisms, like the various forms of cancers today. This in turn could lead to individualized treatments and therapies for those with autism”
This project has already sparked some strong reactions. One blog post by Amy Sequenzia on the Autism Womens Network was titled Is Autism Speaks a Hate Group? From the get go it makes some strong statements. It reads “I know they are hateful. I know they don’t like Autistics. I know they use most of their resources to convince people that the world would be a better place without us, and actually using the money to fund research that can make our extinction possible. Disguised as “research” about babies and siblings, the investments seek to find a genetic marker that will allow parents to terminate an unwanted pregnancy”
This is not the first time concerns were raised against Autism Speaks. A few months ago voices were raised against a partnership between Autism Speaks and Sesame Street. Lei Wiley-Mydske Said, “I am an autistic parent to an autistic child. I grew up watching [Sesame Street], and my son has too. I encourage you to end any partnership with Autism Speaks if you wish to truly celebrate diversity. I would be THRILLED to see autism portrayed on your show, but not when the message is one of despair and fear. Or one that would pretend that disability is shameful. In developing this project, please remember that autism is not something we do to our families, or our communities or anyone else. Autism is how my brain works and interprets the world around me. Our stories deserve to be told with US doing the telling. Not with someone else speaking for us. Again. Please don’t allow a group like Autism Speaks continue to dominate the conversations about autism. They do us harm. You do us harm when you partner with a hate group who wants to prevent people like my son and I from existing and spreads harmful and dehumanizing rhetoric about us.” There is even a ‘Boycott Autism Speaks’ Facebook page.
According to the American Psychological Association website “autism involves impairments in social interaction — such as being aware of other people’s feelings — and verbal and nonverbal communication. Some people with autism have limited interests, strange eating or sleeping behaviors or a tendency to do things to hurt themselves, such as banging their heads or biting their hands.” The symptoms of people with ASD fall on a continuum, with some individuals showing mild symptoms and others having much more severe symptoms. The spectrum contains subgroups of Asperger’s syndrome, pervasive developmental delay (PDD) and autistic disorder. Autism is said to currently affect one in 68 individuals in the U.S., and one in 42 boys.
The Autism Spectrum Disorder (ASD) has had increasing exposure within the mainstream media over the past few years which give a much broader perspective into the disorder than the hit film “Rainman”. Some examples include the 2009 movie “Adam,” where the lead character of the same name who has Asperger’s, navigates a relationship with a neighbor. There are also two lead characters in the hit TV show “Parenthood” who are on different levels of functionality of the disorder. The best-selling novel turned Broadway show, “The Curious Incident of a Dog in the Nighttime” that features a teenager who is described as having Asperger syndrome/ high-functioning autism. This play is what comedian Jerry Seinfeld attributes for identifying his autistic tendencies. In an interview he said. “I think, on a very drawn-out scale, I think I’m on the spectrum…Basic social engagement is really a struggle. I’m very literal, when people talk to me and they use expressions, sometimes I don’t know what they’re saying. But I don’t see it as dysfunctional, I just think of it as an alternate mindset.”
Autism Speaks, was founded by former vice chairman of General Electric, NBC and NBC Universal, Bob Wright with his wife Suzanne after one of their grandchildren was diagnosed with autism.
Dr. Stephen Scherer, a world-renowned geneticist is the director of MSSNG. He has previously launched the Database of Genomic Variants, the world’s first and most-used database of copy number variants (CNVs). This has enabled numerous medical geneticists and physicians to making hundreds of thousands of medical diagnoses. The hope for MSSNG is to lead to breakthroughs into the causes, subtypes and better diagnosis and treatment for the disorder Autism spectrum disorders (ASD). | <urn:uuid:aeff7158-5d38-4e22-a0a4-a6582ff33135> | CC-MAIN-2022-40 | https://cloudwedge.com/news/4891-mssng-project-googles-partnership-with-controversial-autism-speaks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00778.warc.gz | en | 0.964973 | 1,185 | 2.671875 | 3 |
People are scared of the unknown. So naturally, one reason why artificial intelligence (AI) hasn’t yet been widely adopted may be because the rationale behind a machine’s decision-making is still unknown.
The Black Box of AI
How can decisions be trusted when people don’t know where they come from? This is referred to as the black box of AI—something that needs to be cracked open. As technology continues to play an increasingly important role in day-to-day life and change roles within the workforce, the ethics behind algorithms has become a hotly debated topic.
Medical practitioners are thought to be among the first who will benefit greatly from AI and deep learning technology, which can easily scan images and analyze medical data, but whose decision-making algorithms will only be trusted once people understand how conclusions are reached.
Key thinkers warn that algorithms may reinforce programmers’ prejudice and bias, but IBM has a different view.
IBM claims to have made strides in breaking open the block box of AI with a software service that brings AI transparency.
Making AI Algorithms More Transparent
IBM is attempting to provide insight into how AI makes decisions, automatically detecting bias and explaining itself as decisions are being made. Their technology also suggests more data to include in the model, which may help neutralize future biases.
IBM previously deployed an AI to help in decision-making with the IBM Watson, which provided clinicians with evidence-based treatment plans that incorporated automated care management and patient engagement into tailers plans.
Experts were quick to mistrust the model as it didn’t explain how decisions were made. Watson aided in medical diagnosis and reinforces doctor’s decisions, but the hopeful technology would never replace the doctor. When Watson provided an analysis in line with the doctors, it was used as a reinforcement measure. When Watson differed, it was wrong.
But the company’s latest innovation, which is currently unnamed, appears to tackle Watson’s shortfalls. Perhaps naming it Sherlock would be fitting.
Open-Source and Ethical AI
It’s important to increase transparency not just in decision-making but also in records of the model’s accuracy, performance and fairness are easily traced and recalled for customer service, regulatory or compliance reasons, e.g. GDPR compliance.
Alongside the announcement of this AI, IBM Research also released an open-source AI bias detection and mitigation toolkit, bringing forward tools and resources to encourage global collaboration around addressing bias in AI.
This includes a collection of libraries, algorithms, and tutorials that will give academics, researchers, and data scientists the tools and resources they need to integrate bias detection into their machine learning models.
While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit claims to check for and mitigate bias in AI models.
“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”— David Kenny, IBM’s SVP of Cognitive Solutions.
What could this mean for medical practitioners? The new technology may open an array of problems with its implementation as policy still has to catch up with tech. Who is liable for issues following a wrong diagnosis: the doctor or the AI? After a proven track-record of correct diagnosis, how does a person go against the software? How is a gut feeling justified? | <urn:uuid:76334135-85a6-419d-bf38-f9f388710c1e> | CC-MAIN-2022-40 | https://www.iotforall.com/artificial-intelligence-can-explain-own-decision-making | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00778.warc.gz | en | 0.948468 | 737 | 3.515625 | 4 |
NASA is asking industry for ideas on how to build satellite technology for advanced communications, imaging and robotic science exploration as part of the agency’s Journey to Mars project.
The agency said Friday it aims to launch a Mars orbiter sometime in the 2020s that will work to facilitate bandwidth communications, high-resolution imaging and radio-frequency data transmission.
“Currently, we depend on our orbiting science missions to perform dual service in making measurements and acting as communication relays, but we can't depend on them to last forever,” said John Grunsfeld, a NASA astronaut and associate administrator of the agency’s science mission directorate.
“Our success in exploring Mars, to unravel the mysteries of the Red Planet, depends on having high bandwidth communication with Earth and overhead imaging,” Grunsfeld added.
NASA’ Jet Propulsion Laboratory oversees pre-formulating planning for the initiative and plans to award up to $400,000 for each concept study that a chosen contractor will perform over a four-month period.
The agency said it is examining how to implement the Mars orbiter mission in partnership with international organizations. | <urn:uuid:9c9f0ac4-a619-4046-9d92-08127f2feff9> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2016/04/nasa-issues-solicitation-for-mars-orbiter-design-ideas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00778.warc.gz | en | 0.904085 | 235 | 2.984375 | 3 |
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Why do we sleep? One obvious reason is to restore the strength of our bodies and limbs. But another very important role of sleep is to consolidate memories and organize all the information that your brain has ingested while being awake. People who lack proper sleep see their cognitive abilities degrade and their memories fail.
The wonders and mysteries of sleep remain an active area of research. But aside from medicine, psychology, and neuroscience, the study of sleep serves other fields of science. Artificial intelligence researchers are also looking into the work done in the field to develop AI models that are more efficient at handling data over longer timespans.
Recent work by AI researchers at DeepMind leverages the study of the brain and the mechanisms of sleep to tackle one of the fundamental challenges of natural language processing (NLP): dealing with long-term memory.
AI’s struggling language memory
The human brain has a very fascinating way to organize memory. We can manage different lines of thought over a long period. Consider this hypothetical example: You wake up in the morning and spend 45 minutes reading a book about cognitive science. An hour later, you skim over the news and read a few articles. In the afternoon, you resume your study of a new AI research paper, which you had started a few days ago, and take notes for a future article. During your routine daily exercise, you listen to a science podcast or an audiobook. And at night, before going to sleep, you open a fantasy novel and pick up where you had left off the previous night.
You don’t need to be a genius to do this (this is roughly what I do every day, and I don’t claim to be smarter than the average person). In fact, most of us handle an even more diverse set of information every day. And what’s interesting is that not only is our brain able to preserve and manage these buckets of information, it can do so over long periods: days, weeks, months, and even years.
In recent years, AI algorithms have gradually become better at maintaining consistency over lengthier streams of data, but they still have a long way to go before they can match the skills of the human brain.
The classic type of machine learning construct used for handling language is the recurrent neural network (RNN), a type of artificial neural network designed to deal with temporal consistency in data. An RNN trained on a corpus of data—say a large dataset of Wikipedia articles—can perform tasks such as predicting the next word in a sequence or finding the answer to a question (given the question is directly answered in the training data).
The problem with earlier versions of RNNs was the amount of memory they needed to handle information. The longer the sequence of data the AI model had to process, the more memory it needed. This limit was mainly because, unlike the human brain, neural networks don’t know which parts of the data they should keep and which parts they can discard.
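To make this concrete, below is a minimal sketch of a recurrent next-word predictor in Python with PyTorch. Everything in it is an illustrative assumption rather than a detail from any particular paper: the vocabulary size, layer dimensions, and toy input sequence are made up for the example.

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """A bare-bones recurrent language model for next-word prediction."""
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids, state=None):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        x, state = self.rnn(x, state)    # (batch, seq_len, hidden_dim)
        return self.out(x), state        # logits over the vocabulary

model = RNNLanguageModel()
tokens = torch.randint(0, 10_000, (1, 20))  # a toy 20-token sequence
logits, state = model(tokens)
predicted_next_token = logits[0, -1].argmax()
```

The recurrent state is the model’s only window onto earlier tokens, and nothing in this architecture decides which parts of a long history deserve to be kept and which can safely be forgotten.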
Extracting the important bits of information
Consider this: When you read a novel, say The Lord of the Rings, your brain doesn’t memorize all the words and sentences. It is optimized to extract meaningful information from the story, including the characters (e.g., Frodo, Gandalf, Sauron), their relations (e.g., Boromir is Frodo’s friend—almost), locations (e.g., Rivendell, Mordor, Rohan), objects (e.g., The One Ring, Anduril), key events (e.g., Frodo throws the One Ring in the heart of Mount Doom, Gandalf falls into the pit of Khazad Dum, the battle of Helm’s Deep), and maybe a few very important bits of dialog in the story (e.g., not all that glitters is gold, not all those who wander are lost).
This small amount of information is crucial to being able to follow the story’s plotline across all four books (The Hobbit and all three volumes of The Lord of the Rings) and 576,459 words.
AI scientists and researchers have been trying to find ways to embed neural networks with the same kind of efficient information handling. One great achievement in the field has been the development of “attention” mechanisms, which enable neural networks to find and focus on the more important parts of data. Attention has enabled neural networks to handle larger amounts of information in a more memory-efficient manner.
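At its core, attention is a weighted lookup over the input. Here is a minimal sketch of scaled dot-product attention, the variant popularized by Transformers; the tensor shapes and random inputs are illustrative assumptions.

```python
import math
import torch

def attention(query, key, value):
    # Score every position against every other, normalize the scores,
    # and take a weighted sum of the values.
    scores = query @ key.transpose(-2, -1) / math.sqrt(query.size(-1))
    weights = torch.softmax(scores, dim=-1)  # how much to focus on each position
    return weights @ value

seq_len, dim = 8, 64
x = torch.randn(seq_len, dim)  # toy representations of 8 tokens
out = attention(x, x, x)       # self-attention: every token attends to all tokens
```

Because the softmax weights concentrate on the positions most relevant to each query, the network can, in effect, decide which parts of a sequence deserve its limited capacity.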
Transformers, a type of neural network that has become increasingly popular in recent years, have put the attention mechanism to efficient use, allowing AI researchers to create larger and larger language models. Examples include OpenAI’s GPT-2 text generator, trained on 40 gigabytes of text, Google’s Meena chatbot, trained on a 341-gigabyte corpus, and AI2’s Aristo, a deep learning algorithm trained on 300 gigabytes of data to answer science questions.
All these language models manifest remarkable consistency over longer sequences of text than previous AI algorithms. GPT-2 can frequently (but not always) spit out fairly coherent text that spans several paragraphs. Meena has not been released yet, but the sample data that Google has made available show interesting results in conversations that go beyond simple queries. Aristo outperforms other AI models at answering science questions (though it can only answer multiple-choice questions).
However, what’s obvious is that language-processing AI still has room for a lot of improvement. For the moment, there’s still a drive to improve the field by creating larger neural networks and feeding them even bigger and bigger datasets. Clearly, our brains don’t need—and don’t even have the capacity for—hundreds of gigabytes worth of data to learn the basics of language.
Drawing inspiration from sleep
When memories are created in our minds, they start as a jumble of sensory and cognitive activities encoded across different parts of the brain. This is the short-term memory. According to neuroscience research, the hippocampus collects activation information from neurons in different parts of the brain and records it in a way that becomes accessible memory. It also stores the cues that will reactivate those memories (a name, smell, sound, sight, etc.). The more a memory is activated, the stronger it becomes.
Now here’s where sleep comes into play. According to Marc Dingman, the author of Your Brain Explained (I strongly recommend reading it), “Studies have found that the same neurons that are turned on during an initial experience are reactivated during deep sleep. This has led neuroscientists to hypothesize that, during sleep, our brains are working to make sure important memories from the previous day get transferred into long-term storage.”
DeepMind’s AI researchers have taken inspiration from sleep to create the Compressive Transformer, a language model that is better suited for long-range memory. “Sleep is known to be crucial for memory, and it’s thought that sleep serves to compress and consolidate memories, thereby improving reasoning abilities for memory tasks. In the Compressive Transformer, granular memories akin to episodic memories are collected online as the model passes over a sequence of inputs; over time, they are eventually compacted,” the researchers write in a blog post that accompanies the full paper of the Compressive Transformer.
Like other variations of the Transformer, the Compressive Transformer uses the attention mechanism to choose relevant bits of data in a sequence. But instead of discarding old memories, the AI model removes the irrelevant parts and combines the rest by keeping the salient bits and storing them in a compressed memory location.
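As a rough sketch of that idea (a deliberate simplification for illustration, not DeepMind’s actual implementation), old activations that would normally be thrown away can instead be squeezed into a smaller buffer, for instance by averaging groups of neighboring memories:

import numpy as np

class ToyCompressiveMemory:
    # Illustrative only: recent states go into a short-term buffer; instead of
    # discarding the oldest ones, they are compacted (here by simple averaging)
    # into a long-term compressed buffer that attention can still read from.
    def __init__(self, short_term_size=6, compression_rate=3):
        self.short_term_size = short_term_size
        self.compression_rate = compression_rate
        self.short_term = []   # recent, uncompressed states
        self.compressed = []   # older states, compacted

    def add(self, state):
        self.short_term.append(state)
        if len(self.short_term) > self.short_term_size:
            oldest = self.short_term[:self.compression_rate]
            self.short_term = self.short_term[self.compression_rate:]
            self.compressed.append(np.mean(oldest, axis=0))

    def memory(self):
        # A model would attend over both: compressed (distant past) + recent past.
        return self.compressed + self.short_term

mem = ToyCompressiveMemory()
for t in range(20):
    mem.add(np.full(4, float(t)))      # stand-ins for hidden states
print(len(mem.memory()), "memory slots kept instead of 20")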
According to DeepMind, the Compressive Transformer shows state-of-the-art performance on popular natural language AI benchmarks. “We also show it can be used effectively to model speech, handles rare words especially well, and can be used within a reinforcement learning agent to solve a memory task,” the AI researchers write.
What’s relevant, however, is that the AI improves performance on the modeling of long text. “The model’s conditional samples can be used to write book-like extracts,” DeepMind’s researchers write.
The blog post and paper contain samples of the Compressive Transformer’s outputs, which are pretty impressive in comparison to other work being done in the field.
Language has not been solved yet
Compression is not the same thing as filing away important components. Let’s go back to the Lord of the Rings example to see what this means. For instance, after reading the chapter where the fellowship holds a council at Elrond’s house, you don’t necessarily remember every word of the exchange between the attendants. But you remember one important thing: While everyone is quarreling about how to decide the fate of the One Ring, Frodo steps forth and accepts responsibility to throw it in Mount Doom. Therefore, to compress information, the mind seems to transform it when storing away memories. And that transformation continues as memories become older.
Clearly, there is some sort of pattern recognition that enables the Compressive Transformer to find the relevant parts that should be stored in the compressed memory segment. But it remains to be seen whether these bits of data are equivalent to the elements mentioned in the example above.
The challenges of using deep learning algorithms to process human language have been well documented. While statistical approaches can find interesting correlations and patterns in large corpora of data, they can’t perform some of the subtler tasks that require knowledge beyond what is contained in the text. Things like abstraction, commonsense, background knowledge and other aspects of intelligence that allow us to fill the blanks and extract the implicit meanings behind words remain unsolved with current approaches in AI.
As computer scientist Melanie Mitchell explains in her book Artificial Intelligence: A Guide for Thinking Humans, “It seems to me to be extremely unlikely that machines could ever reach the level of humans on translation, reading comprehension, and the like by learning exclusively from online data, with essentially no real understanding of the language they process. Language relies on commonsense knowledge and understanding of the world.”
Adding those elements will enable AI models to deal with the uncertainties of language. Cognitive scientist Gary Marcus told me last year, “Except for a few small sentences, almost every sentence you hear is original. You don’t have any data directly on it. And that means you have a problem that is about inference and understanding. The techniques that are good for categorizing things, putting them into bins that you already know, simply aren’t appropriate for that. Understanding language is about connecting what you already know about the world with what other people are trying to do with the words they say.”
In Rebooting AI, Marcus and his co-author, New York University professor Ernest Davis, write, “Statistics are no substitute for real-world understanding. The problem is not just that there is a random error here and there, it is that there is a fundamental mismatch between the kind of statistical analysis that suffice for translation and the cognitive model construction that would be required if systems were to actually comprehend what they are trying to read.”
But compression might help us find new directions in AI and language modeling research. “Models which are able to capture relevant correlations across days, months, or years’ worth of experience are on the horizon. We believe the route to more powerful reasoning over time will emerge from better selective attention of the past, and more effective mechanisms to compress it,” the AI researchers at DeepMind write. | <urn:uuid:8fcfce4d-3740-4fbb-9aae-545c320ba797> | CC-MAIN-2022-40 | https://bdtechtalks.com/2020/02/17/deepmind-compressive-transformer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00778.warc.gz | en | 0.941247 | 2,445 | 3.40625 | 3 |
In today’s world, information systems are incredibly interconnected, but this comes with a price. Because most organizations conduct some portion of their business in cyberspace, they open themselves up to a new level of risk. Who they are, what they do, and what information they possess can make businesses targets for malicious attackers. Reputation damage, disruption of business operations, fines, litigation, and loss of business can all be consequences of a cybersecurity attack. It’s more important than ever to demonstrate the extent and effectiveness of your organization’s cybersecurity risk management program.
SOC for Cybersecurity is a reporting framework for cybersecurity risk management processes and policies. It is designed to help strengthen cybersecurity programs. Introduced and designed by AICPA, it’s a solution delivered by independent CPAs to review, examine, and provide feedback on existing cybersecurity controls and systems.
What is a Cybersecurity Risk Management Program?
It is the organization’s collection of processes, policies, and controls that are put in place by management in order to protect against cybersecurity threats. It includes the systems used (apps, networks, devices, etc.), the information stored on and transmitted between systems, and user habits.
A cybersecurity risk management program is designed to protect your company’s systems and information from security threats and incidents. It’s responsible for helping the cybersecurity team detect, investigate, react to, mitigate, and ensure recovery from security events. It also includes measures that are put in place to detect and stop attacks before they are able to cause any damage.
About SOC for Cybersecurity
SOC (System and Organization Controls) for Cybersecurity is an investigation and examination process by which an organization’s cyber risk management program can be measured, inspected, and reported on. As such, it’s designed to better communicate the effectiveness of existing IT security risk management efforts to analysts, directors, regulators, and other relevant parties. To put things simply, it acts as a “gut check” for an organization’s cybersecurity practices.
As a reporting process, SOC for Cybersecurity is designed to examine three elements in particular:
- Management’s Description: Prepared by management, this is a description of the overall scope of the cybersecurity risk management program. It includes major cybersecurity policies, processes, and procedures, ways that the organization manages these risks, and the methodology used to determine which systems and data are sensitive. This is designed to provide context to better understand the cybersecurity risk management program.
- Management’s Assertion: The management makes an assertion of how effective the cybersecurity risk management program is, as well as what the cybersecurity objectives of the organization are, and whether they are met or not.
- Practitioner’s Opinion: This is like the Management’s Assertion, but from the perspective of the CPA, on whether existing controls work in reaching the stated cybersecurity objectives.
It is important to note that the SOC for Cybersecurity doesn’t detail the controls, how they are tested, or the results of tests. Instead, it is meant for general use to show whether they are effective or not. Similarly, it does not focus on compliance with regulations and laws or privacy and processing integrity criteria. It does, however, examine the effectiveness of controls that do contribute to those topics.
Do you need the SOC for Cybersecurity examination?
Digital security threats are on the rise and organizations of all sizes and in all industries need to ensure effective cybersecurity. While other IT security examinations focus more closely on the security controls, testing methodology, and so on, SOC for Cybersecurity is meant to highlight the effectiveness of existing controls for the benefit of a more general audience. As such, it’s a way of informing management, board members, analysts, investors, clients, and regulators on the health of your cybersecurity systems.
It can also be used to highlight the strengths of existing cybersecurity systems, show potential vulnerabilities, and determine what is needed to correct them. Cybersecurity teams can better make a case for new solutions and technology that they need to further improve their cybersecurity.
The Benefits of SOC for Cybersecurity
SOC for Cybersecurity examinations create a framework and language for understanding cybersecurity issues that is common to both the IT team and those that they work with. Here are some more benefits:
- Improve communication between the IT and management: The SOC for Cybersecurity provides common ground for communicating on cybersecurity issues and controls so improvements and changes can be better implemented in the future.
- Proof of cybersecurity efforts: Like other certifications and examinations, SOC for Cybersecurity demonstrates the efforts of the IT team to create a safer security scope. Furthermore, this can help improve reputation and trust from customers, clients, and partners.
- Ensure improvements to your cybersecurity practices: During the SOC examination, you may discover issues that could lead to data breaches. Addressing these issues will improve your security.
- Help your customers and clients feel safe: The results of SOC for Cybersecurity can help your customers rest easy about the fact their data is in good hands.
How to get a SOC for Cybersecurity
A SOC for Cybersecurity examination is carried out in cooperation with an independent CPA approved by the AICPA to carry out the examination. It’s important to find a certified, qualified auditor. Besides carrying out the examination itself, they are able to help you better understand the scope and details of what the description and assertions include.
Businesses with an extensive cybersecurity scope, multiple devices, networks, apps, and SaaS solutions may have need for a SOC for Cybersecurity examination. It can help show the importance of investing further in IT security teams and the tools they use to secure the organization.
BitLyft is a SIEM provider and offers competitive pricing. We have a world-class team of professionals who will guard your data and ward off cyberattacks before they can cause damage to your infrastructure.
Our services aim to provide you with a simple no-nonsense solution to keep your business safe from online threats. If you’d like to learn more, don’t hesitate to get in touch with us today to speak to one of our friendly representatives.
You can also Request a Free Assessment.
We’ll help explain the services we offer and how they can be customized to your exact needs. | <urn:uuid:4aeb0b64-e3ae-4983-92bd-917e0ad2e1f1> | CC-MAIN-2022-40 | https://www.bitlyft.com/resources/soc-for-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00778.warc.gz | en | 0.94273 | 1,302 | 2.765625 | 3 |
- Sound is the vibration of something within air that causes waves, or pulses, of that air that can be measured by the number of waves per second.
- This is measured in Hertz, which is one wave per second.
- The human theoretical limits for hearing are 20 hertz (20 waves per second) and 20 kHz (20,000 waves per second).
- There isn’t much musical information beyond 10-15 kHz.
- Most humans can’t hear much beyond 15 kHz anyway, although our theoretical limit is 20 kHz.
- When you listen to a live band you’re hearing analog sound, meaning the mechanics of your ear are being directly influenced by the air coming off of instruments and vocal cords.
- When you listen through technology there’s a necessary analog to digital conversion that takes place, because computers can only do 1’s and 0’s.
- The trick is that when you go from analog to digital, you have many options.
- The main two options are how often you take a sample from the analog (real) thing, and how much dynamic range is possible within each sample.
- CD quality is known as 16/44, which is shorthand for 16-bit samples taken at 44.1 kHz, meaning the samples are taken 44.1 thousand times per second. This level is considered very high quality compared to most music streaming services from before 2018 or so, because most of them were encoding down to much lower quality to avoid buffering over mobile internet connections.
- A super-high-quality file or stream sits around 24/96, which can be found in offerings like MQA on Tidal.
- The extremes here are quality levels like 32/192 and beyond.
- You need to sample at twice the rate of the highest frequency, so if you want to capture 20 kHz you need to sample at 40 kHz or higher, hence 44.1 or 48 kHz.
- Ultimately these settings result in a dynamic range, which is how quiet and loud something can be, and specifically what the distance is between the quietest and loudest sound in a file.
- A 16-bit recording has a dynamic range of 96 decibels (dB), and a 24-bit file has a dynamic range of 144 dB (see the quick calculation after this list).
- Contrary to what many believe, a higher bit depth than 16 bits, or a higher sample rate than 44.1 kHz, does not give you better sound by itself. Those numbers already exceed the capabilities of human hearing, so going beyond them does nothing for the sound quality during playback.
- There are production reasons for tracking and mixing at 24 or 32 bits, however, which basically come down to giving yourself room to make mistakes.
Human ear sensitivity (the threshold of hearing) is about 10^-12 W/m².
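A couple of the numbers above are easy to sanity-check yourself: the rule of thumb is roughly 6 dB of dynamic range per bit, and you need a sample rate of at least twice the highest frequency you want to capture. A quick illustration in Python:

import math

def dynamic_range_db(bits):
    # Each extra bit doubles the number of amplitude steps, which adds
    # about 6.02 dB (20 * log10(2)) of dynamic range.
    return 20 * math.log10(2 ** bits)

def minimum_sample_rate(highest_frequency_hz):
    # Nyquist: sample at least twice the highest frequency you want to capture.
    return 2 * highest_frequency_hz

print(round(dynamic_range_db(16)))    # ~96 dB, the 16-bit figure above
print(round(dynamic_range_db(24)))    # ~144 dB for 24-bit
print(minimum_sample_rate(20_000))    # 40000 Hz, hence 44.1 or 48 kHz in practice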
- Decibels are measures of loudness, and they’re logarithmic, meaning they scale by the exponent, hence DECI-bel (10).
- Humans can hear between 0 and infinite decibels, but the scale is extreme.
- A whisper is 40 db, normal voice is around 60, a playground is around 80, 90 is where you start damaging your hearing, a loud concert might be around 100, a plane taking off is around 130, and you can evidently kill someone with around 160 to 180 decibels.
- There is tons of confusion in comparing audio sources, streaming music sources, etc., and many of the different file types.
- Encoding is the process of converting from one format to another, including from analog to digital.
- Whenever you encode you have the possibility of losing data, and this is especially true if you’re trying to make a smaller file.
- Going from CD quality (16/44) to a smaller file—for example because of limited bandwidth—is why we came up with encodings like mp3, which has different levels of quality.
- Low-quality mp3 files are very small, but sound worse because they strip away data from the original.
- As you get higher and higher quality of mp3, such as 256 and 320 kbps, you end up with less audio quality reduction, but larger file sizes.
- The highest quality comes from super high-quality recordings of the analog experience, which happens in the original studio recordings, and quality there can be as high as 32/192 (and higher). But again, these don’t give you better playback by themselves; they just help with production.
- It’s good to have this level of original to work with, which you can then encode downwards from for various uses, such as streaming.
If you sample at double the frequency you get the crest and the trough, so you can recreate the rest of the wave perfectly. There is no need whatsoever to go higher.
- When you go down from those super high quality levels to 16-bit, you do actually lose data in the various ranges, especially at higher frequencies.
- The part that is controversial is how much REAL WORLD effect the various encodings have on the actual listening experience.
- This is why there are so many philosophical debates within the music and audiophile communities, and this is also why I wrote this primer.
- The truth is this: there are so many variables in play in the equation of listener experience. Here are some of them:
- The quality of the original recording
- The quality of the encoding of the file you’re listening to
- The quality of the equipment and environment that you’re listening on
- Your own personal biases and psychological priming that’s currently in effect as you listen
- It’s established science that the human mind can easily be tricked into thinking something is better or worse based on what the person was told beforehand, or what they just experienced right before.
- So if you hear a horrible recording encoded into a tiny mp3 file, and go from that to a halfway decent situation, the jump might sound far more dramatic than a jump from decent to extraordinary.
- The human brain is a major factor in this equation, and it’s too often discounted as an explanation for differences
Analysis and takeaways
- When you are doing comparisons, as a human, between two different musical audio experiences, you have to consider the full stack of variables.
- Are you listening on the same equipment?
- Are you listening on the same app?
- Do those apps have different EQ settings built in that can radically change the sound?
- Do the two apps have different versions of the actual song, i.e., different recordings?
- Is one a different mastering of the song than the other?
- And finally—are you primed psychologically to hear one thing or the other?
- You should always suspect bias in yourself, and look for ways to reduce it
TL;DR: Optimize your entire chain, and be suspicious if you hear someone say that the difference in an experience comes from bit depths or sample rates above 16-bit/44.1 kHz. It’s probably one or more of the factors above.
- Logarithms take super large or super small numbers and turn them into nice numbers. A log(10) of 100,000 is 5 because 100,000 has 5 zeroes. The log(10) of one billion is 9 because it has nine zeroes. The cool part about this is that you can deal with massive numbers like 100K and 1 billion by working instead with 5 and 9. | <urn:uuid:e8c857fc-8449-43a4-8f73-a6086c5a0529> | CC-MAIN-2022-40 | https://danielmiessler.com/study/audio-quality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00778.warc.gz | en | 0.947724 | 1,571 | 3.921875 | 4 |
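To tie the logarithm idea back to decibels: a sound’s intensity level in dB is 10 times the log of the ratio between its intensity and the threshold of hearing mentioned earlier (about 10^-12 W/m²). A small worked example in Python:

import math

THRESHOLD_OF_HEARING = 1e-12   # watts per square meter, the 0 dB reference

def intensity_to_db(intensity_w_per_m2):
    # The logarithm compresses a huge range of intensities into friendly numbers.
    return 10 * math.log10(intensity_w_per_m2 / THRESHOLD_OF_HEARING)

print(intensity_to_db(1e-12))   # 0 dB   -> threshold of hearing
print(intensity_to_db(1e-6))    # 60 dB  -> roughly a normal voice
print(intensity_to_db(1e-2))    # 100 dB -> roughly a loud concert
print(intensity_to_db(1.0))     # 120 dB -> around the threshold of pain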
The Federal Trade Commission sent a report to Congress detailing limitations artificial intelligence has in regulating disinformation and harmful online content.
The Federal Trade Commission issued a warning regarding the government’s use of artificial intelligence technology to fight disinformation, deepfakes, crime and other online concerns, citing the technology’s inherent limitations with bias and discrimination.
Detailed in a report sent to Congress, officials at the FTC said that AI technology cannot play a neutral role in mitigating social problems online, specifically noting that using it in this capacity could give way to illegal data extraction from online users and conduct improper surveillance.
“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Director of the FTC’s Bureau of Consumer Protection Samuel Levine. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”
The report specifically highlights how rudimentary this technology still is, mainly because the datasets AI algorithms run on are not representative enough to successfully identify harmful content.
AI software developers’ biases are also likely to influence the technology’s decision-making, a longstanding issue within the AI industry. FTC authors also added that most AI programs cannot gauge context, further rendering them unreliable in distinguishing harmful content.
“The key conclusion of this report is thus that governments, platforms and others must exercise great caution in either mandating the use of, or over-relying on, these tools even for the important purpose of reducing harms,” the report reads. “Although outside of our scope, this conclusion implies that, if AI is not the answer and if the scale makes meaningful human oversight infeasible, we must look at other ways, regulatory or otherwise, to address the spread of these harms.”
Another critical observation the FTC arrived at is that human intervention is still needed to regulate the AI features that may inadvertently target and censor the wrong content. Transparency surrounding how the technology is built, mainly within its algorithmic development, is also highly recommended.
The report also noted that platforms and websites which host the circulation of harmful content should work to slow the spread of illegal topics or misinformation on their end. The FTC recommends instituting tools like downvoting, labeling or other targeting operations that aren’t necessarily AI-run censorship.
“Dealing effectively with online harms requires substantial changes in business models and practices, along with cultural shifts in how people use or abuse online services,” the report concluded. “These changes involve significant time and effort across society and can include, among other things, technological innovation, transparent and accountable use of that technology, meaningful human oversight, global collaboration, digital literacy and appropriate regulation. AI is no magical shortcut.”
The report stems from a 2021 law that asked the FTC to review how AI might be used to fight disinformation and digital crime. FTC Commissioners voted to send the report to Congress upon finalization in a 4-1 decision. | <urn:uuid:291d58d7-5013-44cc-ae50-5bc38979d802> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2022/06/ai-no-magical-shortcut-ftc-says-fighting-disinformation-online/368341/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00778.warc.gz | en | 0.926247 | 623 | 2.515625 | 3 |
Does new technology make us more free or less free?
It’s easy to say “both,” and point to examples on either side. And in fact, some uses of technology support the cause of individual liberty and others work against it.
I believe the biggest threat to our freedom in the long term is when governments or law enforcement agencies grab the exclusive right for themselves to use new technologies.
Here’s how it works. A new technology appears. Police say they can use it but citizens can’t. If this is accepted by the courts and the public, the government now has more power and citizens less.
Over time, the accumulation of these new powers upsets the balance of power between the state and the people, and our freedom is increasingly eroded.
One old example is the use of audio and video recording technology during police interrogations.
A century ago, no such recordings were made. Police interviewed suspects and witnesses. In court, it was the detective’s word against the suspect’s. But new tape recording and later video recording technology enabled a successful power grab by the police.
Police can record interrogations. Suspects cannot. As a result, both honest and abusive police have a technology advantage, and both innocent and guilty suspects have a disadvantage that didn’t used to exist before the technology existed.
Police can present any subset of an interrogation they like, or claim that it wasn’t recorded if the recording doesn’t support their case. The suspect has no such advantage, and is denied the opportunity to gather evidence of police misconduct.
There are much newer examples.
Local police spy and track via cell phone
A groundbreaking story in The New York Times this weekend revealed that hundreds of local police departments in the United States routinely spy on citizens via their cell phones, and track their locations, “with little or no court oversight.”
The article points out that cell phone carriers have set up profitable menus of services to offer to these departments, such as suspects’ locations, the tracing of texts and phone calls and others.
Any citizen requesting similar information would be denied.
It’s a new technology capability, and police are asserting a monopoly on its use. Even more disturbing is that this de facto monopoly on the use of these technologies is being granted not by congress or the courts, but by corporations.
We can also see new technology power grabs coming soon.
The drone wars
The widespread police use of remote-controlled drone aircraft predicted by just about everybody will raise some interesting legal questions. For example, can police use drones to peek into people’s backyards, or does that violate the Constitution’s 4th-Amendment protection against “unreasonable searches and seizures”?
One possibility is that, as with many technologies in the past, law enforcement agencies will be granted the exclusive right to use drone technology along with an exception to (or a new definition of) the 4th Amendment.
Another possibility was raised recently by John Villasenor, a senior fellow at the Brookings Institution in an NPR interview. He points out that a 2001 case established that when “the government uses a device that is not in general public use to explore details of the home that would previously have been unknowable without physical intrusion, the surveillance is a search.”
In that case, if drones grow in popularity among consumers to the point where they can be found to be “in general public use” then drone spying on backyards would not be “searches,” and would thereby not be banned by the Constitution.
In either of the cases, the police would retain exclusive use of the information gathered by drones, as well as the knowledge of exact details of the surveillance.
Are we going to accept this?
The bigger question is this: What should a free society do in order to safeguard its freedom as new technologies come online?
Unfortunately, the Constitution doesn’t mention video cameras, cell phones or drones. But it clearly attempts to prevent government exclusivity over and use and control of, say, media technology or gun technology.
Similarly, we need to work hard to prevent such exclusivity over new technologies not imagined by the framers of the Constitution.
For example, I believe suspects should have the right to record police interrogations if the police have that right. Why not?
I believe information about what data was gathered by police from the cell phone company should be available to the targets upon request.
And I believe information about what is being “droned” should be publicly available.
There are plenty of examples where exclusivity has been denied to authorities, and the need for public safety has been balanced successfully against the public’s right to know. For example, live police radio chatter is available to the public using radios, the Internet or even mobile apps. Police are not allowed exclusive use of the airwaves.
We should also be aware that law enforcement agencies will always try to give themselves exclusive use over every useful new technology. The public needs only to do nothing in order to slouch toward a police state.
Note that this phenomenon does not require any malicious intent on the part of the authorities. Police genuinely do and should want to use all tools at their disposal to catch crooks and terrorists.
But as a society, we need a principle of application that whenever government authorities are given permission to use a technology, the public must be given the ability to use that same technology in the other direction — or at least have access to or knowledge of what information has been gathered.
We must resist the provably incorrect assumption that all authorities are innocent and all suspects are guilty. Instead, permission to use any new technology must be based upon reality, in which authorities are capable of abuses and citizens can be wrongly accused.
The best way to approach future technologies as they come on line is to cautiously grant permission for governments and police agencies to use them — but only if citizens can use them, too. | <urn:uuid:64c3d071-3ea4-44fa-99b8-c45a1e4a920a> | CC-MAIN-2022-40 | https://www.datamation.com/security/how-to-stop-cops-from-abusing-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00178.warc.gz | en | 0.947351 | 1,247 | 2.59375 | 3 |
“Consider this: An operator at a water treatment facility presses a button to add a certain measurement of chemicals to untreated water. Instead of doing so, the computer dumps twice the amount of chemicals, an amount way above the maximum safety zone. The resulting excess causes poisonous toxins in the water and when distributed to individual homes, entire communities fall ill. Investigators and the public are left asking, ‘How did this occur?’ The answer: a computer bug known as a Trojan horse.” From “Hardening America’s Public Utilities Against the Threat of Cyberterrorism,” by Jason B. Lee and Steven E. Roberts.
Jason Lee and Steven Roberts, risk mitigation and security experts, postulate that the simple Trojan horse hack can result in cyberterrorism. Is this credible or simple fear mongering? CIO went to the Massachusetts Water Resource Authority (MWRA)’s operations center in Chelsea, Mass., to find out.
What we found was a system that would be difficult to hack into and any number of best practices for securing systems against cyberterrorism. Here’s the skinny: First, a hacker would need access to the MWRA computers. We are in a locked room accessible by key card and manned 24/7. To get in, you must check in at the facility’s front desk (and then check out later), offer your credentials, wear a temporary badge and be with an escort at all times. After you leave, your host will send a memo to senior management detailing the visit for the record.
The computers we’re looking at distribute water throughout much of eastern Massachusetts. An hour or so west, near the Wachusett Reservoir, is an identical crescent of computers that monitor water quality and control the chemicals that enter the water, according to Marcus Kempe, director of operations support at the MWRA.
Together, these two banks form the MWRA’s Scada system. Scada (pronounced “scay-da”) stands for supervisory control and data acquisition; most public utilities rely on a highly customized Scada system. No two are the same, so hacking them requires specific knowledge: in this case, knowledge of the MWRA’s design and access to that customized software.
Scada is not networked, except in two places. One, a dial-up modem, is offline. Only one person has clearance to use it. Turning it on must be done manually by someone with clearance at the facility. And two, there is a link to the MWRA’s general IT infrastructure through a program called Plant Information (PI). PI gives a small set of supervisors with the highest clearance a one-way view of data about the water system. They can look, but they can’t touch. This data can also be piped into a war room around the corner from us in the operations center, which is used for incident response.
If a hacker somehow got into Scada, he would need user names and passwords to gain control of the command and control computers; he would need a way to either make changes undetected (though someone is watching the system around the clock) or hide the fact that he is making changes. And he would need to work fast: Systems lock after a few minutes of inactivity and can’t be reactivated without a password.
Scada connects through a private line (soon, via microwave) to Programmable Logic Controllers, or PLCs, at the water facilities, which churn 250 million gallons of water per day from the reservoir to faucets. PLCs are dumb, rugged chips that basically never fail. They follow the lowest level, most basic instructions (such as turn on and turn off), and report them to Scada (“I just turned on.”). If something is wrong, the PLC says, “Help me” in the form of an alarm. The alarm sounds at the water site and at the Scada operations centers. The alarm also flashes on the computers, and it can’t be shut off until a formal acknowledgement of the alarm is made and physically logged by a human being. Every month, about 1,700 samples of the water are tested for unusual characteristics. “Rolling crews” periodically go to MWRA pump stations and storage sites, and check the integrity of the facilities, and the electronics at the facilities such as the chlorine monitoring devices. Most of the water facilities are under surveillance and, currently, under the watch of the National Guard.
But suppose a hacker got by all this and, through the use of a computer either at the operations facility or remotely, planted a Trojan horse that at some point ordered the system to dump too many chemicals in the water.
That water, chlorinated, leaves the reservoir and enters the pipes, where it will receive pH adjustment and fluoridation.
Scada receives data about the water 10 minutes after it enters the pipes. It’s checking for wild fluctuations in chlorine levels, which would indicate a reaction with some bacteria or foreign agent. There are several more chlorine checkpoints, at two hours downstream, three hours, and so forth. If the Wachusett Reservoir were in one endzone of a football field and your faucet were in the other, your water would be checked at its own one yard line, its 20, its 40, your 40, and then it would be stored at another facility at your 20 yard line and tested there too. It also receives a goal-line chlorine treatment as an extra safety measure. It would take your water anywhere from 12 hours to three days to go endzone to endzone.
If, after all of this, toxic water made it to faucets because of a computer hack, and people got ill, the MWRA would convene in its war room, and proceed with a detailed emergency incident response plan that includes shutting down pumping facilities, and sending out emergency broadcasts, among other steps. | <urn:uuid:05ba95a9-4b4d-44aa-b1c8-41501ff84af1> | CC-MAIN-2022-40 | https://www.cio.com/article/270790/security0-debunking-the-threat-to-water-utilities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00178.warc.gz | en | 0.955558 | 1,203 | 3.015625 | 3 |
Published Sunday, March 20, 2022, by Adnan Kayyali
Semiconductors power the modern world. They are tiny chips that are the key to the components in every electronic device we use today. Fridges, washing machines, laptops, cellphones, cars, and even military equipment and countrywide power grids are powered by chips no bigger than a toddler’s fingernail.
Even the machines that produce semiconductors need semiconductors to function.
Needless to say, a global shortage of semiconductors is a big deal, and it goes beyond the lack of PS5 on the market.
If technological transformation and digitization are the bloodstreams that nourish the world, then semiconductors are the heart that pumps them. Anything that needs electricity to pass through in order to work needs a semiconductor inside it.
Some companies like Intel, Hitachi, and IBM design and manufacture their own semiconductor chips. These companies are referred to as Integrated Device Manufacturers (IDMs).
However, the vast majority of companies have adopted a method of outsourcing the manufacturing of their semiconductors.
Micro, Sony, Apple, QUALCOMM, AMD, and Nvidia all design their own chips but outsource their manufacturing to a single company, Taiwan Semiconductor Manufacturing Company (TSMC). Given today’s political climate, this puts the shortage of semiconductors at the center of global affairs.
Former U.S. President Donald Trump’s trade war with China has further strained supply lines that were already stretched thin. Political tensions between China and Taiwan have also put tech giants and global governments on edge, now seeking to diversify their supply or at least develop alternates in case of supply chain disruptions.
This means that both China and the U.S will have to bring chip manufacturing closer to home. Still, until then, the semiconductor industry may be vulnerable to geopolitical power plays and supply chain disturbances.
Having most of the world’s semiconductors come from Taiwan, a tiny island off the coast of China, puts that island at the center of a global struggle, whether it likes it or not.
Blame the Covid-19 pandemic for the crisis escalation. When people were forced to stay home, they were also forced to kit out their homes with all kinds of new and improved tech. People went to work and kids went to school on their laptops and phones. They entertained themselves with movies and videogames. They completely upgraded their homes with all types of new tech to accommodate their unexplored home lifestyle, and this only accentuated the semiconductor problem.
Businesses were moving online and needed more cloud infrastructure. While this was going on, the overall demand for chips was only increasing. The automobile industry is at the top of the list as modern cars need a greater number of chips to run their complex computer systems.
Some analysts expect the shortage of semiconductors to persist well into the 2020s, or at least until 2023. What is certain, however, is that demand will only increase. IoT products will continue to see widescale adoption; 5G cell towers and 5G enabled cell phones will be hitting the market.
Simulate a Boolean Data Type in a Database Table
October 1, 2008 Hey, Mike
I want to create a column in an SQL table and only allow two values. They could be 1 or 0, Y or N, or whatever. I don’t want the database to allow any other values into the column. I want the default to be the “false” value. How would you define such a column in SQL? I understand that Oracle supports a BOOLEAN data type, but I don’t think such exists on the AS/400.
You are correct, SQL Server has the BIT data type and Oracle has the BOOLEAN data type but there is no equivalent data type in DB2 for i. However, you can use a CHECK constraint to simulate a BOOLEAN type. Here’s an example:
Create Table TestBoolean (Bool Numeric(1,0) Not Null Default 0 Constraint BoolCheck Check (Bool In (0,1)))
In this case, the CHECK constraint mandates the value of column Bool to contain a 0 or 1 (identical to the behavior of SQL Server’s BIT data type). Of course DEFAULT will give your column a default value of 0 if one isn’t explicitly given. Keep in mind that a constraint name must be unique within a schema. Therefore you won’t be able to specify BoolCheck as a constraint name on more than one table within the same schema (library).
You can also emulate a BOOLEAN value with a character field that contains the values ‘Y’ and ‘N’, such as:
Create Table TestBoolean (Bool Char(1) CCSID 37 Not Null Default 'N' Constraint BoolCheck Check (Bool In ('Y', 'N')))
Now, when you insert or update a value into this column that is not allowed by the check constraint you get the following error:
SQL0545 INSERT or UPDATE not allowed by CHECK constraint.
If you have multiple fields with the same CHECK requirements, you can implement the validation for all three columns within one constraint:
Create Table TestBoolean
  (Bool1 Numeric(1,0) Not Null Default 0,
   Bool2 Numeric(1,0) Not Null Default 0,
   Bool3 Numeric(1,0) Not Null Default 0,
   Constraint CheckFlags Check(Bool1 In (1,0) And Bool2 In (1,0) And Bool3 In (1,0)))
Of course you can vary this example to allow NULLs if required.
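If you want to experiment with the same CHECK-constraint emulation without an IBM i system handy, here is a quick illustration using Python’s built-in sqlite3 module. This is SQLite rather than DB2 for i, so it is not the exact syntax above, but the idea of the constraint rejecting anything other than 0 or 1 is the same:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "Create Table TestBoolean "
    "(Bool Integer Not Null Default 0 "
    "Constraint BoolCheck Check (Bool In (0,1)))"
)

conn.execute("Insert Into TestBoolean (Bool) Values (1)")  # accepted
conn.execute("Insert Into TestBoolean Default Values")     # accepted, defaults to 0

try:
    conn.execute("Insert Into TestBoolean (Bool) Values (2)")  # not allowed
except sqlite3.IntegrityError as error:
    print("Rejected by CHECK constraint:", error)

print(conn.execute("Select Bool From TestBoolean").fetchall())  # [(1,), (0,)]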
Finally, for compatibility with other database servers, one other possible variation is to create a distinct type called BIT. (You can’t use the name BOOLEAN because it is an SQL reserved keyword–maybe IBM intends to give us a BOOLEAN data type in the future.) However, when implementing the distinct type you will still need to specify the CHECK constraint at the table level to enforce the allowed values. Also, distinct types require more processing overhead. See the documentation for the CREATE TYPE command for more info.
Michael Sansoterra is a programmer/analyst for i3 Business Solutions, an IT services firm based in Grand Rapids, Michigan. Send your questions or comments for Michael to Ted Holt via the IT Jungle Contact page. | <urn:uuid:57e950dc-f69b-4aaa-a9b8-7ea0df44dcb5> | CC-MAIN-2022-40 | https://www.itjungle.com/2008/10/01/fhg100108-story02/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00178.warc.gz | en | 0.759345 | 730 | 2.875 | 3 |
Along several tried and tested attack surfaces that have been gaining traction, cyber-criminals are going strong with newer forms of attack that may seem alarming. A report by cybersecurity firm Webroot points out that nearly 40 percent of malicious URLs were found on good domains. According to the study, legitimate websites have been frequently compromised to host malicious content.
The report also pointed out that home user devices are more than twice as likely to get infected as business devices, with 68 percent of infections occurring on consumer devices versus 32 percent on business endpoints. According to it, phishing attacks increased 36 percent, with the number of phishing sites growing 220 percent over the course of 2018.
The report suggested, while ransomware was less of a problem in 2018, it became more targeted. “We expect major commodity ransomware to decline further in 2019; however, new ransomware families will emerge as malware authors turn to more targeted attacks, and companies will still fall victim to ransomware. Many ransomware attacks in 2018 used the Remote Desktop Protocol (RDP) as an attack vector, leveraging tools such as Shodan to scan for systems with inadequate RDP settings. These unsecured RDP connections may be used to gain access to a given system and browse all its data as well as shared drives, providing criminals enough intel to decide whether to deploy ransomware or some other type of malware,” read a release.
Other key findings included where malware tried to install itself. According to the report, nearly a third of malware (29 percent) tried to install itself in %appdata% folders; among the others were %temp% at 24.5 percent and %cache% at 17.5 percent. These locations were the most common hiding paths used by malware.
The report also pointed out that devices that use Windows 10 are at least twice as secure as those running Windows 7 and that despite the decrease in cryptocurrency prices, cryptomining and cryptojacking are on the rise.
“We wax poetic about innovation in the cybersecurity field, but you only have to take one look at the stats in this year’s report to know that the true innovators are the cybercriminals. They continue to find new ways to combine attack methods or compromise new and existing vectors for maximum results. My call to businesses today is to be aware, assess your risk, create a layered approach that protects multiple threat vectors and, above all, train your users to be an asset—not a weak link—in your cybersecurity program,” said Hal Lonas, CTO of Webroot.
The Bitcoin blockchain network is composed of nodes, and nodes are basically computers connected to the internet, running the Bitcoin or other blockchain software. The Bitcoin network is a peer-to-peer network: all nodes are homogeneous. Nodes receive transactions and blocks from other nodes and relay these transactions and blocks to other nodes. Full nodes (not all ordinary nodes) keep a full copy of the blockchain. Nodes are important in order to keep the blockchain functioning and can’t be avoided. There are basically three types of nodes: ordinary nodes, full nodes, and master nodes.
Type of Nodes
Node: A computer that operates on the blockchain network and is able to send and receive transactions (for example, a Bitcoin wallet).
Full node: A client that operates on the network and maintains a full copy of the blockchain. It sends and receives transactions as well, and updates the blockchain with block entries and confirmations from miners.
Master nodes: A client that does all of the above and also enables/performs additional functions, and gets paid a portion of the block reward. In other words, master nodes are dedicated servers on the internet that enable instant transactions and perform the trustless anonymization of users’ funds. Master nodes require collateral (1,000 DASH on the Dash network, for example), a secured server, a full-time Internet connection, and periodic updates. In return, they receive 45% of the block reward, for instance on the Dash platform, which at current rates and number of nodes amounts to about 1.8 Dash every 6-7 days. When a block is mined, 45% of the reward goes to the miner, 45% goes to the master nodes, and 10% is reserved for the budget system.
Master nodes enable decentralized governance and budgeting. In summary, aside from a full copy of the blockchain, a node also keeps additional data structures, such as the unspent transaction outputs cache or the unconfirmed transactions’ memory pool, so that it can quickly validate new received transactions and mined blocks. If the received transaction or block is valid, the Master node updates its data structures and relays it to the connected nodes. It is important to note that a master node does not need to trust other nodes because it validates independently all the information it receives from them.
When a miner finds a new block, it broadcasts it to the network. All receiving master nodes first check the validity of the block, i.e. that it solves the partial hash inversion problem with the required difficulty. They then update their internal data structures to reflect the new information contained in the block:
- Update the unspent transaction outputs cache (UTXO)
- Update the unconfirmed transactions’ memory pool. This involves going through the list of transactions and dropping those that are in conflict with (spend the same outputs) as a transaction in the new mined block
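As a very rough sketch of that bookkeeping (simplified toy logic with a made-up transaction structure, not actual Bitcoin or Dash node code), processing a newly received block might look like this:

def process_new_block(block, utxo_set, mempool):
    # block: list of transactions; each has a 'txid', 'inputs' (outputs it spends)
    # and 'outputs'. utxo_set: set of (txid, index). mempool: dict of txid -> tx.
    for tx in block:
        for spent in tx["inputs"]:
            utxo_set.discard(spent)                 # spent outputs leave the cache
        for index, _ in enumerate(tx["outputs"]):
            utxo_set.add((tx["txid"], index))       # new outputs enter the cache
        mempool.pop(tx["txid"], None)               # the transaction is now confirmed

    # Drop unconfirmed transactions that conflict with the new block,
    # i.e. that try to spend an output that no longer exists.
    for txid in list(mempool):
        if any(spent not in utxo_set for spent in mempool[txid]["inputs"]):
            del mempool[txid]

# Minimal usage with made-up data:
utxos = {("coinbase0", 0)}
pool = {"txB": {"inputs": [("coinbase0", 0)], "outputs": ["addr2"]}}
new_block = [{"txid": "txA", "inputs": [("coinbase0", 0)], "outputs": ["addr1"]}]
process_new_block(new_block, utxos, pool)
print(utxos)   # {('txA', 0)}
print(pool)    # {} -- txB was a conflicting (double-spend) transaction, so it was dropped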
Master nodes maintain a number of connections to other nodes in the network. Some master nodes will want to keep as many connections open to other full nodes/nodes as the available resources (CPU, network bandwidth) allow, usually in the upper hundreds. For instance, a master node might want to keep connections to many other nodes, as geographically distributed as possible, to quickly detect and act upon double-spending attempts. Similarly, a mining node might want to have as many connections open as possible, so that it receives prompt notice of mined blocks. A faster reception of new mined blocks minimizes the time wasted trying to mine a block that will become an orphan. For other nodes, having up-to-the-second information is not so important, and so they usually connect to only a handful of nodes.
Master nodes can provide a possible solution to verify user identity in DApps; that is, master nodes can democratically select a node to verify user identity. The person or business behind this node can manually verify user documents. A part of this reward can also go to this node. If the node doesn’t provide good service, then the master node can vote for a different node. This can be a fine solution to the decentralized identity issue. | <urn:uuid:8eece978-8eae-43c6-b2c5-c6742b97188a> | CC-MAIN-2022-40 | https://www.1kosmos.com/blockchain/what-are-master-nodes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00178.warc.gz | en | 0.932325 | 829 | 3.21875 | 3 |
Simple Network Management Protocol (SNMP) runs on UDP ports 161 and 162 and is a widely deployed protocol used to monitor and manage network devices: to obtain information on and even configure various network devices remotely. It runs on any network device from hubs to routers and network printers to servers. SNMP clients also run on many workstations and personal computers. SNMP is also used in most network management packages for information gathering.
Though the type and amount of data that can be accessed via SNMP depend upon the device on which it runs, SNMP generally provides details of the hardware and OS type, network interfaces, network protocol statistics, and vendor-specific details like model number and device functionality. Many devices can be remotely configured using SNMP.
All of the above features make it easier for a network admin to manage and monitor the network, but the same features pose a huge security risk, since some or all of the above tasks could also be performed by an attacker if SNMP is compromised.
Attackers can probe devices for SNMP information (for example, with snmpwalk), overwhelm a victim’s device with massive traffic after spoofing its source IP, or completely reconfigure devices and cause service interruptions.
The open SNMP vulnerability exists mainly because SNMP is enabled by default with well-known community strings, “private” for write/management access and “public” for read access, on devices that don’t even require it, and administrators are often not even aware of its existence.
To configure or disable SNMP, it is recommended to consult the product documentation, since SNMP runs on a variety of systems and the configuration steps differ for each.
If left unprotected, such network devices or computers can easily be used to abuse other networks on the Internet, and your network resources will be involved in organizing such malicious activities. An unprotected SNMP service can also leak sensitive technical information from the vulnerable device.
- General security practice is to disable any service or application that is not required; thus, a simple step is to disable SNMP on all devices that don’t require it.
- Upgrade to SNMPv3, which supports authentication and encryption.
- Apply ingress filtering: configure the firewall to block UDP ports 161 and 162 (and any other custom-configured SNMP port) from the outside world. If you have public servers, allow inbound SNMP traffic from the internet to only those servers. If none of this is possible, at least monitor activity on all ports utilizing SNMP.
- Apply egress filtering to block servers from initiating outbound SNMP traffic to the internet, since there is hardly ever a need for it.
- Reduce the risk of internal attack by applying filters that allow SNMP requests from only authorized devices.
- Change the default community string: the community string acts as a password for SNMP communication, so it is recommended to set a complex community string.
- Create a separate management network for SNMP traffic if it is not possible to block or disable it; this makes the hacking process more difficult.
- Some devices will allow you to restrict SNMP access. If available, it is recommended that you configure which hosts can send SNMP write commands, and possibly which hosts can get information.
- Limit SNMP access to only those devices that require SNMP for monitoring.
- Getif and SNMPUTIL are examples of SNMP enumeration tools.
Cigarette butts are mainly considered pollutants. Researchers, however, have discovered a major use for used cigarette butts.
Approximately 800,000 metric tons of cigarette butts are generated every year. Chemists at the University of Nottingham have found that carbons derived from used cigarette butts possess high hydrogen storage capacity and exceptionally high surface area.
Robert Mokaya, Professor of Materials Chemistry, who led the research, said: “We have utilized cigarette butt waste as starting material to prepare energy materials that offer unprecedented hydrogen storage properties. This may not only address an intractable environmental pollution problem, cigarette butts, but also offers new insights into converting a major waste product into very attractive hydrogen storage materials.”
Cigarette butts produce a carbon product known as Hydrochar
Used cigarette butts contain cellulose acetate, a non-biodegradable compound. The cellulose acetate, however, lends itself to valorization into porous carbons. Valorization is a way to move away from coal-based carbonaceous precursors toward biomass-derived or waste-based starting materials for porous carbon synthesis.
The cigarette butts produce a carbon product known as hydrochar. The carbon product is obtained using heat and water in a process known as hydrothermal carbonization. Activation of the hydrochar produces oxygen-rich porous carbons with high surface area.
Professor Mokaya said: “We show that activated carbons derived from cigarette butts or filters, via sequential benign hydrothermal carbonization and activation, are super porous with ultra-high surface area and exhibit high hydrogen storage capacity. This raises the question of whether valorization can solve the intractable cigarette butt problem. But also offers porous carbons that attain new levels of hydrogen storage for porous materials in general.”
Further research is ongoing into sustainable carbons with properties suited to hydrogen storage and other energy materials.
With so many avenues open for cybercriminals, it is more important than ever to keep your gadgets and personal details safe. Even the smallest things that we do on our devices can potentially be exploited.
A recent example was a critical bug discovered in Google Chrome, one of the most popular browsers worldwide. A hacking group used it to attack security professionals. Tap or click here to find out how to avoid a hackers’ favorite security flaw.
There are tons of cybersecurity threats to contend with, so how do you know your device is locked down tight? There is an app for that. Let’s look at how this app can help keep your phone secure and show you what to watch for.
SAFE Me, safe you
Android devices are prone to be more vulnerable to intrusions, but that doesn’t mean Apple products are immune. That is why the free SAFE Me app is available on both platforms. It’s not antivirus software but rather a nifty app that scans your phone settings and alerts you to potential problems.
Billing itself as a “comprehensive Cyber Risk Quantification platform,” developer Safe Securities Inc. made the app available for free as a tool for “learning, assessing and improving cybersecurity awareness.”
Quite plainly, the app will better equip you to recognize potential security issues. Follow its instructions, and you can fix problems by adjusting settings to make your gadget safer.
How the app works
SAFE Me’s functionality consists of two parts: awareness and settings. The app will teach you the fundamentals of cybersecurity and check certain settings on your device to make sure it’s secure.
The awareness portion aims to teach you the concepts of cybersecurity through a series of microlearning courses. These include short informational videos and question-based assessments. After completion, the app will score you on your cybersecurity awareness level.
The app’s microlearning video courses raise your awareness of cybersecurity threats. Options include:
- SAFE mobile device usage
- SAFE laptop usage
- SAFE password usage
- SAFE social media usage
- Importance of software updates
- Email scams
- Cyrptojacking schemes
- Security tips for safely using messenger apps
- Benefits of two-factor authentication
Those are just a few of the video tutorials offered by SAFE Me. There are several more that you can watch to increase your level of cybersecurity awareness.
Your cybersecurity score
To determine your overall cybersecurity score, the app tests your device’s security settings, passwords and knowledge. It factors in things like account passwords, how long you leave social media apps open and if your system is updated. Scores range from 0-5. The higher your score, the fewer risks you face.
3 factors that your score is based on:
- Your device’s security settings – These include automatic screen locking, device encryption and more.
- Exposure on the Dark Web – If your email address or other account credentials have appeared on the Dark Web, you’re more likely to face future security threats.
- Cybersecurity awareness – This is based on the app’s microlearning courses and question-based assessments that you take.
Check your settings
Turn to SAFE Me’s security control tools to check your device’s settings. It evaluates security settings and makes recommendations based on its findings. Safe Security doesn’t go into much detail as to what it protects you from. It just states recommendations are “to ensure protection from various types of cyberattacks.”
When an issue is found, you can tap on the alert to bring up an explanation of why it’s important to make changes. Once the setting is updated, mark the change as complete.
A couple of settings you may be asked to adjust include your device’s automatic screen locking time and lock screen notifications. You might be wondering how in the world automatic screen locking time affects security?
It’s pretty obvious if you think about it. The longer your screen is unlocked, the more time someone has to snoop. It’s recommended that you set your device’s automatic screen locking time to one minute.
The app will even check to see if your email address has been exposed on the Dark Web. If it has, you must change your password ASAP. Remember, never use the same password for multiple accounts. Tap or click here for five new rules to create the best passwords. | <urn:uuid:ddeec629-22af-4e1c-a5af-cc409893c241> | CC-MAIN-2022-40 | https://www.komando.com/safety-security-reviews/safe-me-security-checkup-app/780859/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00378.warc.gz | en | 0.926978 | 923 | 2.515625 | 3 |
Check fraud is a booming business. Estimated losses to banks are in the billions of dollars. In less technological times, the main method of check fraud was simple forgery—essentially, the falsification of a signature on somebody else’s bank account.
Today, check fraud is more complex. Check-fraud artists are as technologically adept as any corporate computer scientist. They are comfortable with personal computers, scanners, color printers, publishing software, and chemistry, as well as with the clearing processes of the banking industry. They can take a blank check, make multiple copies that can’t be distinguished from the original, and use them to obtain cash. Or they can steal or buy an employee’s payroll check and chemically alter it, changing the payee name, amount, and so on. Luckily, there are means available to foil most check fraud attempts: magnetic ink character recognition (MICR) laser check printing and the various elements of an MICR laser-check-processing solution that are associated with it.
There are three key elements involved in check-fraud defense: check-processing software, MICR laser printers, and blank safety check stock. The software stores check forms electronically. When a check run is required, it accepts data from financial-management software systems, formats it by merging it with the electronic form, and spools it out for printing on the MICR laser printer.
Instead of using preprinted checks, which are highly vulnerable to theft and alteration, the complete, signed checks are generated on blank sheets of safety paper that usually incorporates multiple features that resist duplication or chemical alteration. The checks are printed with MICR toner, which contains iron oxide that is magnetized and read by a specialized check reader. The printed checks include company information, logos, graphics, payee information, MICR lines (bank transfer codes, customer account number, etc.), and a signature. All of this information is held securely in the software or is housed in a secure PCMCIA card that can be inserted into a card reader option on the MICR printer.
Development of MICR
By today’s standards, business methods were primitive and mainly manual prior to World War II. The state-of-the-art business machine was the typewriter. Checks were mainly business instruments, not used universally by consumers as they are today.
After the war, business began to boom, with corresponding increases in payment volumes, and consumers also began to recognize the convenience of personal checks. By 1952, the estimated annual volume of checks stood at eight billion. Banks began to realize that their manual check-sorting procedures were no longer adequate. An appreciation for technology was in the air, and it was to technology that the industry turned.
Bank of America technologist Alfred Zipf had joined Bank of America in the mid- 1930s as a transit clerk—a check reader/sorter—and, by the early 1950s, he had become the bank’s director of equipment research. In that position, he conceived and installed the first large-scale general-purpose computing system in the banking industry, and he was its director of operations.
He was also leader of a Stanford Research Institute team that was developing MICR technology. MICR technology enables you to encode and read data in characters in which the ink or laser-printer toner has been infused with iron oxide. In the clearing process, the characters are first magnetized and then read by the reader/sorter device. The MICR characters are known as the E13B MICR font, and every check written in the United States and Canada is processed by means of the MICR coding on its face. European countries have their own MICR encoding protocol, which is called CMC7 and follows essentially the same reading/sorting procedure. A magnetic ink character reader was also developed along with MICR: the Electronic Recording Machine Accounting (ERMA) system. The sorter reduced banks’ check sorting time by 80 percent.
MICR check processing continued pretty much without change for about 30 years. Banks provided consumers with MICR-imprinted checkbooks, and electronic data processing became the norm in companies. Businesses had their checks preprinted with company information and MICR lines on continuous forms that could run through their impact printers.
By the early 1980s, the PC had arrived, followed closely by laser-printer technology from such companies as Xerox, Hewlett-Packard, and Apple. PCs opened the technology door to just about everyone, and software soon proliferated, with applications for every conceivable need, even desktop publishing. Scanning devices made it possible to input not only words but also images.
From this combination of technology emerged a whole new industry: large-scale check fraud. Now, with the technology used in business widely available to the bad guys at low cost, check-fraud artists were able to take a single check, alter it or reproduce it multiple times, and score big hits on company treasuries.
Enter MICR laser check printing. Forward-looking thinkers looked at MICR technology, laser-printer technology, and check fraud and found a connection. The idea was this: Don’t use risky preprinted forms. Preprinted forms can be misappropriated and forged.
The MICR Font
Besides the numbers 0 through 9, the E13B MICR font includes some specialized characters, each of which is actually two symbols, serving as brackets as well as signifiers for functional sets of numbers:
• The Auxiliary On-Us symbol indicates that the sorting criterion is determined by the organization that is doing the sorting on the field—usually the originating bank. It is an optional field and is used only on business checks.
• The Transit symbols and the numbers between them define the instructions on how to clear the check. They indicate such factors as the Federal Reserve district from which the
check should be cleared, the federal bank or branch in the area where the check was drawn, and the bank number.
• The On-Us symbols enclose the account number and sometimes the bank’s branch number and the check number. The format can be defined by the individual bank.
• The Amount symbols enclose the field that contains the amount of the check. It is MICR- encoded by the bank of first deposit.
Enhancing the Printer for MICR
What makes a MICR laser printer different from a conventional laser printer? The engineering required to MICR-enhance a network laser printer does not alter or modify the actual printer technology. Rather, it is a five-step process that vendors of MICR solutions perform in close collaboration with the engineers who designed the printer. The process ensures that the printers continue to function well in the more stressful MICR check- processing task and that they provide the critical security required for check and sensitive- document processing:
1. MICR toner is like other toner, but the iron oxide it contains makes it more abrasive. Durability and strength testing ensure that the printer engine can handle the tougher toner.
2. A unique E13B MICR font (United States and Canada) must be developed specifically for each model of printer. Each character must produce American National Standards Institute (ANSI) specification signal strength and positioning. While font characters may look the same on checks, viewed under magnification or by a MICR reader/sorter or tester, character and point sizes appear to be different from printer to printer.
3. MICR laser printers are used both in limited-access, totally secure environments and in open environments where one printer performs both MICR and conventional printing. Multiple security features must be developed for either situation.
Toner sensors are one essential security element of MICR enhancement. MICR-only printers, normally housed in a secure environment, are equipped with a single sensor that prevents operation if a conventional toner cartridge is inserted. Printers that handle both MICR and non-MICR printing include a second, non-MICR sensor and a three-position key-lock switch that enables conventional printing.
Font cartridge security is the other major element, and this is maintained by digitizing not only the font but also the customer’s corporate information, logo, MICR line(s), and signatures and placing the information on a removable PCMCIA card. Inserted in the printer, the information is placed in memory when the check stream enters the printer. Some types of printers have no provision for removable media, so some vendors have developed an external PCMCIA card reader that attaches to the printer’s parallel port. At the conclusion of the print run, the PCMCIA card is removed and stored securely.
4. The printer’s fuser temperature is critical to the MICR toner’s adherence to the check stock. Reengineering a laser printer for MICR includes testing and adjustment until toner- fuser temperature performance satisfies specific adherence test criteria.
5. Once the MICR enhancement has been completed, the MICR solutions provider tests the printer in various ways. In one stress test, for example, 3,500 checks are run through a reader/sorter 20 times, analyzing the quality of the printing before and after each run. This analysis examines formatting, character spacing, and any other problems that might be
identified on checks rejected by the reader/sorter. ANSI specifications require a reject rate of less than 1 percent, and some vendors are even more stringent.
There are several other tests:
• Software compatibility testing to ensure that check processing continues to work with all financial management software
• Printer driver testing
• Paper path and stock testing, including weights, bonds, and paper sizes
• Accessory testing, such as high-capacity feeders, duplex units, envelope feeders, and output devices
• Customer-requested devices and features
• Preliminary testing and recommendations on special requirements proposed by customers
• Beta testing with selected customers, often on a day-to-day basis, to determine the viability of the product under actual MICR laser check production
MICR toner has regular toner as its base, but iron oxide and other ingredients, such as charge agents and resins, are added to make it conform to the standards established by the ANSI and the American Bankers Association. Because each laser printer engine is unique to its model, MICR toner must be formulated specifically to work with each model of printer in order to ensure that the signal strength is not only correct but also retains its readability as it moves through the various clearing procedures. Moreover, it must have strong adhesive characteristics because a common fraud practice involves lifting characters off the check with adhesive tape and replacing them. Ideally, MICR toner is used with printers that have been enhanced for secure check processing, but there are toner kits available to enable MICR check processing on non-MICR laser printers.
By far, the greatest amount of check alteration is based on a bleaching process to remove the printing on checks. Treating paper so that it is difficult to remove or change the toner ink from laser-printed documents is a standard feature. Additional standard features include Brownstain, which causes a brown stain to appear when activated by a chlorine-based eradicator; fluorescent fibers, which appear only under ultraviolet light; and a diagonally positioned artificial watermark, which differentiates the new stock from the predecessor product, which had horizontal watermarks.
Check Protect stock features a chemical stain, which causes a multilingual “VOID” (in English, Spanish, or French) to appear when chlorine bleach-activated. It also carries the diagonal watermark.
Connecting Laser Printers to the AS/400 Environment
IBM AS/400 users converting from conventional MICR check processing (with impact printers and preprinted forms) to MICR laser-check-printing solutions are confronted with a communications technology dilemma: Should they go directly to the printer, or should they access the printer through a local area PC network server?
There is a compatibility problem in connecting the AS/400 computer directly to a laser printer. The problem arises because the EBCDIC data streams produced by AS/400
computers are mostly intended for output to online printers. These printers communicate with the computers over twinax cable, as do their computer terminals. Laser printers do not do EBCDIC or twinax. They do ASCII, communicating via Ethernet, Token-Ring, or other PC network protocols using parallel cables.
Historically, bridging the AS/400 environment over to the PC environment and making MICR laser check printing possible has required laser printer controllers. These are small boxes with a twinax or Ethernet interface on one end and a parallel interface to a parallel printer cable on the other. Between the two interfaces, the IBM code is changed into the ASCII code that laser printers understand.
While protocol converters are still in widespread use, newer technologies have begun to replace them. For example, newer releases of OS/400 have built-in networking capabilities, which enable the machines to spool data streams directly to Ethernet-connected devices. It is estimated that some 60 percent of AS/400 shops now connect to laser printers with IP addresses on Ethernet networks and that perhaps another 20 percent are Ethernet- enabled.
Another alternative to twinax is IBM’s Client Access, which allows AS/400 sessions to be performed on a PC. In check-processing operations, users can activate the check run and print the checks directly to the printer from the PC.
What is the best solution? Often, it is the one that is easiest to implement and fits best into the overall IS topography.
MICR in Three Easy Lessons
There are three important lessons in any discussion of MICR laser check processing. The first, which impacts the financial side of the house, is that MICR laser check processing is a systems approach to the pervasive problem of corporate check fraud, combining a number of elements to produce a highly effective solution. The second, which directly impacts the IS department, is that it is a highly efficient, secure solution that can, if desired, transfer full control over the process to the payment-originating departments. The third lesson is that it is relatively easy to implement. Check forms and accounts are implemented in software using pilot or existing forms as a basis, and a tape is provided that can be read directly into the AS/400. From there, it is a virtually automatic process. | <urn:uuid:e72304ff-b5bf-4ca4-baca-a07691b09836> | CC-MAIN-2022-40 | https://www.mcpressonline.com/analytics-cognitive/document-management/how-micr-laser-printers-protect-company-funds | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00378.warc.gz | en | 0.936179 | 2,985 | 2.921875 | 3 |
Researchers from Tencent, along with other Chinese scientists, are using deep learning to predict critical COVID-19 cases.
Scientists around the world are doing incredible work to increase our understanding of COVID-19. Thanks to their findings, existing medications have been discovered to increase the likelihood of surviving the virus.
Unfortunately, there are still fatalities. People with weakened immune systems or underlying conditions are most at risk, but it’s a dangerous myth that the young and otherwise healthy can’t die from this virus.
According to a paper published in science journal Nature, around 6.5 percent of COVID-19 cases have a “worrying trend of sudden progression to critical illness”. Of those cases, there’s a mortality rate of 49 percent.
In the aforementioned paper, the researchers wrote: “Since early intervention is associated with improved prognosis, the ability to identify patients that are most at risk of developing severe disease upon admission will ensure that these patients receive appropriate care as soon as possible.”
While most countries appear to be reaching the end of the first wave of COVID-19, the possibility of a second threatens. Many experts forecast another wave will hit during the winter months; when hospitals already struggle from seasonal viruses.
One of the biggest challenges with COVID-19 is triaging patients to decide who are most at risk and require more resources allocated to their care. During the peak of the outbreak in Italy, doctors reported reaching a point of having to make heartbreaking decisions over whether it was a waste of limited resources even trying to save someone.
A team led by China’s senior medical advisor on COVID-19, Zhong Nanshan, was established in February. The team consisted of researchers from Tencent AI Lab in addition to Chinese public health scientists.
Nanshan’s team set out to build a deep learning-based system which can predict whether a patient is likely to become a critical case. Such information would be invaluable to ensuring the patient gets early intervention to improve their chances of surviving the virus in addition to supporting medical staff with their triaging decisions.
The deep learning model was trained on data from 1590 patients from 575 medical centers across China, with further validation from 1393 patients.
Tencent has made the COVID-19 tool for predicting critical COVID-19 cases available online here (Please note the small print which currently says “this tool is for research purpose and not approved for clinical use.”)
Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam. | <urn:uuid:5d8c6ce7-434d-4a50-a976-2a3f14dc7980> | CC-MAIN-2022-40 | https://www.artificialintelligence-news.com/2020/07/23/deep-learning-predict-critical-covid-19-cases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00378.warc.gz | en | 0.947516 | 569 | 3.09375 | 3 |
A coalition of computer firms led by Google and Intel have undertaken yet another initiative to reduce PC power consumption and cut down on greenhouse gases.
The Climate Savers Computing Initiative brings together quite an assortment of participants: Google (Quote), Intel (Quote), Dell (Quote), EDS, the Environmental Protection Agency, HP (Quote), IBM (Quote), Lenovo, Pacific Gas & Electric, Microsoft (Quote), the World Wildlife Fund, and more than 20 other companies.
Google has put its vast sums of money where its mouth is. It has installed a massive solar panel installation at its Mountain View headquarters and uses fuel-efficient vehicles to shuttle employees around the campus. Now it wants to lead the charge for other aspects of computing.
The initiative calls for improving the power efficiency of both desktop and server computers. A typical desktop PC wastes over half the power delivered to it, according to a blog posting by Bill Weihl, Google’s Green Energy Czar.
With more efficient power supplies and DC-to-DC converters, and power-management features turned on, that same desktop PC would save as much as 80 percent of the energy it currently consumes, he claimed.
The initiative’s energy efficiency benchmarks will initially follow the EPA’s Energy Star guidelines but eventually exceed them. The 2007 Energy Star specifications require that PC power supplies meet at least 80 percent minimum efficiency. The Climate Savers initiative would require a minimum of 90 percent power efficiency by 2010. | <urn:uuid:210fd48a-b4c1-4a47-aa76-1d27ab2aecb2> | CC-MAIN-2022-40 | https://www.datamation.com/applications/google-intel-lead-latest-green-charge/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00378.warc.gz | en | 0.913366 | 299 | 2.515625 | 3 |
Despite Bill Gates predicting the demise of passwords back in 2004, they are still very much in use. Passwords, like email, seem future proof; but they are also the source of many cybersecurity problems. Key drivers of these issues are human behavior and the desire for convenience, which results in password reuse across multiple accounts.
The 2018 Global Password Security Report shows a staggering 50 percent of users use the same passwords for their personal and work accounts. A 2019 online security survey by Google identified that 65 percent of people use the same password for multiple or all accounts. These statistics validate the magnitude of the password reuse problem and organizations need to take action to mitigate the accompanying risk.
In the first six months of 2019, data breaches exposed 4.1 billion records and, according to the 2018 Verizon Data Breach Incident Report, compromised passwords are responsible for 81% of hacking-related breaches. The latest data from Akamai states that businesses are losing $4m on average each year due to credential stuffing attacks, which are executed by using leaked and exposed passwords and credentials. Organizations can’t afford to ignore this growing problem and need to take steps to mitigate the risks from poor password hygiene.
Humans are at the center of the password reuse problem
Password reuse is an understandable human behavior, but organizations need to make good password hygiene a priority to ensure that passwords are not a weak link in their security posture. Every user, system, application, service, router, switch, and IP camera should have a unique, strong password.
There are three key steps that organizations should take to strengthen their defenses:
1. Prevent the use of weak, similar or old passwords
Make sure users select strong passwords that are not vulnerable to any dictionary attack. It’s critical that new passwords are significantly different from the last one and that you prohibit too many consecutive identical characters. You should also prevent the reuse of old passwords. Fuzzy-matching is a crucial tool for detecting the use of “bad” password patterns, as it checks for multiple variants of the password (upper-lower-case variants, reversed passwords, etc.)
2. End mandatory password resets: They don’t improve security
Organizations have historically addressed the threat from compromised passwords by enforcing password resets. However, this policy has proven to be ineffective as it does nothing to ensure that the new password is strong and has not already been exposed. It can also drive up operational costs and have a negative impact on employee and user productivity. Microsoft and NIST guidelines advise against this approach.
3. Check credentials continuously
NIST advises companies to verify that passwords are not compromised before they are activated and check their status on an ongoing basis. As the number of compromised credentials expands continuously, checking passwords against a dynamic database rather than a static list is critical. If a compromise is detected, it’s vital to perform a password reset or prompt users to create a new password the next time they login.
Passwords are here to stay and organizations need to rethink their password-hardening strategy as we move into the next decade. They need to stop looking at it as a compliance task and start looking at it as a layer of protection. By adhering to the recommendations outlined above, organizations can reduce the risks from poor password hygiene, including password reuse. | <urn:uuid:56389d4f-b5dd-4f8e-b05d-1970115af96c> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2019/11/12/password-reuse-problem/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00378.warc.gz | en | 0.941931 | 676 | 2.78125 | 3 |
Just days before Christmas, a rare event occurred: the report of a successful intrusion into America’s infrastructure by overseas hackers.
Although the event — penetration of the control system of a dam 20 miles from New York City — was more than two years old, it just made it into the public light last month.
Cloaking such incidents in secrecy is standard operating procedure for industries that use control systems — systems used to control the power grid, factories, pipelines, bridges and dams.
“We have seen cyberincidents that are not disclosed because companies are worried about the damage they could do to their brand name,” said Barak Perelman, CEO ofIndegy.
Air Gap Myth
Many companies providing infrastructure services believe they can keep their control systems safe from attack by “air gapping” them, or keeping them segregated from the public Internet, he told TechNewsWorld.
“I can tell you that 50 percent of the facilities we visit say they’re air-gapped, but zero percent are actually air-gapped,” Perelman said.
“There’s always some connection to the Internet,” he said. “There’s always that technician who doesn’t want to drive to a facility at all hours of the night for emergencies and plugs in a modem so he can connect from home.”
The air-gap myth creates a false sense of security, and that leads to inadequate protection of control systems.
“If a hacker gets into one of those industrial networks, he can do whatever he wants to do in that network,” Perelman said.
In the New York case, the Iranians hackers responsible for the incident did not damage the control systems. “The fact that they didn’t was a matter of choice and not capability,” he observed.
That’s not a reason to breathe easily, however. Many times nation-states mounting an infrastructure attack will leave behind a hidden Christmas present.
w”They leave behind a ‘red button’ capability,” Perelman explained. “If they need that capability in the future for either negotiation or an act of aggression, they can press the button and cause physical damage.”
He added that industrial systems that cling to legacy hardware make a hacker’s job easier.
Outside the world of industrial software, a company like Microsoft will patch its products every month and roll out a new operating system every two years. With Windows 10, it will patch and upgrade its software even faster than that through auto-updating.
“When you go to an industrial network, you usually will see the same industrial controllers that were installed in the ’90s,” Perelman said.
“Those controllers were designed when security wasn’t in anyone’s state of mind,” he added.
Chief Data Officer
Chief data officers have been around for less than a decade, but with the increased role of data in contributing to many a corporation’s bottom line, they’ve been growing in popularity. In fact,Gartner is predicting that by 2019, 90 percent of global enterprises will have a CDO.
In addition to wrangling data, another factor may be spurring the creation of CDOs: security.
Because of the proliferation of data breaches, corporate executives are trying to figure out who within their organizations is best equipped to address that problem. Typically, they turn to CIOs, but CIOs can’t do the job alone. They need help.
“That help isn’t going to come from the CSO or CISO because those officers are focused on information systems. They’re focused on firewalls, antivirus and other things to keep hackers out,” said Todd Feinman, CEO ofIdentity Finder.
“Data is a different layer,” he told TechNewsWorld. “It’s not about preventing hackers from breaking in.”
CIOs are becoming resigned to the fact that their systems will be penetrated. If that’s the case, it raises the question, “How do I minimize the damage when it happens?”
The CDO can answer that question by identifying ways to protect data.
For example, the CDO can monitor who has access to data and who should have access to data. While general access controls typically are handled by people with “security” in their title, CDOs are in a better position to determine granular access to data because they understand the data and who really needs access to it.
The same is true for obsolete data. “If I have a file that no one has used in the last five years, why am I keeping it around?” Feinman asked. “By keeping it around, it becomes a liability. It’s something sitting around waiting to be stolen.”
The CDO is also in a better position to impose a data regimen that can reduce the risk of high-value information being compromised.
For example, during the Sony Pictures Entertainment data breach in 2014, hackers stole thousands of Social Security numbers in hundreds of files.
“A chief data officer might have looked at that and said, ‘Our footprint for the quantity of SSNs that we store in multiple locations is creating a very high likelihood that if we ever get broken into, they’re going to get stolen,'” Feinman said.
“All SSNs should be in one place,” he continued. “Then there’s maybe a 1 percent chance that if we get broken into, someone will find the Social Security numbers.”
iOS 9.2 Security
If you’re an iPhone user, have you upgraded to iOS 9.2 yet? If not — and you’re concerned about security — you should not procrastinate any longer.
The new version of iOS has more than 50 security patches. Although the security flaws the patches address vary in severity, you will have to install all of them at once.
“The iOS platform is unique when comparing to it other major software vendors such as Microsoft in that you cannot pick and choose which security updates to apply,” noted Travis Smith, a senior security research engineer atTripwire.
“You either must apply all or none, meaning that to the end user, there is no single security fix that is more important than the others,” he told TechNewsWorld.
That’s not such a bad thing, when you consider how challenging it has been to exploit iOS in the past and black-hat behavior once a batch of patches is released.
“Given the fact that iOS devices are notoriously difficult to successfully exploit, it’s wise to consider any known vulnerability as important,” Smith said.
“With the announcement of these vulnerabilities, bad guys are able to hone in their efforts to areas of the device which are vulnerable,” he added.
- Dec. 27. Quincy Credit Union suspends ATM and debit card access to its banking system after discovering an ATM skimming scam that affected at least 670 customers.
- Dec. 28. Security researchers discover a database containing information on 191 million voters and accessible to the public for free on the Internet. Making such information public could violate laws in some states.
- Dec. 29. British Columbia reaches a cash settlement for an undisclosed amount in a wrongful dismissal lawsuit two health researchers filed. They were among eight fired following a data breach at a health research agency.
- Dec. 30. Microsoft reveals it has adopted a policy to notify its email customers when it suspects their accounts are under attack from a nation-state.
- Dec. 30. Hillsides, a child welfare agency, notifies nearly 1,000 clients and staff that their personal information is at risk after it was discovered that a former employee on five occasions sent unencrypted files containing the information to email addresses unaffiliated with Hillsides.
- Dec. 31. Keller Rohrback files a class-action lawsuit against VTech Electronics North America over a breach that exposed the data of more than 10 million parents, legal guardians and minor children.
- Dec. 31. BBC websites were unavailable for several hours in what appears to be a distributed denial-of-service attack. A group calling itself New World Hacking later claimed responsibility for the attack.
- Dec. 31. CCH Group reports IRS exempts from taxation identity prevention protection services given employees or others before a data breach occurs.
- Dec. 31. The State Department releases less than 65 percent of the 4,800 messages from Hillary Clinton’s private email server previously ordered released by a federal judge. Of the emails released, 8.6 percent were redacted in whole or in part.
- Jan. 1. Network security company Cyberoam confirms a data breach of its systems. A security researcher reported 100 million records from Cyberoam were being offered for sale on the dark Web for 100 bitcoins (US$43,000).
Upcoming Security Events
- Jan. 14. PrivacyCon. Constitution Center, 400 7th St. SW, Washington, D.C. Sponsored by Federal Trade Commission. Free.
- Jan. 16. B-Sides New York City. John Jay College of Criminal Justice, 524 West 59th St., New York. Free.
- Jan. 18. B-Sides Columbus. Doctors Hospital West, 5100 W Broad St., Columbus, Ohio. Registration: $25.
- Jan. 21. From Malicious to Unintentional — Combating Insider Threats. 1:30 p.m. ET. Webinar sponsored by MeriTalk , DLT and Symantec. Free with registration.
- Jan. 22. B-Sides Lagos. Sheraton Hotels, 30 Mobolaji Bank Anthony Way, Airport Road, Ikeja, Lagos, Nigeria. Free.
- Jan. 26. Cyber Security: The Business View. 11 a.m. ET. Dark Reading webinar. Free with registration.
- Jan. 28. State of the Phish — A 360-Degree View. 1 p.m. ET. Webinar sponsored sponsored by Wombat Security Technologies. Free with registration.
- Feb. 5-6. B-Sides Huntsville. Dynetics, 1004 Explorer Blvd., Huntsville, Alabama. Free.
- Feb. 16. Architecting the Holy Grail of Network Security. 1 p.m. ET. Webinar sponsored by Spikes Security. Free with registration.
- March 18. Gartner Identity and Access Management Summit. London, UK. Registration: before Jan 23, 2,225 euros plus VAT; after Jan. 22, 2,550 euros plus VAT; public sector. $1,950 plus VAT.
- June 13-16. Gartner Security & Risk Management Summit. Gaylord National Resort & Convention Center, 201 Waterfront St., National Harbor, Maryland. Registration: before April 16, $2,950; after April 15, $3,150; public sector, $2,595. | <urn:uuid:207bb9cd-f23e-4093-9c15-414ec33f4b2c> | CC-MAIN-2022-40 | https://www.crmbuyer.com/story/iranian-cyberattack-on-american-dam-viewed-as-rarity-82945.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00378.warc.gz | en | 0.934988 | 2,344 | 2.546875 | 3 |
Written By Pravin Mehta
Updated on July 22, 2022
Min Reading 3 Min
Cyber-attacks have become more frequent in the last few years, and all organizations that store data must have a comprehensive data protection strategy to mitigate the risks arising due to vulnerabilities.
Vulnerability and its manifold triggers have alarmed Network Administrators and System Administrators, alike. And, with digital transformation, vulnerability has also emerged as a serious security concern for Data Administrators. The organizations’ growing focus on digital transformation, in general, has largely pushed the Big Data realm with the massive generation of user data which is difficult to manage and secure. Consequently, this has exposed numerous loopholes for cyber attackers to gain unauthorized access to sensitive information on a network or standalone system.
The term vulnerability defines an underlying weakness associated with a system, which if not patched in time, exposes the system to a potential threat. For example, failing to patch Windows updates on a Web server is a vulnerability.
Data is continuously exposed to cybersecurity threats due to several types of vulnerabilities which manifest in the following stages:
Most businesses have heterogeneous systems with multiplatform automated patching to guard the networks and systems closely. But, sometimes, the administrators are unable to assess the type of vulnerability, which initiates a vast majority of threats due to unpatched networks and systems. A small vulnerability at an entry-level network, when left unattended, may thus turn out to be the most feasible loophole for malicious attacks on an organization.
Therefore, plugging these vulnerabilities -before they get traced by a malicious entity -is one of the best preventive measures to protect data and stop such entry-level network threats from branching out to multiple risks.
1. Business Downtime: The downtime or outage happens when a system becomes unavailable for a certain duration and fails to perform its primary function. To restore a compromised system from scratch, the business has to invest resources, which causes upfront loss.
Downtime also leads to business disruption when critical IT systems are involved, especially the database where there are higher chances of organizational data being compromised. According to Ponemon's cost of data breach study, organizations based in the US can recover some of the highest post-breach response costs.
2. Data loss: Data encryption by ransomware might cause permanent loss of data, thus, compromising strategic advantage and affecting brand reputation and overall business health. In cases of encryption, you need data recovery software like Stellar Data Recovery. Data loss prevention could have been possible if the organizations had applied timely patches.
3. Data privacy and legal implications: Unauthorized 3rd party data access affects the confidentiality, integrity, and availability of organizational data, thereby compromising data privacy. In today’s context, non-compliance with data privacy regulations such as the GDPR might lead to legal complications.
This is possible! By taking adequate preemptive measures at —
1. Network Level Security:
2. System Level Security:
3. Data Security:
While Data Protection is largely based on implementing preventive measures and practices, Data Disposition is concerned with the safe disposal of redundant or undesired data. And there are regulatory policies to govern safe data disposition, which mandate organizations to comply with privacy and security standards. In this regard, data erasure software such as BitRaser is used by Enterprises to perform secure data erasure in line with international standards. BitRaser erases sensitive data in an efficient, cost-effective, secure, and socially responsible manner during the recycling or relocation of data assets.
Not just for erasure, a certified data erasure software like BitRaser helps organizations to plug vulnerabilities and protect their network and systems from many threats arising out of data exposure.
BitRaser is NIST Certified
|US Department of Defense, DoD 5220.22-M (3 passes)|
|US Department of Defense, DoD 5200.22-M (ECE) (7 passes)|
|US Department of Defense, DoD 5200.28-STD (7 passes)|
|Russian Standard – GOST-R-50739-95 (2 passes)|
|B.Schneier’s algorithm (7 passes)|
|German Standard VSITR (7 passes)|
|Peter Gutmann (35 passes)|
|US Army AR 380-19 (3 passes)|
|North Atlantic Treaty Organization-NATO Standard (7 passes)|
|US Air Force AFSSI 5020 (3 passes)|
|Pfitzner algorithm (33 passes)|
|Canadian RCMP TSSIT OPS-II (4 passes)|
|British HMG IS5 (3 passes)|
|Pseudo-random & Zeroes (2 passes)|
|Random Random Zero (6 passes)|
|British HMG IS5 Baseline standard|
|NAVSO P-5239-26 (3 passes)|
|NCSG-TG-025 (3 passes)|
|5 Customized Algorithms & more| | <urn:uuid:1af3ff32-d752-4ca3-9bfb-59d3ca680630> | CC-MAIN-2022-40 | https://www.bitraser.com/article/stages-of-data-vulnerability-risks.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00578.warc.gz | en | 0.90748 | 1,062 | 2.765625 | 3 |
What is a Digital Twin? Background The idea of digital twins was raised up decades ago while it has been rapidly developed these days and more possibilities have been proven in a variety of industries. With the growth and mature of new technologies like IoT, AI, and AR/VR, digital twin technology has become one of the crucial roles in the next wave of digitalization.What is a digital twin? A digital twin is a virtual representation of both elements and the dynamics of how an IoT device operates and lives throughout its lifecycle (the design, the build, and the operations). It is not just a copy or a simulation but featuring with the continuous connections between the physical model and the virtual one. There are real-time sensors to constantly transmit information and data. In addition, reaction would be generated based on the analysis and decision-making to optimize and increase the value of the product/procedure/management.How to build digital twins? To build digital twins, generating the virtual model is the first step. Next, the virtual perception mechanism needs to set up via the collaboration of IoT and AR/VR devices. As the information is shared in real-time, analysis and prediction can be completed with extraordinary efficiency and accuracy. To adapt the new technologies with solid strategy, the overall value of digital twins can be enlarged.How digital twins benefit our lives? Based on the features of digital twins, numerous applications in various industries have been carried forward. Below are some of the representatives to show the capabilities and the potential of digital twins.-Manufacturing Each step of the production monitored can be synchronized with digital twins in real-time. With the adjustment per analysis, digital twins implement precision manufacturing. It can not only help to improve the overall productivity by tracking and controlling the systems digitally with higher efficiency, but also reduce the duration of development or further optimize the procedure with predictive analytics/process automation. From design to finished product, high yield/quality, better maintenance, and lower cost can be achieved with digital twins.-Automobile Creating a virtual model of a vehicle, the behavioral and operational data can be captured to analyze the overall vehicle performance for advanced user experience. It helps to accelerate the evolution of autonomous driving. In the past years, setting up a zone for autopilot testing is time-consuming and require intensive investment. With digital twins, testing the capability of self-driving in the virtual world can lower TCO and make the program more secure. With diverse sensors connected, real-time transmission makes it nearly the same as real world to mimic the vehicle’s auto-driving for better product development with reduced cost and time.-Medicine Regarding small scale of specific material, chemical reaction, or interaction between drugs, digital twins can not only benefit medical research, but also help with clinical treatment. In new drug development, adopting digital twins can improve the design and make verification test more efficient to generate effective medicine in a shorter period. As for optimizing the drug dosage for individual, personalized prediction can be generated by digital twins with intelligent analysis of patients’ characteristics such as age, lifestyle, medical historical records, DNA, etc. 
Moreover, real-time calibration can help to ensure digital twin accuracy.-Smart City Digital twin technology is not limited to small/medium scale, larger scale like smart city can also be accomplished. With the tremendous amount of data collected from various sensors and intelligent devices, digital twins can bring about insights and predictions for enhanced management of the resources to raise overall quality of life. It not only includes outstanding convenience with smart traffic/intelligent parking system, but also the sustainability with the efficient management of energy/ resources through AIoT to better maintain the environment.Conclusion Along with the advancement of the new technologies of IoT, AI, and VR, digital twins grow to the market and has more and more valuable applications. Generating precise virtual representations with dynamic data inputs for real-time simulation, digital twins offer predictive analytics and automation to help various industries reduce TCO, make extraordinary productivity, realize advanced management/maintenance, and build a better life with innovative ones.To experience digital twin technology, you would require a reliable Edge AI Server with high performance and scalability to process the real-time computing with the flexibility to expand to further function per use cases. AEWIN provide the exact hardware you need, please don’t hesitate to contact our friendly sales for more information.– SCB-1932C: Dual Intel 3rd Gen Xeon SP 2U platform with 2x GbE, 4x PCIe slots for NICs, and 2x PCIe slots for dual-width GPU/FPGA. – SCB-1937C: Dual AMD EPYC 7000 series 2U platform with 2x GbE and 4x PCIe slots for NICs, 4x PCIe slots for NICs, and 2x PCIe slots for dual-width GPU/FPGA. – BIS-3101: Desktop Workstation with Intel 8th/9th Core i CPU and PCIe slot for dual width GPU/FPGA. | <urn:uuid:f070e714-098f-4fee-9f93-b348947a3a83> | CC-MAIN-2022-40 | https://www.aewin.com/application/what-is-a-digital-twin/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00578.warc.gz | en | 0.916378 | 1,011 | 3.265625 | 3 |
All Internet standards and many other Internet specifications are documented in a series of documents called Request for Comments or RFCs. While the Internet standards may be the most famous of Internet documents, they are far from the only ones.
And, while all Internet standards are documented in RFCs, NOT ALL RFCs ARE INTERNET STANDARDS
Gordano’s products conform to the following internet standards:
Standard 10 ? RFCs 821, 1869, 1870
Standard 11 ? RFC 822
Standard 13 ? RFCs 1034, 1035
Standard 53 ? RFCs 1939
The different type of documents relating to the internet are:
- RFCs. The Request for Comments series is archival, meaning that no RFC will ever change. They are meant to always be available in their original format, even if their status may change over time as a specification moves from being a proposed standard to a draft standard to an Internet standard to an historical RFC.
- STDs. These are Internet standards, and each STD points to one or more RFCs that contain the specification(s) for that particular standard. The STD number stays the same even if a new RFC replaces (obsoletes) an old RFC defining the standard.
- FYIs. These are “for your information” documents which, according to RFC 1150, “F.Y.I. Introduction to the F.Y.I. Notes”, are intended to provide information about Internet topics, including answers to frequently asked questions and explanations of why things are the way they are on the Internet.
- BCPs. Defined in RFC 1818, the Best Current Practices series describe current practices for the Internet community. They provide a mechanism by which the IETF distributes information about what are considered to be the best ways of doing things; these mechanisms need not become standards either because they may change over time or they refer to administrative or other areas outside of the technology. BCPs also cover meta-issues, such as describing the process by which standards are created (see RFC 2026, for example, on the Internet standards process).
- Others. Over time there have been other document series, including RTRs (RARE Technical Reports), IENs (Internet Engineering Notes), and others. Mostly, nobody cares about these anymore.
Keywords:STD Internet standards 10 11 13 53 | <urn:uuid:0d49fd8e-967e-4d80-bfdb-393302d10bcf> | CC-MAIN-2022-40 | https://www.gordano.com/knowledge-base/what-is-an-internet-standard-std/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00578.warc.gz | en | 0.914561 | 490 | 2.6875 | 3 |
Just when you think MIT had developed everything achievable with modern technology they announce yet another innovation to change the face of medicine and healthcare
Engineers from MIT have developed a tiny robot which can move through narrow pathways. The wormlike guidewire, or Robo-thread, is made from a nickel-titanium alloy and is magnetically steerable.
The engineers tested the threadlike robot in a life-size replica of the human brain. With pinpoint accuracy, they were able to remotely guide it through the circuitous, winding vasculature of the brain model using large magnets.
The robot is tipped for use in endovascular procedures alongside existing tech. The scientists and engineers behind the development, led by Xuanhe Zhao, envisage its potential uses in the future to include clearing blood-clots and administering medicines.
Speaking of a potential use treating strokes, Zhao said: “If acute stroke can be treated within the first 90 minutes or so, patients’ survival rates could increase significantly…If we could design a device to reverse blood vessel blockage within this ‘golden hour,’ we could potentially avoid permanent brain damage. That’s our hope.”
Guidewires like this are currently in use to treat blockages and lesions in blood vessels, but they are operated manually and thus leave doctors open to the risk of radiation from fluoroscopy.
The team at MIT understood that their development could mitigate this risk, by allowing the soft robot to be manipulated remotely, rather than a surgeon manually manoeuvring a guidewire through the blood vessels.
Yoonho Kim, the lead author of the paper detailing their development, said: “Existing platforms could apply magnetic field and do the fluoroscopy procedure at the same time to the patient, and the doctor could be in the other room, or even in a different city, controlling the magnetic field with a joystick.”
MIT’s thread-like robot becomes the latest in a string of innovations which aim to take surgery and medical procedures remote.
Shadow Robot Company have been working on a robotic hand which could potentially allow surgeons to operate from afar. The Tactile Telerobot has already been used to move chess pieces from over 5,000 miles away.
Could we see a future in which operating theatres are staffed entirely by robots and remote surgeons? | <urn:uuid:c4a072f7-c8e4-4d8d-a381-e42406c1c961> | CC-MAIN-2022-40 | https://tbtech.co/news/mit-developed-a-robot-that-swims-through-your-veins/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00578.warc.gz | en | 0.95377 | 487 | 3.1875 | 3 |
What it’s mean ?
DMARC – is an improved standard to protect your brand name – stands for “Domain-based Message Authentication, Reporting & Conformance”, is an email authentication, policy, and reporting protocol. It builds on the widely deployed SPF and DKIM protocols.
If you want to protect your domain with DMARC or use DMARC to filter spam and you use Office 365 Exchange online protection, note that Microsoft decided to alter normal DMARC policy. Imagine a domain protects itself and a forged message was identified as DMARC=fail and policy is set to reject with 100%. DMARC policy example: v=DMARC1;p=reject;pct=100
In this case Office 365 will ignore reject and will deliver email marked as spam. A header will contain “dmarc=fail action=oreject” (oreject being overwritten reject.)
Here is how Microsoft justifies this design decision
“If the DMARC policy of the sending server is p=reject, EOP marks the message as spam instead of rejecting it. In other words, for inbound email, Office 365 treats p=reject and p=quarantine the same way.
Office 365 is configured like this because some legitimate email may fail DMARC. For example, a message might fail DMARC if it is sent to a mailing list that then relays the message to all list participants. If Office 365 rejected these messages, people could lose legitimate email and have no way to retrieve it. Instead, these messages will still fail DMARC but they will be marked as spam and not rejected. If desired, users can still get these messages in their inbox through these methods:
- Users add safe senders individually by using their email client
- Administrators create an Exchange mail flow rule (also known as a transport rule) for all users that allows messages for those particular senders.”
Let us go farther together!
Whether for a simple question or suggestion, we are at your disposal to answer it by email or by phone. | <urn:uuid:e5460c83-e64c-4f5b-95eb-dd4383e9cf8b> | CC-MAIN-2022-40 | https://www.lambertconsulting.ch/it/domain-based-message-authentification-reporting-conformance-dmarc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00578.warc.gz | en | 0.89957 | 426 | 2.75 | 3 |
Edge computing is an IT deployment designed to put applications and data as close as possible to the users or “things” that need them.
Why is Edge Computing Necessary?
Edge computing is necessary to address shortcomings in cloud-based applications and services with respect to performance and regulatory requirements. In short, cloud computing cannot always meet the response-time demands of critical applications. Companies subject to government regulations on where data may be stored can also find that cloud computing cannot deliver the local storage they need.
These shortcomings matter because the trend toward digitization to improve efficiency and business performance is fueling demand for applications that need peak performance, particularly Internet of Things (IoT) applications. IoT applications often require high bandwidth, low latency, and reliable performance while meeting regulatory and compliance mandates, making them classic candidates for the edge.
Deploying Edge Data Centers
While edge computing deployments can take many forms, they generally fall into one of three categories:
1. Local devices that serve a specific purpose, such as an appliance that runs a building’s security system or a cloud storage gateway that integrates an online storage service with premises-based systems, facilitating data transfers between them.
2. Small, localized data centers (1 to 10 racks) that offer significant processing and storage capabilities.
3. Regional data centers with more than 10 racks that serve relatively large local user populations.
Regardless of size, each of these edge examples is important to the business, so maximizing availability is essential.
It’s critical then, that companies build edge data centers with the same attention to reliability and security as they would for a large, centralized data center. This site is intended to provide the information you need to build secure, reliable, and manageable high-performance edge data centers that can help fuel your organization’s digital transformation.
How IoT is Driving the Need for Computing at the Edge
The IoT involves collecting data from various sensors and devices and applying algorithms to the data to glean insights that deliver business benefits. Industries ranging from manufacturing, utility distribution, traffic management to retail, medical and even education are making use of the technology to improve customer satisfaction, reduce costs, improve security and operations, and enrich the end user experience, to name a few benefits.
A retailer, for example, may use data from IoT applications to better serve customers, by anticipating what they may want based on past purchases, offering on-the-spot discounts, and improving their own customer service groups. For industrial environments, IoT applications can be used to support preventive maintenance programs by providing the ability to detect when the performance of a machine varies from an established baseline, indicating it’s in need of maintenance.
The list of potential use cases is virtually endless, but they all have one thing in common: collecting lots of data from many sensors and smart devices and using it to drive business improvements.
Many IoT applications rely on cloud-based resources for compute power, data storage and application intelligence that yields business insights. However, it’s often not optimal to send all the data generated by sensors and devices directly to the cloud, for reasons that generally come down to bandwidth, latency and regulatory requirements.
The 3 Main Reasons Why Edge Computing is Needed in IoT Applications
The volume of data some IoT applications create can be staggering, as can the costs associated with sending it all to the cloud, making local processing more practical and beneficial. Bandwidth is also a gating factor for any application that requires streaming large amounts of content, including the high-definition video that may be used in oil and gas exploration applications.
Some applications require extremely low latency, which is the time it takes a data packet to travel to its destination and back. Any application having to do with safety, for example – such as driverless cars, healthcare or industrial plant floor applications – require near instantaneous response time. Cloud services are not optimal in such cases due to the delay inherent in the round-trip to a centralized service.
In highly regulated industries and regions (such as in Europe with the General Data Protection Regulation, GDPR), the way in which personal information is handled is tightly controlled, including where it is stored and how it is transmitted, driving the need for localized data centers.
In all these instances and more, edge deployments are critical in addressing these issues.
Examples of Digital Transformation Benefits
Of course, all IT is about addressing business requirements, and edge is no different. Edge computing is helping businesses as they embark on digital transformations and use IoT applications to improve the customer experience and operational efficiency as well as develop new revenue streams.
Improving the customer experience
Customers see examples of IoT applications all around them. Digital signage improves their retail shopping and transportation experiences. Industrial field service personnel use augmented reality applications to help them more easily service complicated machines and devices. You can now do most of your banking from your phone and have your healthcare devices monitored from afar. IoT applications are making life easier for customers in just about every walk of life.
Improving operational efficiency
IoT applications help improve operational efficiency in areas such as predictive maintenance for all sorts of machinery and equipment, be it in industrial environments or data centers, to rectify issues before they cause downtime. Radio Frequency Identification (RFID) tracking helps retailers with inventory management and loss prevention, and enables healthcare providers to track expensive equipment, such as computers on wheels carts. Cities use IoT applications to monitor busy intersections and control traffic lights to reduce traffic jams. Indeed, improving operational efficiency is probably the biggest single reason companies deploy IoT applications.
Develop new revenue streams
Entirely new industries are cropping up based on IoT technology. Uber and Lyft wouldn’t be possible without it, nor would short-term bicycle and scooter rental services. Logistics companies can offer new services based on their ability to provide the real-time status of where containers are and whether climate controls are working properly. Predictive maintenance services that are valuable to customers also mean new revenue for manufacturers and service providers. A slew of home monitoring services now exist that rely on a series of sensors and Internet connectivity. Healthcare providers can now offer “digital hospital” services including remote device monitoring and analysis.
Examples Across Industries
Any company, in any industry, can apply IoT technology and edge computing to develop new revenue streams as well as improve customer experiences and operational efficiency. The principle behind the applications is the same, regardless of the exact implementation: devices or sensors at one end sending data to an edge data center for processing and perhaps some analytics, then to a more centralized application (often in the cloud) that delivers the promised benefit to the company.
It is clear that some vertical industries are emerging as early adopters of IoT technology and implementing successful applications. The lessons they’re learning apply to other verticals as well, so examining where they’ve had success can help fuel ideas for leaders in other industries.
Addressing Challenges at the Edge
To realize the benefits that IoT applications promise, however, requires that edge data centers have the performance and reliability that the applications demand. That presents some challenges, because edge data centers can be located literally anywhere: in a wiring closet or server room, in an office populated with employees, in a retail establishment full of employees and customers, or in a harsh outdoor environment.
No matter where it is located, ensuring reliability and performance of edge data centers involves addressing three key requirements: remote management, rapid and standardized deployment, and physical security.
Remote Management
Most edge data centers have few to no IT staff on site to manage them, whether it’s a remote, outdoor facility driving utility IoT applications or a retailer with hundreds of stores. In such instances, the ability to remotely manage and service the edge components is critical. Maintenance needs to be predictive and proactive, to ensure the site has no downtime and to reduce the cost of service calls. A cloud-based management platform that takes advantage of intelligent analytics applications can be an effective solution.
Standardized and Rapid Deployments
Given the large number of edge data centers that many organizations are going to have, it’s important they be delivered in a standardized, repeatable and rapid manner. The alternative – a series of ad hoc IT deployments – creates a nightmare scenario for both speed of deployment and ongoing management.
The solution here involves using a reference architecture that ensures consistency in each edge deployment. Such architectures define a baseline level of devices and services, while allowing for some variation depending on the requirements of each location. Even better is to have a finite number of reference designs from which to choose for each site, to ensure consistency.
Prefabricated, modular micro data centers are often a good solution for edge data centers. They include all the required power and cooling infrastructure as well as management software. It’s all pre-integrated and installed in a rack or enclosure, ready to accept IT equipment – which is typically installed by an IT solution provider or systems integrator. Some micro data centers are also certified by leading converged and hyperconverged IT equipment manufacturers, which helps ensure good performance and reliability.
Physical Security
Edge data centers may be located in server rooms and IT closets, under cash registers or desks. Even if they are in a dedicated room, it may not be secured. This leaves the edge infrastructure open to accidental damage, attack from nefarious actors who intend to do harm, as well as employees with good intentions who simply don’t know any better.
Providing proper physical security requires three components:
Monitoring the physical space, using sensors that can report on temperature and humidity levels, and detect environmental changes caused by fire, smoke, flooding or the like.
Control over the space, to ensure only authorized personnel have access to edge infrastructure.
Supervision of the environment using audio and video, with recording, so you can visually see who is accessing edge spaces.
Perhaps not surprisingly, those three elements figured prominently among respondents to an IDC survey* about the top concerns over edge deployments. Issues around security, monitoring and controlling access to the physical space accounted for five of the top six concerns the 200+ respondents had about edge computing.
* IDC, Edge Computing: The Next Stage of Datacenter Evolution, April 2018.
Author: Jamie Bourassa
Vice President of Edge Computing & Channel Strategy for the Secure Power Division of Schneider Electric
Jamie is responsible for enabling the Secure Power Division commercial strategy and ensuring that Schneider Electric aligns to the market evolutions related to Edge Compute, IoT, and other disruptions that increase the criticality of local computing for customers across all commercial and industrial segments. With a global career in IT Channels Strategy, Sales Operations and Offer Management, Jamie brings a unique set of competencies needed in evaluating and delivering on the current disruptions in the market.
Facebook today announced a new cooling system that will make some of its data centers more water and energy-efficient.
The social network, which prides itself on building environmentally friendly data centers, cools most of them using outdoor air and direct evaporative cooling systems. But in some locations, they must use indirect cooling systems to protect IT equipment from harsh environmental conditions, such as high levels of dust, extreme humidity, or elevated salinity, according to a Tuesday blog post.
In the blog post, Facebook unveiled a new indirect cooling technology it plans to start using in data centers it builds in the future. It features an advanced evaporative cooling system that uses water instead of air to cool data centers.
“When deployed, the new cooling will allow us to build highly water- and energy-efficient Facebook data centers in places where direct cooling is not feasible,” Veerendra Mulay, Facebook’s research and development mechanical engineer, wrote. “Based on our testing in several different locations, we anticipate the (new) system can reduce water usage by more than 20 percent for data centers in hot and humid climates and by almost 90 percent in cooler climates in comparison with previous indirect cooling systems.”
Using outside air for data center cooling, the approach known as “free cooling,” has been one of the biggest ways to reduce data center energy use. But poor air quality can make the approach infeasible in some locations. Data center operators in China, for example, have struggled for years with this issue, as high concentrations of air pollutants in the country’s east have led to higher IT equipment failure rates.
Facebook currently uses indirect cooling systems in two locations worldwide, Mulay said in an interview with Data Center Knowledge. The new indirect cooling system – called the StatePoint Liquid Cooling (SPLC) system – will allow the company to consider building water- and energy-efficient data centers in areas that might not have been feasible before, he said.
“We evaluate the needs on a site-by-site basis,” he said about the choice of data center cooling design Facebook uses in each new location. “We look at the climate, the salinity in the air, and other conditions. But if we go indirect, this new technology is what we will use.”
Mulay said he expects data centers that use indirect cooling systems will continue to be a small percentage of Facebook’s overall data center count, but it will grow over time when it makes sense for the business.
“This is a huge step in our commitment to sustainability,” he added.
How the Cooling Technology Works
Facebook co-developed the new data center cooling technology with Nortek Air Solutions, a manufacturer of custom commercial HVAC systems. The two companies began work on what became SPLC in 2015.
Previous indirect cooling systems use two different air loops: an outside air loop and a processed data center air loop, Mulay said. The outside air is first cooled by evaporative cooling, and then that cold air cools the processed air used to cool the data center equipment.
With SPLC, a new loop – the “processed water loop” – is introduced. Instead of the traditional approach of using water to cool air, the new design uses air to cool water, which results in lower water consumption, according to Mulay.
The heart of the SPLC system is a liquid-to-air energy exchanger, where water is cooled as it evaporates through a membrane separation layer, he wrote in the blog post. The cold water then cools the air inside the data center and keeps servers at optimal temperatures.
“What happens is that your energy exchange happens in two places, instead of just one place,” Mulay explained in the interview. “You use outside air to produce cold water. That cold water then goes to your data center, where you will use the cold water to cool the processed air, and then the processed air is sent to the servers, so they can be cooled.”
The SPLC technology works in three modes to optimize water and power consumption.
“When outside air temperatures are low, the SPLC’s most energy- and water-efficient mode uses that air to produce cold water. When outside air temperatures rise, the SPLC system will operate in an adiabatic mode, in which the system engages the heat exchanger to cool the warm outside air before it goes into the recovery coil to produce cold water,” Mulay wrote. “In hot and humid weather, the SPLC will operate in super-evaporative mode, where outside air is cooled by a pre-cooling coil and then used to produce cold water.”
Nortek owns the patent for the SPLC technology. That means other companies – not only Facebook – can contact Nortek to use it in their data centers, Mulay added. | <urn:uuid:4b43b601-ff11-436e-b6f8-c727e56ebcaa> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/facebook/facebook-s-new-data-center-cooling-design-means-it-can-build-more-places | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00578.warc.gz | en | 0.935171 | 1,018 | 2.59375 | 3 |
Constructing an experimental wave drive tank
September 8, 2022 —
There are many ways to make a thing ambulatory beyond simply slapping on some wheels. James Bruton often experiments with these unusual drive mechanisms, whether they’re practical or not. In his latest video, he built what he calls a “wave drive” to propel a tank-like robot. This experimental wave drive tank features a 3D-printed body and remote Arduino-based control.
This drive mechanism works using motion similar to someone doing the worm dance move, which is very much like how flatworms swim through water in nature. For a more technical visualization, imagine a spinning helix projected onto a 2D plane. The result looks like a sine wave, hence the name. The bottom of the wave makes contact with the ground and friction provides grip, letting the mechanism roll forward. That helix visualization also mirrors the physical implementation here, as a screw-shaped drive shaft guides the tracks as it spins.
Bruton 3D-printed nearly every physical part of this robot, with the key exception being the helical steel rods. These rods spin on bearings, and an Arduino Mega 2560 controls their 12V DC motors through driver boards. As with more typical tank tracks, forward or reverse motion happens when both motors spin in the same direction. To rotate the robot, the motors just have to spin in opposite directions. The Arduino can vector motor direction and speed in response to throttle and steering inputs from Bruton’s custom remote control.
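That throttle/steering mixing is the standard differential-drive arithmetic. The sketch below shows the idea in Python purely for illustration — the real robot does this in the Arduino Mega’s own firmware, and the -1.0 to 1.0 input range here is an assumption:

```python
def mix(throttle: float, steering: float) -> tuple[float, float]:
    """Convert throttle/steering inputs (-1.0..1.0) into left/right motor commands.

    Same sign on both outputs -> forward or reverse; opposite signs -> rotate in place.
    """
    left = max(-1.0, min(1.0, throttle + steering))
    right = max(-1.0, min(1.0, throttle - steering))
    return left, right

print(mix(0.5, 0.0))   # straight ahead: both motors at +0.5
print(mix(0.0, 0.5))   # rotate on the spot: +0.5 and -0.5
```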
As Bruton demonstrates in the video, this wave drive works – but it doesn’t work very well. It’s slow, inefficient, difficult to control, and has a hard time overcoming obstacles. That makes sense, since this motion is best suited to locomotion in a viscous fluid. Even so, it’s great to see Bruton testing the real-world practicality of another unconventional drive mechanism.
The ocean floor is famously less explored than the surface of Mars. And when our team of scientists recently mapped the seabed, and the ancient sediments beneath it, we discovered what looks like an asteroid impact crater.
Intriguingly, the crater, named “Nadir” after the nearby volcano Nadir Seamount, is of the same age as the Chicxulub impact caused by an enormous asteroid at the end of the Cretaceous period, around 66 million years ago, which wiped out the dinosaurs and many other species.
The finding, published in Science Advances, raises the question of whether the crater might be related to Chicxulub in some way. If confirmed, it would also be of huge general scientific interest, as it would be one of a very small number of known marine asteroid impacts and would give unique new insights into what happens during such a collision.
The crater was identified using “seismic reflection” as part of a wider project to reconstruct the tectonic separation of South America from Africa back in the Cretaceous period. Seismic reflection works in a similar way to ultrasound, sending pressure waves through the ocean and its floor and detecting the energy reflected back. This data allows geophysicists and geologists to reconstruct the architecture of the rocks and sediments.
Scrolling through this data at the end of 2020, we came across a highly unusual feature. Among the flat, layered sediments of the Guinea Plateau, west of Africa, was what appeared to be a large crater, a little under 10 kilometers wide and several hundred meters deep, buried below several hundred meters of sediment.
Many of its features are consistent with an impact origin, including the scale of the crater, the ratio of height to width, and the height of the crater rim. The presence of chaotic deposits outside of the crater floor also looks like “ejecta” — material expelled from the crater immediately following a collision.
We did consider other possible processes that might have formed such a crater, such as the collapse of a submarine volcano or a pillar (or diapir) of salt below the seabed. An explosive release of gas from below the surface might also be a cause. However, none of these possibilities are consistent with the local geology or the geometry of the crater.
Earthquakes, Air Blast, Fireball, and Tsunamis
After identifying and characterizing the crater, we built computer models of an impact event to see if we could replicate the crater and characterize the asteroid and its impact.
The simulation that best fits the crater shape is for an asteroid 400 meters in diameter hitting an ocean that was 800 meters deep. The effects of an impact in the ocean at such water depths are dramatic. It would result in an 800-meter-thick water column, as well as the asteroid and a considerable volume of sediment, being instantly vaporized — with a huge fireball visible hundreds of kilometers away.
Shock waves from the impact would be equivalent to a magnitude 6.5 or 7 earthquake, which would likely set off underwater landslides around the region. A train of tsunami waves would form.
The air blast from the explosion would be louder than anything heard on Earth in recorded history. The energy released would be roughly a thousand times larger than that from the recent Tonga eruption. It is also possible that the pressure waves in the atmosphere would further amplify the tsunami waves far away from the crater.
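For a sense of scale, a back-of-the-envelope kinetic energy estimate can be made from the modeled asteroid size. The density and impact speed below are generic textbook assumptions, not values from the study, and the answer shifts considerably with different choices:

```python
import math

diameter_m = 400.0      # diameter from the best-fit simulation
density = 2600.0        # kg/m^3 — assumed stony composition
velocity = 20_000.0     # m/s — assumed typical impact speed

radius = diameter_m / 2
mass = density * (4.0 / 3.0) * math.pi * radius**3      # roughly 9e10 kg
energy_joules = 0.5 * mass * velocity**2                 # kinetic energy at impact

megatons_tnt = energy_joules / 4.184e15                  # 1 megaton TNT = 4.184e15 J
print(f"mass ~ {mass:.2e} kg, energy ~ {energy_joules:.2e} J (~{megatons_tnt:,.0f} Mt TNT)")
```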
One of the most intriguing aspects of this crater is that it is the same age as the giant Chicxulub event, give or take a million years, right at the boundary between the Cretaceous and Paleogene periods 66 million years ago. Again, if this really is an impact crater, could there be some relationship between the two?
We have three ideas as to their possible relationship. The first is that they may have formed from the break-up of a parent asteroid, with the larger fragment resulting in the Chicxulub event and a smaller fragment (the “little sister”) forming the Nadir crater. In that case, the damaging effects of the Chicxulub impact may have been added to by the Nadir impact, exacerbating the severity of the mass extinction event.
The break-up event could have been caused by an earlier near-collision, when the asteroid or comet passed close enough to Earth to experience gravitational forces strong enough to pull it apart. The actual collision could then have occurred on a subsequent orbit.
Though this is less likely for a rocky asteroid, this pull-apart is exactly what happened to the Shoemaker-Levy 9 comet that collided with Jupiter back in 1994, when multiple comet fragments collided with the planet over the course of several days.
Another possibility is that Nadir was part of a longer-lived “impact cluster,” formed by a collision in the asteroid belt earlier in solar system history. This is known as the “little cousin” hypothesis.
That collision could have sent a shower of asteroids into the inner solar system, which may have collided with the Earth and the other inner planets over a more extended time interval, perhaps a million years or more. We have a precedent for such an event back in the Ordovician period — over 400 million years ago — when there were numerous impact events in a short period of time.
Lastly, of course, this may simply be a coincidence. We do expect a collision with a Nadir-sized asteroid every 700,000 years or so. For now, however, we cannot definitively state that the Nadir crater was formed by an asteroid impact until we physically recover samples from the crater floor and identify minerals that can only be formed by extreme shock pressures. To that end, we have recently submitted a proposal to drill the crater through the International Ocean Discovery Program.
As with the main impact crater hypothesis, we can only test the little sister and little cousin hypotheses by precisely dating the crater using these samples, as well as by looking for other candidate craters of a similar age.
Perhaps more importantly, could such an event happen in the near future? It’s unlikely, but the size of the asteroid that we modeled is similar to that of the Bennu asteroid currently in near-Earth orbit. Bennu is considered to be one of the two most hazardous objects in the solar system, with a 1-in-1,750 chance of colliding with Earth in the next couple of centuries.
Image Credit: NASA
The automotive industry is rapidly changing due to the introduction of new technology in modern vehicles. Our cars are connected to complex networks hosting numerous sensors and smart components that are in turn connected to the Internet. For this reason, their attack surface increases dramatically.
Connected cars are complex systems composed of numerous units that exchange large amounts of data, and threat actors can manipulate those systems in order to gain control of smart vehicles.
Over the years, a growing number of security experts have focused their studies on car hacking, demonstrating with success how attackers could compromise the various components in the vehicles.
One of the most well-known attacks on a smart car is the remote hack of a Jeep demonstrated by the security duo Charlie Miller and Chris Valasek to raise awareness in the automotive industry about the possible risks related to cyberattacks.
Miller and Valasek exploited security flaws in the Uconnect automobile system with cellular connectivity to take over a 2014 Jeep Cherokee.
The hack shocked the automotive industry and provoked a public debate on the cyber risks of connected vehicles.
Security experts speculate that in some cases, car makers failed to implement adequate protections from cyberattacks. Threat actors could hack a vehicle to steal sensitive data managed by its components, to sabotage it, or simply to steal the vehicle itself.
Connected cars can share information with other vehicles in C2C (car-to-car) or C2I (Car-to-Infrastructure) connections in real-time. In essence, they are becoming sophisticated nodes of the global network that manages massive amounts of information.
According to the study “Driving Security: Cyber Assurance for Next-Generation Vehicle,” the following aspects are essential when it comes to manufacturing secure connected cars:
Design secure cars. Security requirements are part of the early stage of the design process. Designers should focus on security, implementing protections against known threats for each component, subsystem, and network that the connected vehicle will be exposed to once it leaves the car maker's production line.
Create safe networks. Internal communications and communications with external entities should be encrypted. Car makers also have to design monitoring systems able to detect suspicious activities that could be potentially associated with attack patterns.
Vehicle hardening. Vendors have to harden their connected cars at all levels:
- Encryption of data at rest and data in motion
- Implementing proper cloud security controls
- Access control mechanisms
- Securing the operating system
- Penetration testing of the apps
The threat landscape in smart car security
Modern vehicles include interconnected electronics systems that could be targeted by threat actors for various purposes.
Today, cars are able to interact with the surrounding environment by exchanging data with control stations set up to provide a broad range of services to the populations of smart cities. The vehicles include sophisticated controllers that manage data collected through a network of sensors in real-time.
To have an idea of the complexity of a modern vehicle, let’s consider that a controller of a luxury car has more than 100 million lines of computer code, while software and electronics account for 40% of the cost of the car.
Every component in a car and its communication channels could potentially be targeted by cybercriminals.
What are the attack vectors for smart cars?
Security experts have demonstrated multiple attack techniques against connected cars. Some of them were also exploited by cybercriminals in real attack scenarios.
Here are the most popular car hacking techniques:
- Attacks against telematics systems. Telematics systems allow vehicles to communicate with a remote center and exchange with it the telemetry data and other information. Some car manufacturers already offer their customers telemetry services for remote diagnostics that could prevent accidental crashes and electronic faults. Attackers could exploit vulnerabilities in these systems to potentially interfere with onboard components and modify their parameters to alter the vehicle’s response to the driver’s commands.
- Malware exploits. An attacker could inject tailormade malware into some car components, modifying their behavior or triggering a Denial of Service condition. A malware program could be injected in different ways. For example, using a USB stick inserted into an MP3 reader or through wireless technology (wifi, Bluetooth, mobile communication).
- Unauthorized applications. On-board computers can download and execute applications and related updates. A threat actor could tamper with these applications to get malicious code executed on the target vehicle. In a classic supply chain attack, hackers could inject the car with a tainted update that, once installed and executed on the vehicle, could allow attackers to carry out malicious activities.
- OBD. Tailormade software could exploit the OBD-II (on-board diagnostics) port for installation. Once the CAN bus is accessed via the connector, it is possible to monitor every component connected to it.
- Door locks and key fobs. An attacker could emulate the access codes used by key fobs and door locks to control the locks and to start or stop the car’s engine.
Our vehicles are similar to a network of computers that communicate in an “unsecure” way on the internal bus. This means that hackers could take over a vehicle by sending a large number of controller area network packets (both normal packets and diagnostic packets) on the CAN bus to internal components. If the malicious packets arrive at the ECUs before the legitimate packets, these components consider them as valid.
Normal packets could be sent by attackers to manipulate multiple components, including the car’s speedometer, odometer, on-board navigation system, steering, brakes, and acceleration.
Attackers could send diagnostic packets to alter the behavior of some of the components in the vehicle, such as managing the brakes, killing the engine, flashing the lights, locking or unlocking the doors, and modifying the fuel gauge reading.
Unlike normal packets, diagnostic activities against an ECU need to be authenticated. However, weak implementation of the authentication process poses serious risks to the users.
Threat actors could target modern vehicles for multiple reasons, from sabotage to cyber espionage. An attacker could launch an attack to take over the car and cause a crash or to gather information stored by on board systems that could allow it to spy on the owner.
Car makers should implement security by design for the internal architecture of the vehicles. Here are some of the essential mitigations proposed by security researchers:
- Implement network segmentation to avoid threat actors exploiting security flaws in a component in order to access the rest of the network, including critical units.
- Implement authentication or authorization for any component connected through the CAN bus.
- Encrypt the traffic on the CAN bus.
Security researchers also suggest implementing anomaly detection mechanisms to prevent cyberattacks. An anomaly detection mechanism can leverage patterns for “normal” behavior for any component in the vehicle. Any deviation from this baseline must be analyzed and countermeasures can be potentially activated. Researchers Miller and Valasek suggested real-time analysis of CAN packets over time to detect potentially malicious traffic.
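A minimal illustration of that baselining idea: most legitimate CAN traffic is highly periodic per arbitration ID, so a sudden jump in the message rate for an ID — or the appearance of an ID never seen before — is a useful red flag. The sketch below operates on (timestamp, arbitration_id) tuples and is purely illustrative; a production system would read frames from the vehicle bus (for example via the python-can library) and use far more robust statistics:

```python
from collections import defaultdict

def build_baseline(frames):
    """frames: iterable of (timestamp_seconds, arbitration_id).
    Returns the average messages/second observed for each arbitration ID."""
    counts = defaultdict(int)
    t_min, t_max = float("inf"), float("-inf")
    for ts, arb_id in frames:
        counts[arb_id] += 1
        t_min, t_max = min(t_min, ts), max(t_max, ts)
    duration = max(t_max - t_min, 1e-6)
    return {arb_id: n / duration for arb_id, n in counts.items()}

def flag_anomalies(baseline, observed_rates, factor=3.0):
    """Flag IDs that are unknown or whose observed rate exceeds the baseline by `factor`."""
    alerts = []
    for arb_id, rate in observed_rates.items():
        expected = baseline.get(arb_id)
        if expected is None or rate > factor * expected:
            alerts.append((hex(arb_id), rate, expected))
    return alerts

baseline = build_baseline([(0.0, 0x1A0), (0.1, 0x1A0), (0.2, 0x1A0), (0.2, 0x2B0)])
print(flag_anomalies(baseline, {0x1A0: 60.0, 0x2B0: 5.0, 0x7DF: 10.0}))
```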
Unfortunately, in many cases, auto manufacturers avoid adding such defense systems because they would increase the complexity of the vehicles.
To mitigate the risk of attacks, experts recommend manually applying software security patches provided by the car makers when the vendors don’t push them ‘over the air’.
Avoid installing any software that is not approved by the car manufacturer and don’t install updates downloaded from third-party repositories. This includes diagnostics software to monitor your car's performance or different types of entertainment software that has Internet connectivity. Third-party software could be affected by vulnerabilities that could be exploited by hackers to steal or take over your car.
Researchers and authorities have to support and urge the automotive industry in implementing mandatory requirements for the safety and security of the vehicles. | <urn:uuid:f9c7d22c-99ce-4865-a921-fbf3781ae806> | CC-MAIN-2022-40 | https://cybernews.com/security/your-new-smart-car-is-an-iot-device-that-can-be-hacked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00578.warc.gz | en | 0.935748 | 1,576 | 2.875 | 3 |
All About Rootkits. Definition, Types, Detection, Prevention
In this article, we will discuss the functionality of a rootkit, go through classifications, detection methodologies, and, of course, rootkit prevention.
What is a Rootkit?
Rootkits are malicious computer programs designed to infiltrate a machine for the purpose of obtaining administrator or system-level privileges. Despite their clandestine behavior, rootkits are only intended to bypass user authentication mechanisms before the arrival of a malicious payload (i.e., they often work in tandem with trojans or other types of viruses).
As rootkits come in advance of various infectors, they do possess some degree of autonomy. Most are designed to automatically identify and exploit backdoors or, if none is present, rubber-stamp the installation process of legacy or deprecated software. Of course, there are cases when malicious actors would manually exploit vulnerabilities before dropping a rootkit on the victim’s machine.
Types of rootkits
In this section, we’ll go through kernel rootkits, hardware & software rootkits, Hyper-V, and more.
1. Kernel rootkit
This type of rootkit is designed to function at the level of the operating system itself. What this means is that the rootkit can effectively add new code to the OS, or even delete and replace OS code.
Kernel rootkits are advanced and complex pieces of malware and require advanced technical knowledge to properly create one. If the rootkit has numerous bugs and glitches, then this heavily impacts a computer’s performance.
On a more positive note, a buggy kernel rootkit is easier to detect since it leaves behind a trail of clues and breadcrumbs for an antivirus or anti-rootkit.
2. Hardware or firmware rootkit
Instead of targeting the OS, firmware/hardware rootkits go after the software that runs certain hardware components. In 2008, a European crime ring managed to infect card-readers with a firmware rootkit. This then allowed them to intercept the credit card data and send it overseas.
Another proof-of-concept rootkit managed to bury itself in the hard drive’s own firmware and then intercept any data written to the disk.
3. Hyper-V rootkits
Virtualized rootkits are a newer development that takes advantage of virtualization technology. Security researchers developed the first such rootkit as a proof of concept in 2006, and these rootkits are even more powerful than kernel rootkits.
A kernel rootkit will boot up at the same time as the operating system, but a virtualized rootkit will boot up first, create a virtual machine and only then will it boot up the operating system.
To give you a visual sense of this, imagine the rootkit and the boot-up process as if they were two boxes.
- In a kernel rootkit, the first box is the boot-up process. The rootkit is the second box, that goes inside the first box.
- In a virtualized rootkit, the first box is the rootkit itself. The boot-up process is the second box that goes within the first box.
As you can imagine, virtualized rootkits have even more control over your system than a kernel one. And because they bury themselves so deep within the device, removal can be nearly impossible.
4. Bootloader rootkit or bootkit
This type of rootkit infects the boot records that load before the operating system itself. Since it attaches itself to those boot records, the rootkit won’t show up in the standard file-system view. As a result, antivirus and anti-rootkit software will have a hard time detecting the malware.
To make matters even worse, the rootkit might modify the boot records, and, by removing it, you risk damaging your PC.
5. Memory rootkit
Memory rootkits hide in the RAM of your computer. Like kernel rootkits, these can reduce the performance of your RAM by occupying its resources with all the malicious processes involved.
6. User-mode or application rootkit
User-mode rootkits are simpler and easier to detect than kernel or boot record rootkits. This is because they hide within an application itself, and not system-critical files.
In other words, they operate at the level of standard programs such as Paint, Word, PC games and so on. This means a good antivirus or anti-rootkit program will probably find the malware and then remove it.
Post-intrusion rootkit detection & removal is challenging, mostly because of the fact that rootkits have the ability to disrupt antivirus software. More than that, once the rootkit has established a bridgehead, it can be used to whitelist processes associated with malicious software.
The detection and removal processes are heavily influenced by the rootkit’s type. For instance, most software-based rootkits can be detected and subsequently removed using behavioral analysis or mem dump analysis. However, hardware-based rootkits cannot be removed through software alone and may require physically replacing the affected components. The same goes for kernel-level rootkits – although they operate at the software level, kernel rootkits usually cannot be removed using the above-mentioned methodology and, in most cases, would entail an OS reinstallation.
Depending on the rootkit type and infiltration method, detection can be done in several ways: mem dumps analysis, integrity checking, difference-based, behavioral-based or employing an alternative (and trusted) medium.
Memory Dumps Analysis
Effective to some degree, force-dumping the virtual memory may help you in detecting most software-based rootkits, including those embedded in Hyper-V. Mem dump analysis is performed offline, but it may require access to online code repositories.
A PKI-based code-signing check can be used to detect boot- and kernel-level rootkits. The approach entails a comparison between a baseline hash output and a hash output computed at any moment in time to establish whether or not any tempering was done to the initial, publisher-signed file.
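The hash-comparison half of that idea is straightforward to sketch; verifying the publisher’s signature itself requires the vendor’s PKI tooling and is not shown. The file paths and baseline format below are placeholders:

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_integrity(baseline_path: str, files: list[str]) -> list[str]:
    """Return the files whose current hash no longer matches the trusted baseline.

    The baseline is a JSON map of {"path": "known-good sha256"} captured on a clean system."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    return [p for p in files if sha256_of(p) != baseline.get(p)]
```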
DA or difference-based analysis involves the use of an API to compare raw data with infected data. Raw data is produced by trusted sources (e.g., system images), while the rootkit-infected data is generated by an API specifically designed for this purpose.
Well-Known Rootkit Examples
Most cybercriminals don’t actually code their own malware. Instead, they just use already existing malicious programs. Most of the time, they only adjust the rootkit’s settings, while some technically skilled add their own code. This is called the malware economy and is worth its own read.
Just like in the real economy, some malware has bigger market shares than others. In this section, we want to cover some of the more widespread rootkit families out there.
If you are unfortunate enough to get infected with a rootkit, chances are it will be one of these.
The ZeroAccess rootkit is responsible for the creation of the ZeroAccess botnet, which hogs your resources as it mines for bitcoins or commits click fraud by spamming you with ads.
At some point, security researchers estimated the ZeroAccess botnet contained 1-2 million PCs. A large part of it (but not all, unfortunately) was taken down by Microsoft as well as other security companies and agencies.
While not as strong a threat as before, variations of the ZeroAccess rootkit are still out there and actively used.
At one point, the botnet based on the TDSS rootkit was thought to be the second biggest in the world. Following some concerted law enforcement actions, several arrests were made and the botnet entered a period of decline.
The malware code, however, is still out there, and actively used. Unlike the ZeroAccess rootkit, TDSS is after your personal data such as credit card data, online bank accounts, passwords, Social Security numbers, and so on.
The Necurs rootkit protects other types of malware that enslave a PC to the botnet, thus making sure the infection cannot be removed.
Unlike TDSS and ZeroAccess, Necurs is an active botnet, and the cybercriminals behind it are still actively trying to grow it.
How to prevent a rootkit infection
Rootkits may be troublesome and persistent, but in the end, they are just programs like many other types of malware. This means that they only infect your computer after you’ve somehow launched the malicious program that carries the rootkit.
Here are some basic steps you should follow to make sure you don’t get infected with a rootkit, and thus avoid all of these painful and time-consuming steps to remove one.
Be wary of phishing or spear-phishing attempts
Phishing is one of the most frequently used methods to infect people with malware. The malicious hackers simply spam a huge email list with messages designed to trick you into clicking a link or opening an attachment.
The fake message can be anything really, from a Nigerian prince asking for help to retrieve his gold, to really well-crafted ones such as fake messages from Google that request you update your login information.
The attachment can be anything, such as a Word or Excel document, a regular .exe program or an infected JPEG.
Keep your software updated at all times
Outdated software is one of the biggest sources of malware infection. Like any human creation, software programs are imperfect by design, meaning they come with many bugs and vulnerabilities that allow a malicious hacker to exploit them.
For this reason, keeping your software up-to-date at all times is one of the best things you can do to stay safe on the Internet and prevent a malicious hacker from infecting you with malware.
Since updating your software can be such a chore, we recommend you use an automated program to do that for you. To this end, we suggest you use our own Heimdal™ Patch & Asset Management, which we specifically designed to handle this sort of problem.
Use traffic filtering software
One major flaw of antivirus is that the malware has to effectively touch your PC before the antivirus becomes useful.
Traffic filtering software, on the other hand, scans your inbound and outbound traffic to make sure no malware program is about to come to land on your PC as well as prevent private and confidential information from leaking to any suspicious receivers.
One such program that we wholeheartedly recommend is our own Heimdal™ Threat Prevention, which specializes in detecting malicious traffic and blocking it from reaching your PC.
Rootkits are some of the most complex and persistent types of malware threats out there. We stopped short of saying this, but if not even a BIOS flash is able to remove the rootkit, then you just might have to throw away that PC and just see which hardware components, if any, you can reuse.
Like with anything in life, the best treatment to a rootkit infection is to prevent one from happening.
When dealing with HIPAA Rules and discussing the role of HIPAA covered entities in securing data, we often encounter the abbreviation PHI, but what does PHI stand for? What data or elements do we refer to when we talk about PHI?
What Does PHI Stand For?
PHI is the abbreviation we use when we talk about Protected Health Information. As mentioned above, it is most often used in connection with HIPAA, which is the acronym for the Health Insurance Portability and Accountability Act.
PHI refers to quite a broad range of information, both digital and printed. We also sometimes speak about ePHI, which is only in relation to electronic PHI. To understand what is included in PHI, we need to break the term down into its parts, in this case “protected” and “health information”.
When we say “protected”, we are referring to data that falls under the umbrella of the HIPAA Privacy and Security Rules. These Rules govern how certain organizations in the healthcare industry – for example healthcare providers, health insurance plans, and healthcare clearinghouses, as well as their business associates – manage the patient data they work with. The Rules require that these organizations put various administrative, physical, and technical safeguards in place to protect the privacy, integrity, and availability of any identifiable data that they deal with.
Moving to the second part of PHI, “health information”, means we are talking about information that is relevant to a patients’ treatment. Such information includes but is not limited to: medical history, diagnoses, test results, prescribed medications, and demographic grouping. Payment information for medical services is also covered by PHI, as is any information that can be used to identify the patient, for example medical record numbers, insurance identifiers, Social Security numbers, or other unique identifiers.
HIPAA covered entities may come into contact with PHI at different stages in the history of the patient. They may be the ones that create new PHI or add new elements; they may receive existing PHI from another source; they may be involved in data storage where PHI is included in the data; or they may transmit or facilitate the transmission of PHI. We also use the term PHI for health information created in the past, being created currently, or for information that will be added to the file in the future. It covers both physical and mental health information.
Something to note is that we do not use PHI as a term to refer to information that is included in education records, nor for details that a HIPAA covered entity may be required to access and record due to its status as an employer.
It is possible to de-identify or anonymize PHI, in which case it no longer contains the sensitive “protected” information and is therefore no longer considered PHI. In order to do this, certain elements must be removed from the data to make it too difficult to reasonably identify the patient or patients that were originally concerned. There are two primary methods to do this: the Expert Determination method and the Safe Harbor method.
To summarize, the Expert Determination method is quite self-explanatory: a qualified statistical expert can be consulted and their opinion must be that the risk of re-identifying a patient from the data released is low enough that it is acceptable under HIPAA Privacy Rule requirements. The Safe Harbor method is somewhat different in that it is a more procedural approach: specific identifiers, listed below, must be removed from the data so that any direct references to the patients are redacted. There are 18 such identifiers that are to be removed from PHI in order for it to be considered de-identified and no longer protected. These elements are:
- Names
- Geographic data below state level
- All elements of dates, except the year (including admission and discharge dates, dates of birth, dates of death, any ages over 89 years old, and elements of dates (including year) that are indicative of age)
- Telephone numbers
- Fax numbers
- Email addresses
- Social Security numbers
- Medical record numbers
- Health plan beneficiary numbers
- Account numbers
- Certificate/license numbers
- Vehicle identifiers and serial numbers including license plates
- Device identifiers and serial numbers
- Web URLs
- Internet protocol (IP) addresses
- Biometric identifiers (i.e. retinal scan, fingerprints, voice prints)
- Full face photos and similar images
- Any unique identifying number, characteristic or code
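To make the Safe Harbor idea concrete, here is a deliberately minimal sketch that masks a few of the identifiers above (Social Security numbers, phone numbers, and email addresses) using regular expressions. It is illustrative only — genuine de-identification must address all eighteen identifier classes and be validated before any data release:

```python
import re

# Patterns for a small subset of the Safe Harbor identifiers (illustrative, not exhaustive)
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Call John at 555-867-5309 or email john.doe@example.com, SSN 123-45-6789."))
```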
In closing, PHI refers to Protected Health Information – which is essentially any information from which a patient can be identified, including payment and other information. All staff of HIPAA covered entities should be appropriately trained to know what constitutes PHI, what they can and can’t do with PHI, and the different safeguards that should be in place to keep this special and privileged information “protected”. Mismanagement or mishandling of PHI can lead to serious repercussions for both organizations and individuals, so it is of the utmost importance that all parties understand exactly what PHI is and why it is protected. | <urn:uuid:40126e68-3deb-4339-a8c9-7bfcc025543e> | CC-MAIN-2022-40 | https://www.compliancejunction.com/what-does-phi-stand-for/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00778.warc.gz | en | 0.945003 | 1,029 | 3.203125 | 3 |
The digitally connected world is generating, collecting, and analyzing large volumes of data. Not all of the data that is generated is collected, and not all of the collected data is analyzed. There are many constraints around data privacy and security that need to be addressed if all of the data we collect is to be made available for analysis as well. For example, GDPR applies multiple restrictions on capturing, storing, processing, sharing, and disposing of individual data. The other challenge is that data collection and analysis often happen in silos. The ecosystem of an organization consists of many entities, such as employees, customers, partners, suppliers, regulators, and society at large. However, when it comes to data, all of these are viewed in isolation, which leads to incomplete data collection, partial analysis, and uncertain actionable insights. If these entities could share data without compromising the privacy and security concerns of the participants, it would vastly enhance the quality of analysis and insights. For every step an organization takes to make data available to third parties, it takes two steps backward when there are data breaches (e.g., Capital One, Equifax, Cambridge Analytica). Far more effort is spent on securing data than on sharing it. With rapid advances in cloud technology, organizations are increasingly adopting the cloud for data storage and computing. Data security and privacy is a key area of concern that inhibits the journey toward a world of “open data”. It is also important to know how different stakeholders will benefit from, or be impacted by, the sharing of data.
Privacy Enhancing Techniques (PETs): Democratization of data, machine learning models, and insights across ecosystem participants helps create substantial value for all. A fine balance must be kept between sharing data across participants and ensuring compliance with data security and privacy laws. Privacy Enhancing Techniques are a breakthrough approach to this. Though these techniques have existed for some time, they have not been applied at the scale required in a “data hungry” machine learning world. Using PETs, data can be shared in a secure and trustworthy manner, and models can be built and deployed without disclosing the actual data. The following are some of the widely used PETs:
Differential Privacy: It is a method for publicly sharing information about a dataset by describing the patterns within the dataset while withholding information about individuals in it. E.g.: In a study of cancer patients, information about a patient’s specific condition will be protected. No additional information is disclosed beyond what is already generally available about the patient, and the inferences from the study would not materially change if the patient were removed from it.
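A common mechanism behind differential privacy is adding calibrated noise to aggregate query results. The sketch below applies the Laplace mechanism to a simple count query; the epsilon value and the data are illustrative:

```python
import numpy as np

def private_count(values, predicate, epsilon=0.5):
    """Differentially private count. A count query has sensitivity 1, so Laplace
    noise with scale 1/epsilon masks the presence or absence of any single individual."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 61, 45, 72, 29, 68, 55]
print(private_count(ages, lambda age: age > 60))   # noisy count of patients over 60
```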
Federated learning: The typical approach is to bring all the data from different sources into a centralized data lake or warehouse where machine learning models are built; the data moves from different sources across systems and environments. In federated learning, data stays where it was generated. The models are trained individually at the source of the data, and only the model is sent to the centralized server. The difference is that instead of data moving from one location to another, the model moves. Google pioneered this approach, and it has a lot of potential where sharing data across networks is risky or a concern. E.g.: AI in radiology is an emerging trend. In scenarios where X-ray, CT, and MRI studies are done for patients, hospitals and diagnostic companies are constrained from sharing such data with central servers for analysis. In such instances the ML models are trained on on-premise servers and the trained model is sent to a central model server. The enriched model is then sent to another hospital where it is further trained on a new set of data, which helps to continuously mature the model. While the model is being shared, the data stays within the hospital, thereby securing the privacy of the patient.
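The aggregation step at the central server is essentially a weighted average of the locally trained parameters. Below is a minimal NumPy sketch of FedAvg-style aggregation; the sites, dataset sizes, and two-array “model” are made up, and the local training loops and secure transport are omitted:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average each parameter array across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hospitals, each holding a tiny "model" made of two parameter arrays
site_a = [np.array([0.2, 0.4]), np.array([1.0])]
site_b = [np.array([0.6, 0.0]), np.array([3.0])]

global_model = federated_average([site_a, site_b], client_sizes=[200, 600])
print(global_model)   # site_b's parameters get 3x the weight of site_a's
```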
Homomorphic Encryption: Homomorphic encryption is the transformation of data into ciphertext while preserving the relationships between elements. The data can be processed and analyzed, and insights can be gleaned, just as with the original data. Homomorphic encryption allows mathematical operations such as addition, multiplication, and polynomial transformations to be performed directly on the encrypted data, yielding the same results as if they had been performed on the original data. By applying this technique, we can securely store data in the cloud and make use of the out-of-the-box analytics capabilities the platforms offer.
Zero Knowledge Proof: It is a cryptographic method by which the actual data of the subject is not revealed as is. Instead, a response is provided that either validates or invalidates a query from a third party. This has wide application in securing personally identifiable information (PII) in healthcare, financial services, telecom, etc. It also has application in securing blockchain transactions. E.g.: If a third party would like to check whether a customer’s salary exceeds $100,000 for processing a loan, the KYC provider only responds with a Yes or No, without disclosing the customer’s actual salary.
Secure multiparty computation: Secure multiparty computation (MPC) is a method that enables the safe sharing of data between multiple parties. None of the participants needs to reveal their data in this approach. MPC can also facilitate private multi-party data analysis and machine learning: different parties send encrypted data to each other and can train a machine learning model on the consolidated data without disclosing the actual data. This removes the need for a centralized data aggregator.
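One building block behind MPC is additive secret sharing: each party splits its value into random shares so that no single share reveals anything, yet the shares sum back to the original value. The toy sketch below computes a joint total over integers modulo a large prime; the hospital counts are made up, and a real protocol would also handle share distribution and malicious parties:

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int):
    """Split `value` into n_parties random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three hospitals jointly compute their total patient count without revealing their own
counts = [1200, 830, 2045]
all_shares = [share(c, 3) for c in counts]

# Each party sums the one share it received from every hospital and publishes only that sum
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
print(sum(partial_sums) % PRIME)   # 4075 — the aggregate, with no individual count disclosed
```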
Libraries and frameworks for secure machine learning: While we secure data, it is equally important to secure the machine learning models as well. Models can be subject to different types of attacks such as membership inference, model inversion and model parameter extraction. PySyft, TF Encrypted and TF Privacy are some of the frameworks which help in securing models. PySyft extends PyTorch, TensorFlow and Keras with capabilities for remote execution, differential privacy, homomorphic encryption, secure multi-party computation and federated learning. TF Encrypted is a framework for encrypted machine learning in TensorFlow; it integrates secure multi-party computation and homomorphic encryption, and also offers a high-level API, TF Encrypted Keras, which aims to make privacy-preserving and secure machine learning possible without requiring expertise in building customized encryption algorithms. TF Privacy is an important library which applies differential privacy concepts to ensure models don't memorize specific training data and are more generic in nature.
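The common building block these libraries implement for training is differentially private SGD: clip each example's gradient, then add calibrated noise. A library-agnostic sketch of that step (not any particular framework's API):

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip every example's gradient to clip_norm, average, then add Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(clipped)
    return mean_grad + np.random.normal(0.0, noise_std, size=mean_grad.shape)
```

Clipping bounds any single record's influence on the model, and the noise hides whatever influence remains, which is what prevents memorization of individual training examples.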
By adopting the principles of privacy by design (i.e., ensuring privacy in the design and throughout the life cycle of the system) and secure machine learning, organizations can bring trustworthiness to AI solutions. Since data is transformed or hashed, it becomes easier to make data available for audit trails and for the explainability of models. It also creates opportunities for democratizing data, which leads to larger participation among stakeholders.
As more organizations adopt cloud technologies for data storage and analysis and proceed to embed AI models in their business processes, it is important to ensure that concerns around data and model security are addressed. With many AI applications moving from the proof-of-concept stage to production, the risks around data and model security are very high. Data scientists and data engineers should ensure that security and privacy are integral features of model development; they can never be an add-on. This must be a key consideration when AI application architecture reviews and testing are undertaken. Though there are some limitations around model training time, inference time and performance metrics in secure machine learning, the benefits of ensuring a secure environment far outweigh the drawbacks. While many view regulations like GDPR as a bane for innovation, there is a flip side: regulations provide clarity on the art of the possible, so innovators are very clear on the boundaries they need to operate in. To be successful in the AI journey, it is important for organizations to be more responsible with customer data and walk that extra mile to gain the confidence and trust of their customers. For more details on secure and collaborative machine learning, please contact email@example.com
What is Ransomware? [Video]
With new hybrid work conditions, more businesses have experienced the potentially devastating consequences of ransomware. But what is it exactly and how can you prevent it from affecting your business? Learn more by watching this video.
In this video, we’ll cover:
1. What is Ransomware? 0:20
Ransomware is a classification of malware with an end goal for the hackers of demanding a ransom for the damage they have done.
2. What are the Points of a Ransomware Attack? 0:52
There are different points or stages of a ransomware attack. They are:
Get Access to the Network 0:56
Decide When They Attack 1:18
Carry Out the Attack 1:24
Notify You & Request for Ransom 2:15
3. What are the Best Ways to Avoid a Ransomware Attack? 3:10
The best way to avoid a ransomware attack is to have a good IT company and strong security measures to help minimize that risk for you and your business.
Read our article to learn about the ways to prevent ransomware attacks.
Want to find out where your current cybersecurity measures stand? Fill out this form for a free IT security assessment. | <urn:uuid:8b228ba4-72e5-4baf-83cd-c0945dc48110> | CC-MAIN-2022-40 | https://www.itsasap.com/blog/what-is-ransomware-video | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00778.warc.gz | en | 0.913793 | 256 | 2.515625 | 3 |
Are you a physician? Do you work in a private practice or a clinic setting? Are you a healthcare facility or a business that works with the healthcare industry?
If you collect, store, share, and/or use patient information, you must follow HIPAA rules. If you’re unsure about how to become HIPAA compliant, continue reading. This article will explain HIPAA and compliance strategies.
You will also learn about making sure your remote workers meet HIPAA standards.
What Is the HIPAA Law?
Congress passed The Health Insurance Portability and Accountability Act (HIPAA) in 1996. The U.S. Department of Health and Human Services (HHS) also passed the Privacy Rules to implement the Act. HIPAA ensures the following:
- Transfer and continuity of health insurance coverage when someone changes or loses their job
- Decreases health care abuse and fraud
- Established standards for the handling of all health-related electronic billing and processing
- Mandates protection and confidential processing of all protected health information
HHS and the Office of Civil Rights (OCR) implement and enforce the HIPAA Privacy Rule. They have the authority to impose civil money penalties for violations.
What Is PHI?
Protected health information (PHI) is the central focus of HIPAA and the Privacy Rule. All healthcare providers and facilities must ensure safe and confidential handling of PHI. This rule includes all third parties and business associates working with the facility or provider.
What Is a Covered Entity or Business Associate?
The HIPAA Privacy rule strictly defines covered entities and business associates (CE/BA). These are organizations that interact and share PHI with your facility.
How to Become HIPAA Compliant
Today, it’s not enough to be HIPAA compliant. By law, you must be ready to show how you meet HIPAA compliance requirements. The following is a guide to ensure your readiness.
Search for possible PHI and electronic (ePHI) vulnerabilities and risk-mitigation strategies. Select an individual to develop and implement policies and procedures. Ensure all CE/BAs are HIPAA compliant before granting access to PHI or ePHI.
Educate and document that all employees who handle PHI have completed HIPAA training. Track staff compliance with established policies and procedures.
Set up physical safeguards including restricted access to areas that contain PHI. Maintain a record of who accesses PHI. Develop policies related to the transfer, disposal, and re-use of any electronic media.
Place barriers to limit visual and auditory access to PHI by unauthorized individuals.
Control access to PHI data via passwords or other secure methods. Protect the PHI from unauthorized changes and data breaches during electronic transmission. Develop procedures for the proper way to destroy PHI when appropriate.
CE/BAs must adhere to HIPAA Privacy Rules as well. Compliance assessments must occur on a routine basis.
What Is a HIPAA Security Breach?
HIPAA § 164.402 defines a breach as any acquisition, access, use, or disclosure of PHI. This rule excludes the following situations.
The first is an unintentional breach by staff members of authorized (CE/BA). It’s considered unintentional if the acquisition, access, or use was made in good faith.
The action must have occurred within their scope of authority. Last, it must not have led to further use or disclosure.
Inadvertent disclosure involves the access and sharing of PHI between authorized persons at a CE/BA. The PHI must not have been used or disclosed in any further manner.
Last, the CE/BA must, in good faith, believe the unauthorized person who received the PHI won’t keep or use it.
Steps to Take If a Breach Occurs
If someone in a facility suspects a possible breach, an investigation must take place. They must determine if the breach meets HIPAA's "low probability of compromise" threshold. Facilities should assume a breach has occurred if they suspect that the privacy and security of PHI have been compromised.
The “date of discovery” describes the date that a CE/BA or facility finds a credible breach. If the breach involved over 500 people, notify a prominent media outlet in the affected area. Also, notify the HHS. All involved individuals must also receive a notification.
In cases involving less than 500 people, the facility/CE/BA can keep a log of relevant data. They must notify HHS within 60 days after the end of the calendar year.
How Do You Prevent Security Breaches?
HIPAA only mandates initiation of the breach notification process for unsecured PHI. Thus, let’s turn our attention to breach prevention.
Develop policy and procedure manuals for the following:
- Disaster recovery
- Patient privacy
Also, include manuals covering provider, employee, patient, and CE/BA procedures.
Complete a risk assessment of physical, technical, and personnel vulnerabilities. How many people and devices have access to PHI? What natural disasters occur in your area that might compromise security?
All employees must complete privacy training on a routine basis. How will you structure this training? How will you document staff participation?
Steps to Take to Ensure HIPAA Compliance with Remote Workers
Today’s workplace has changed dramatically. Many individuals now work from home or other remote locations. They may travel for their work activities as well.
This can make HIPAA compliance more challenging. The following suggestions can help ensure the protection and compliance of remote workers.
- Ensure home wireless routers have encryption capability
- Change wireless router passwords on a set schedule
- Ensure all personal devices with access to PHI are encrypted and password protected
- Don’t allow access to the facility network until devices are configured, have firewalls, and antivirus protection
- Encrypt PHI before transmission
- Mandate that all employees use a VPN when remotely accessing the company network
- Provide all employees with a HIPAA-compliant shredder
- Provide lockable file cabinets or safes to store hardcopy PHI
Develop policies and procedures outlining expectations for employee work protocols. Prior to beginning remote work, have employees sign a Confidentiality Agreement. If you allow employees to use their own device, establish a Bring Your Own Device Agreement.
Behavioral protocols may include the following.
- Never allow anyone else to use your device that contains PHI
- Mandate adherence to media sanitization policies
- Mandate that employees disconnect from the company network when they stop working.
- Set up IT configured timeouts that disconnect the employee from the network
Review and document all remote access activity.
Are You a Healthcare Provider or in Charge of a Health Facility?
All businesses that collect, store, process, and share PHI must maintain HIPAA compliance. This article described how to become HIPAA compliant. It also addressed special considerations for remote workers.
HIPAA Security Suite provides solutions to assist healthcare organizations and CE/BAs meet HIPAA regulations. We also help you ensure ongoing compliance. Contact us today to ask questions and learn more about our services.
This post is the fourth in a series that describes hunting, diagnosing, and best practices to security using Python. We recommend reading the first three parts before continuing. Part 1 | Part 2 | Part 3
Working in the ICS information security space affords the opportunity to visit some cool critical infrastructure sites. From massive refineries that turn raw crude into diesel and gas to wind farms that harness the power of nature to generate electricity, it is easy to get caught in awe of the process itself and ignore some of the lower-level characteristics of control systems. This week, we will look at a protocol commonly overlooked by many but crucial to control system operation. The Tabular Data Stream (TDS) protocol is a core component of many data historians and plays a significant role in industrial networks. We will define TDS, where TDS typically lives, and how to hunt in TDS data.
What is Tabular Data Stream?
The Tabular Data Stream (TDS) protocol is an application layer protocol initially invented by a former enterprise software and services company, Sybase, in the mid-1980s. In 1986, Microsoft entered an agreement with Sybase to license Sybase’s DataServer application. The agreement between Sybase and Microsoft led to the first version of Microsoft SQL Server. In 1993, Microsoft ported SQL Server to Windows NT, adding the ability to use native management and networking features built into Windows. Microsoft hails the movement of SQL Server to Windows NT from the legacy IBM OS/2 environment as a considerable success. In 1994, Microsoft and Sybase parted ways.
TDS continues to play a critical role in Microsoft SQL Server today. Structured Query Language (SQL) queries and responses travel via TDS to and from Microsoft SQL servers. Additionally, Microsoft provides the ability to call user-defined functions, also referred to as remote procedure calls, using TDS. As with SMB, Microsoft has a great reference page for TDS and how the protocol works here. Simply put, TDS is the database language of systems that rely on Microsoft SQL Server.
Where is TDS Used on Industrial Networks?
When we see TDS on industrial networks, generally the traffic is related to the operational historian. GE’s Proficy/iFix historian, Honeywell’s Uniformance Process History Database, and Schneider Electric’s Wonderware historian all rely on Microsoft SQL Server to some degree on the back end. It is worth pointing out that not everyone in the industry relies on Microsoft SQL server or other popular database back ends. OSIsoft has its proprietary database format, PI Archive, that does have an interface for communicating within the SQL standard.
We often see TDS traffic on industrial networks between hosts that interact with the historian. Homegrown scripts we have seen in production environments that scrape metrics from historians also commonly use SQL APIs to query data. We have also widely seen TDS traffic between hosts that are expected to, and regularly do, interact with the historian. Some historian manufacturers have written their own communications protocols for messages, but others still rely on TDS for SQL Server features.
Hunting in TDS
To dive into TDS traffic, we used a Python module named pyshark. Pyshark is a wrapper for tshark and allows Python access to live network traffic or packet capture files. We imported pyshark and used the FileCapture method to read the packet capture file. We provided a display filter into the FileCapture method to filter the traffic to TDS only. If you are sniffing live network traffic, you can also use a Berkeley Capture Filter (bpf) in the file capture method.
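A minimal sketch of that setup; the capture file name is hypothetical:

```python
import pyshark

# Read only TDS packets from a saved capture. When sniffing a live interface,
# pyshark.LiveCapture with a bpf_filter can be used instead.
cap = pyshark.FileCapture('historian_traffic.pcap', display_filter='tds')

for pkt in cap:
    print(pkt.ip.src, '->', pkt.ip.dst, pkt.highest_layer)

cap.close()
```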
We then read the TDS data from the packet capture and stored the data in a variable named cap_data. We only looked at the observed SQL queries and remote procedure calls. We stored the codes for the different TDS traffic types and a few of the known remote procedures for easy conversion.
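A sketch of that extraction step. TDS packet type 1 is a SQL batch and type 3 is an RPC per the TDS specification, but the exact tds.* field names exposed by tshark vary by version, so the query and procedure fields below are assumptions to adapt to your dissector:

```python
import pandas as pd
import pyshark

TDS_TYPES = {'1': 'SQL Batch', '3': 'RPC'}  # TDS packet type codes

records = []
for pkt in pyshark.FileCapture('historian_traffic.pcap', display_filter='tds'):
    tds = pkt.tds
    records.append({
        'time': pkt.sniff_time,
        'src': pkt.ip.src,
        'dst': pkt.ip.dst,
        'type': TDS_TYPES.get(getattr(tds, 'type', None), 'Other'),
        'query': getattr(tds, 'query', None),        # field name assumed
        'procedure': getattr(tds, 'rpc_name', None), # field name assumed
    })

cap_data = pd.DataFrame(records)
cap_data.head()
```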
We then used a Pandas data frame to display the initial data. This table shows both RPC and query data. We split out the two different types of records and dove deeper into the dataset.
We then looked at just the SQL queries made. The output below displays all observed SQL queries and counts the number of times we see each.
As shown above, we observed two queries. The packet capture we used in this example came from Wireshark’s sample packet capture page. In industrial environments, we typically only see select statements. If you are hunting through TDS data on an industrial network, you should just see SQL statements related to the expected behavior of the examined device. If DROP TABLE, or CREATE USER statements appear, you should look deeper, as those SQL commands are likely malicious. A DROP TABLE command could be used to clear all the database out of the historian if an attacker wanted to wipe data as part of the action stage of the ICS kill chain, while a CREATE USER statement might be used by an attacker to add another user. There are many other techniques an attacker could use against SQL server. Knowing your environment and what hosts should be performing which SQL statement behaviors is key to identifying attacks against a historian.
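A sketch of how that review might look against the data frame built above, with a simple keyword check for statements a historian client should never issue:

```python
# Count each distinct SQL statement observed on the wire.
queries = cap_data[cap_data['type'] == 'SQL Batch']
print(queries['query'].value_counts())

# Flag statements that fall outside expected historian behavior.
suspicious = queries[queries['query'].str.contains(
    'DROP TABLE|CREATE USER|ALTER LOGIN', case=False, na=False)]
print(suspicious[['time', 'src', 'dst', 'query']])
```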
We then pivoted to remote procedure calls. As previously mentioned, TDS allows users to call functions via RPC. One of the classic remote code exploitation vulnerabilities from years ago, known as MS 03-026 or RPC DCOM, exploited a buffer overflow in an RPC function. This vulnerability did not directly relate to TDS, but it is worth mentioning that functions that can be called remotely via RPC have been exploited in the past to gain access to Windows machines.
Next, we examined how to look at the functions being called via TDS. This allows you to gain visibility into what is typical for your network and also check for function calls following any future vulnerability announcements.
We first looked at the nine unique procedure names our dataset contains. Knowing what is malicious or benign will require you to know about your environment and what might be talking to the historian or SQL server. If you use baselining in your environment, called functions are a good candidate to add to your baseline.
Next, we looked at stored procedures. These procedures are a part of the TDS specification. As with the named procedures, you should consider the behavioral context of the devices querying the SQL server.
If you do find something suspicious, you can drill down into the data and display information that might assist your investigation. The first cell displays a call to the p_getBogusData function. We can see the time the function was called, the source host, and the destination host that received the function call. The second box shows calls to either p_SetBogusSample or sp_executesql. You can sort and filter any data using the features built into Pandas.
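A sketch of that drill-down using the same data frame:

```python
# Pull every call to a procedure of interest and see who made it, and when.
rpc = cap_data[cap_data['type'] == 'RPC']
bogus = rpc[rpc['procedure'] == 'p_getBogusData']
print(bogus[['time', 'src', 'dst', 'procedure']].sort_values('time'))
```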
Wrapping Everything Up
Microsoft SQL Server is a workhorse protocol for many data historians on industrial networks. TDS is the language Microsoft SQL Server speaks. Understanding the queries and remote procedure calls on your network and monitoring for malicious queries and procedure calls are two activities you should perform at a minimum for TDS traffic present in your network environment. We used Python, Jupyter Notebooks, and Pandas for our analysis, but you can use these techniques with your hunting platform of choice.
- Microsoft TDS Protocol Reference: https://msdn.microsoft.com/en-us/library/dd304523.aspx
- Background on MS SQL Server: https://news.microsoft.com/…sql-server-climbs-to-new-heights/
- MS 03-026 Rapid 7 Reference Page: https://www.rapid7.com/…/ms03_026_dcom
Ready to put your insights into action?
Take the next steps and contact our team today. | <urn:uuid:eeb35a3c-46ec-4b9e-8662-0235cbfa2e80> | CC-MAIN-2022-40 | https://www.dragos.com/blog/industry-news/threat-hunting-with-python-part-4-examining-microsoft-sql-based-historian-traffic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00778.warc.gz | en | 0.912239 | 1,619 | 2.765625 | 3 |
Data lakes and data warehouses are two of the most popular forms of data storage and processing platforms, both of which can be employed to improve a business’s use of information.
However, these tools are designed to accomplish different tasks, so their functions are not exactly the same. We’ll go over those differences here, so you have a clear idea of what each one entails and choose which would suit your business needs.
What is a data lake?
A data lake is a storage repository that holds vast raw data in its native format until it is needed. It uses a flat architecture to store data, which makes it easier and faster to query data.
Data lakes are usually used for storing big datasets. They’re ideal for large files and great at integrating diverse datasets from different sources because they have no schema or structure to bind them together.
How does a data lake work?
A data lake is a central repository where all types of data can be stored in their native format. Any application or analysis can then access the data without the need for transformation.
The data in a data lake can be from multiple sources and structured, semi-structured, or unstructured. This makes data lakes very flexible, as they can accommodate any data. In addition, data lakes are scalable, so they can grow as a company’s needs change. And because data lakes store files in their original formats, there’s no need to worry about conversions when accessing that information.
Moreover, most companies using a data lake have found they can use more sophisticated tools and processing techniques on their data than they could with traditional databases. A data lake makes accessing enterprise information easier by enabling the storage of less frequently accessed information close to where it will be accessed. It also eliminates the need to perform additional steps to prepare the data before analyzing it. This adds up to much faster query response times and better analytical performance.
What is a data warehouse?
A data warehouse is designed to store structured data that has been processed, cleansed, integrated, and transformed into a consistent format that supports historical reporting and analysis. It is a database used for reporting and data analysis and acts as a central repository of integrated data from one or more disparate sources that can be accessed by multiple users.
A data warehouse typically contains historical data that can be used to generate reports and analyze trends over time and is usually built with large amounts of data taken from various sources. The goal is to give decision-makers an at-a-glance view of the company’s overall performance.
How does a data warehouse work?
A data warehouse is a system that stores and analyzes data from multiple sources. It helps organizations make better decisions by providing a centralized view of their data. Data warehouses are typically used for reporting, analysis, predictive modeling, and machine learning.
To build a data warehouse, data must first be extracted and transformed from an organization’s various sources. Then, the data must be loaded into the database in a structured format. Finally, an ETL tool (extract, transform, load) will be needed to put all the pieces together and prepare them for use in analytics tools. Once it’s ready, a software program runs reports or analyses on this data.
Data warehouses may also include dashboards, which are interactive displays with graphical representations of information collected over time. These displays give people working in the company real-time insights into business operations, so they can take action quickly when necessary.
Also read: Top Big Data Storage Products
Differences between data lake and data warehouse
When storing big data, data lakes and data warehouses serve different purposes. Data warehouses store processed data from traditional transactional databases in structured tables with defined columns. Comparatively, a data lake is used for big data analytics; it stores raw, unstructured data that can be analyzed later for insights.
| Parameters | Data lake | Data warehouse |
| --- | --- | --- |
| Data type | Unstructured data | Processed data |
| Storage | Data is stored in its raw form regardless of the source | Data is analyzed and transformed |
| Purpose | Big data analytics | Structured data analysis |
| Target user group | Data scientists | Business or data analysts |
| Size | Stores all data | Stores only structured data |
Data type: Unstructured data vs. processed data
The main difference between the two is that in a data lake, the data is not processed before it is stored, while in a data warehouse it is. A data lake is a place to store all structured and unstructured data, and a data warehouse is a place to store only structured data. This means that a data lake can be used for big data analytics and machine learning, while a data warehouse can only be used for more limited data analysis and reporting.
Storage: Stored raw vs. clean and transformed
The data storage method is another important difference between a data lake and a data warehouse. A data lake stores raw information to make it easier to search through or analyze. On the other hand, a data warehouse stores clean, processed information, making it easier to find what is needed and make changes as necessary. Some companies use a hybrid approach, in which they have a data lake and an analytical database that complement each other.
Purpose: Undetermined vs. determined
The purposes of a data lake’s data are undetermined. Businesses can use the data for any purpose, whereas data warehouse data is already determined and in use. Hence why data lakes have more flexible data structures compared to data warehouses.
Where data lakes are flexible, data warehouses have more structured data. In a warehouse, data is pre-structured to fit a specific purpose. The nature of these structures depends on business operations. Moreover, a warehouse may contain structured data from an existing application, such as an enterprise resource planning (ERP) system, or it may be structured by hand based on user needs.
Database schema: Schema-on-read vs schema-on-write
A data warehouse follows a schema-on-write approach, whereas a data lake follows a schema-on-read approach. In the schema-on-write model, tables are created ahead of time to store data. If how the table is organized has to be changed or if columns need to be added later on, it’s difficult because all of the queries using that table will need to be updated.
Schema changes in that model are expensive and take a lot of time to complete. The schema-on-read model of a data lake, on the other hand, allows a database to store any information in any column it wants. New data types can be added as new columns, and existing columns can be changed at any time without affecting the running system. However, if specific rows need to be found quickly, this can be more difficult than in schema-on-write systems.
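A small sketch of the contrast, with hypothetical file paths and table names:

```python
import glob
import json
import pandas as pd

# Schema-on-read: raw JSON events land in the lake as-is; structure is imposed
# only at query time, so a new field simply shows up as a new column.
events = [json.loads(line)
          for path in glob.glob('landing_zone/events/*.json')
          for line in open(path)]
df = pd.json_normalize(events)
print(df.groupby('event_type').size())

# Schema-on-write: the warehouse table is declared up front, rows must conform
# to it before loading, and changing it later is a migration.
CREATE_TABLE = """
CREATE TABLE sales_fact (
    order_id    BIGINT,
    customer_id BIGINT,
    amount      DECIMAL(10,2),
    order_date  DATE
);
"""
```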
Users: Data scientist vs. business or data analysts
A data warehouse is designed to answer specific business questions, whereas a data lake is designed to be a storage repository for all of an organization’s data with no particular purpose. In a data warehouse, business users or analysts can interact with the data in a way that helps them find the answers they need to gain valuable insight into their operation.
On the other hand, there are no restrictions on how information can be used in a data lake because it is not intended to serve one single use case. Users must take responsibility for curating the data themselves before any analysis takes place and ensuring it’s of good quality before storing it in this format.
Size: All data up to petabytes of space vs. only structured data
The size difference is due to the data warehouse storing only structured data instead of all data. The two types of storage differ in many ways, but this is among the most prevalent. They also differ in purpose: data lakes store all data, while warehouses store only structured data.
Awareness of what type of storage is needed can help determine if a company should start with a data lake or a warehouse. A company may start with an enterprise-wide information hub for raw data and then use a more focused solution for datasets that have undergone additional processing steps.
Data lake vs. data warehouse: Which is right for me?
A data lake is a centralized repository that allows companies to store all of its structured and unstructured data at any scale, whereas a data warehouse is a relational database designed for query and analysis.
Determining which is the most suitable will depend on a company's needs. If large amounts of data need to be stored quickly, then a data lake is the way to go. However, a data warehouse is more appropriate if there is a need for analytics or insights into specific application data.
A successful strategy will likely involve implementing both models. A data lake can be used for storing big volumes of unstructured and high-volume data while a data warehouse can be used to analyze specific structured data. | <urn:uuid:59623631-3e15-4dce-9571-df799bb6dedc> | CC-MAIN-2022-40 | https://www.itbusinessedge.com/business-intelligence/data-lake-vs-data-warehouse/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00778.warc.gz | en | 0.915742 | 1,861 | 2.71875 | 3 |
At NVIDIA’s GPU Technology Conference (GTC) 2016 in San Jose, California the company announced products based on their latest GPU architecture, code-named Pascal. This conference is traditionally attended by some of the leading researchers in GPU-accelerated compute technologies and over the past few years has become increasingly focused on Deep Neural Networks (DNN). DNNs are the latest key to artificial intelligence (AI) and cognitive computing. Incredible strides have been made over the last three years in AI thanks to Graphics Processing Units (GPUs). Companies like Google, Microsoft, IBM, Toyota, Baidu and others are looking at deep neural networks to help solve many of their complex analytical and data-rich problems. NVIDIA is helping these companies to harness the power of their GPUs to accelerate the deep learning these systems need to do. Thanks to NVIDIA’s early involvement in deep neural networks research and their latest GPU hardware, the company is in the driver’s seat right now when it comes to delivering silicon to accelerate deep neural networks.
Photo credit: Patrick Moorhead
The GP100 is for Deep Neural Networks
The newly announced GPU, named GP100 is the first of the Pascal family of GPUs from NVIDIA running on the 16nm FinFET process from TSMC and uses the company’s latest GPU architecture. The GP100 is designed first and foremost for the datacenter in an NVIDIA Tesla Compute card format which is for DNN, cloud, enterprise and other HPC purposes. I expect the GP100 will eventually find its way into the consumer market as a gaming card with many changes, but its primary purpose is to serve as an enterprise acceleration processor. Because of Pascal’s performance, power and software capabilities it will really start to challenge CPU-driven DNN. It also utilizes NVIDIA’s latest CUDA 8 programming language which has become the de-facto standard in GPU computing since it started nearly a decade ago.
Significant compute cluster performance increase via brute force
As has been made quite clear with IBM, Google and Baidu’s adoption of GPUs for DNN workloads, GPUs are currently a better choice versus FPGAs in training. FPGAs may still have a role, but they are likely more useful in production. The GP100 GPU itself is a 15.3 billion transistor chip built on TSMC’s 16nm FinFET process, NVIDIA is able to cram these 15.3 billion transistors on a 610mm^2 chip which is actually larger than the previous generation even though the previous generation was a 28nm chip. Pascal is effectively a full node shrink from the previous generation Maxwell which fit only 8 billion transistors into 601mm^2 effectively the same amount of space. Pascal also increases the amount of FP32 CUDA shader cores from 3072 to 3584 which is a pretty sizable increase and helps deliver 10 TFLOPS of performance.
The really important increase for HPC and the datacenter comes in the FP64 CUDA cores, which increase from 96 in Maxwell to 1792 in GP100. This raises the double precision capability of the Pascal GP100 from 213 GFLOPS to 5.3 TFLOPS, an absolutely massive increase. Maxwell itself was not favored by those who needed double precision, so many stuck with Kepler-generation Tesla cards. That will change with the GP100 and the Pascal architecture.
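These headline numbers follow from simple arithmetic: peak throughput is core count times clock speed times two floating-point operations per cycle (one fused multiply-add). A quick check, assuming a roughly 1.4 GHz boost clock:

```python
def peak_tflops(cores, clock_ghz, flops_per_cycle=2):
    """Peak FLOPS = cores x clock x FLOPs per core per cycle (2 for fused multiply-add)."""
    return cores * clock_ghz * flops_per_cycle / 1000

print(peak_tflops(3584, 1.4))  # ~10 TFLOPS single precision (FP32)
print(peak_tflops(1792, 1.4))  # ~5 TFLOPS double precision (FP64)
```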
5 “miracles” to productize the NVIDIA P100 (credit: Patrick Moorhead)
Memory bandwidth, power enhancements via HBM2
The GP100 also uses High Bandwidth Memory 2 (HBM2), a memory technology pioneered in GPUs by AMD with their Fiji family of graphics cards, which used first-generation HBM. HBM2 brings additional bandwidth and capacity increases so that cards based on the GP100 can have 16GB of memory, compared to Fiji which can only have 4GB per GPU. This new memory also gets stacked on the same package as the GPU, which saves significant power and space, allowing GP100-based graphics cards to be significantly smaller and more power efficient. The P100 Tesla card with the GP100 GPU inside has 16GB of HBM2 operating at a mind-boggling 720 GB/s, effectively removing memory as the bottleneck in this GPU while also natively supporting ECC.
Scalability improvements via NVLink
NVIDIA didn’t stop with just a new architecture, 16nm FinFET and HBM2, they also introduced NVLink into their first GPU. NVLink is designed to help NVIDIA GPUs interface with one another at a much higher bandwidth and lower latency than PCIe 3 and to connect directly into IBM Power8+ and newer CPUs which also feature NVLink.
HPC enterprise datacenter leaders on-board
All of the HPC OEM leaders like Dell, IBM, HPE and Cray are all on board to implement the P100 Tesla card with the GP100 inside. There will be no shortage of demand for these cards inside the enterprise, it will be more important to see if they can successfully fill that demand.
OEM partners for P100 (Credit, Patrick Moorhead)
It’s very important to understand that this isn’t about the beginning of a kick the tires stage. We are beyond that and into deployment.
DGX-1 is the “rabbit” with supercomputer performance
It will take time for the OEMs to get their systems ready. To accelerate the speed of Pascal’s implementation in universities, enterprises, and cloud service providers, NVIDIA also announced a P100-based server appliance called the DGX-1.
The DGX-1 is a fully integrated solution that includes two Xeon processors and 7TB of SSD space as well as eight P100 Tesla cards in order to deliver the most performance per watt. This appliance is not intended to replace OEM solutions, but rather to allow people that want to start working on their DNNs using Pascal to do so sooner rather than later.
There’s a very good chance that many of NVIDIA’s customers for Pascal may end up designing their own solutions and NVIDIA is simply enabling early adopters to buy a DGX-1 to get ready for when OEM solutions are available at large scale. NVIDIA is selling one DGX-1 for $129,000 and will be delivering them this summer.
NVIDIA claims that a single DGX-1 appliance will replace 250 CPU-based nodes that would normally be used for DNN. In addition to replacing 250 CPU nodes, the company claims Pascal is 12X faster than the previous generation GPU in DNN. The DGX-1 delivers all of this performance in a compact 3U, 3200-watt server. This could amount to huge savings in space as well as overall cost for anyone looking to do serious DNN training.
NVIDIA may have stumbled upon GPU-accelerated DNN by sheer luck, by accident, or as the result of their close relationship with and investment in the research community. Ultimately, the answer doesn't matter, because NVIDIA is clearly the leader in this space right now and it is proving to be a major driver of their technology focus. NVIDIA needs flawless execution on the GP100 for DNN and to deliver these GPUs and their software on time, if not early. CPUs currently own this space, and GPUs are extremely popular now, but FPGAs want a piece of this, too. NVIDIA is in the driver's seat right now but they cannot rest on their laurels and allow others to catch up to them. NVIDIA has a pretty well spread delivery roadmap leading all the way up to Q1 2017, so the major design wins they secure between now and then will be critical.
As robots in the world around us become more sophisticated, relationships between people and autonomous systems are growing in importance. Now a new £3m UK government-backed research project is investigating just how important that is – and how to help build trust between humans and machines.
The Covid-19 pandemic has accelerated the deployment of robots, with a record number now operating in industrial settings around the world. This week the US Airforce announced semi-autonomous ‘robot dogs’ are carrying out security patrols at Tyndall Air Force Base in Florida, and in Japan a robot is ensuring customers in a shop wear facemasks and observe social distancing.
With these machines now operating in relatively uncontrolled environments, rather than sitting static in a factory or a lab, ensuring humans feel comfortable with them will be key to further adoption.
“The reason we need to talk about trust now is because we’re basing these new robots on artificial intelligence and deep neural networks,” says Elisa Roth a doctoral researcher in the cyber-human lab at the University of Cambridge engineering department.
“Their algorithms have layers which we can’t explain, so the machine might do something and we won’t know why. So you have to be able to trust them.
“The other thing which makes robots special is the physical component, which means they can actually harm people, unlike other AI decision-making systems that just run on a computer.”
Building trustworthy autonomous systems
The new research project, the Trustworthy Autonomous Systems (TAS) programme will take a close look at how humans and autonomous systems interact, and explore solutions to manage trust in autonomous systems.
Funded by UK Research and Innovation, it will cover scenarios that require interaction with humans such as self-driving cars, autonomous wheelchairs or ‘cobots’ in the workforce.
“Robots tend to be built for specific tasks, and if people over-trust them and try to get them to do things outside their scope, trust and confidence falls away, and they don’t get used effectively,” says Professor Helen Hastie, from Heriot-Watt University and the Edinburgh Centre of Robotics, who is leading the TAS trust node.
“If people don’t trust robots enough for whatever reason, then they won’t get adopted and be able to do the key tasks they could perform such as keeping humans out of harm’s way.”
Prof Hastie and her team, along with colleagues from Imperial College London and the University of Manchester, plan to build on theories about how humans trust and apply those to the field of robotics.
She says: “As robots are able to do more it is important not to distrust them in new scenarios, new situations and with new users. That is why transparency is really important so that users understand clearly what the robots can and can’t do.”
Cobots and automated warehouses
The global automation market, including robots and other autonomous systems such as drones and artificial intelligence, was worth $186bn in 2019, and is expected to grow to $214.3bn in 2021.
Companies contributing to this growth include Tharsus, an OEM developing robotic systems, which the company refers to as ‘strategic machines’, for clients including Ocado, with which it worked on the online grocery company’s automated warehouse, The Hive, which is staffed by a swarm of robots.
Brian Palmer, Tharsus CEO, is expecting demand to accelerate in the near future, and says: “We’re on the cusp at the moment, and in three to five years’ time the landscape will look very different.
“Things like cobots haven’t really landed yet. There are a few operating but they’re only working with small payloads in order to keep them safe.”
For the uninitiated, a cobot is a machine that shares a working environment with humans. At the moment these tend to take the form of staff working alongside the robot on separate tasks, but could in future involve closer collaboration.
Palmer continues: “If you go into a car factory, the robots are still where they were when I started 30 years ago – in cages in the body shop or the paint shop.
“With AI we’re going to see robots that can work alongside people and react when [workers] do something unexpected. That situational awareness will be important [for building trust].”
The Tharsus CEO believes staff will be more open to trusting robots which can contribute to new processes within organisations, rather than replicating existing tasks.
“Will people accept robots? Possibly not if all that they’re doing is what a person previously did for them, because the robot probably won’t do it as well,” he says.
“But as an enabler of a new service proposition, providing something that couldn’t be done by a person in a system of work that makes sense, I think we can have huge success.
“Ocado is a great example of a company that has built an entire new system of work, from the app where people place their orders, right through to the device used by drivers for routing and deliveries. The robots are integral to that.”
Indeed, Walmart recently announced it was scrapping plans for robot shelf scanners to keep track of inventory after a six-month trial revealed humans could do the job more effectively.
Driving up trust in autonomous vehicles
Autonomous vehicles are likely to present a new service proposition for mobility, but will rely on a high degree of trust from passengers to enable the AI to take the wheel.
RoboK is a UK startup developing 3D sensing and perception software for advanced vehicles, as well as machines in industrial settings.
CEO Hao Zheng says safety and comfort concerns will need to be considered to build trust between humans and driverless cars.
“The car will need to plan and navigate routes whilst avoiding collisions consistently in all scenarios and also provide comfort for passengers,” she says.
“Will a passenger find a robot-taxi trustworthy when constant sudden and sharp brakes are applied on a ride? Potential collisions would have been avoided but the uncomfortable experience will undoubtedly put trust at risk.”
RoboK says its software is optimised for low-power environments and can deliver highly accurate localisation, navigation and perception services. Founded in 2017, it recently announced a partnership with Siemens Digital Industries Software which will see it create a virtual environment for testing new autonomous driving systems.
Beyond the technical side of things, Zheng says the way machines communicate with their users will be vital to building trust.
She says: “A vehicle must be able to choose the right way to communicate and interact with humans. Is it via audio warning, a lane departure warning or blindspot detection, or an evasive manoeuvre such as an automatic lane change?”
Trust in robots: how does it affect digital transformation?
While robot dogs and self-driving cars are not the reality in every business, trust in autonomous systems can be an issue to consider for any company undergoing rapid digital transformation.
Roth, an industrial engineer by background and a former engineering consultant, is researching ways human abilities can be augmented with technology to shape the future of work.
She has worked with companies from a wide range of industries, and says involving staff in the implementation of new systems is a useful way to build trust, adding: “Robots need to be integrated with the other things people use, such as manufacturing execution systems or ERP systems. If they’re not, and it’s a pain for them to use and there are no intersections, people will say ‘what’s the point’?”
While hurdles remain for those developing the next generation of autonomous systems, Tharsus CEO Palmer believes humans will adapt and learn to trust the robots around us.
“A lot of companies believe they need to automate and that they can automate,” he says.
“If you look at these market behaviours, businesses are clearly betting on the fact we will come to trust robots. And I think it’s like everything else; people adapt and accommodate as things change.” | <urn:uuid:88662518-dc36-42e9-9805-5a7e3fd70b31> | CC-MAIN-2022-40 | https://techmonitor.ai/technology/emerging-technology/trust-in-robots-ocado-tharsus | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00778.warc.gz | en | 0.953742 | 1,739 | 3.15625 | 3 |
Server load is a dynamic quantity, and it is almost impossible to predict the performance accurately. Load balancing is necessary for distributing workloads evenly among servers or other computing resources to optimize network efficiency, reliability, and bandwidth. A physical or virtual appliance that determines which server in the pool can best meet the request performs load balancing. If one of the servers fails, the load balancer will redirect the workload to a backup server.
Load balancing applies to layers 4-7 in the 7-layer OSI model.
- L4. Directing traffic based on network data and transport layer protocols such as IP address and TCP port.
- L7. Adds content switching to load balancing, allowing routing decisions to be made based on characteristics such as HTTP header, unified resource identifier, SSL session identifier, and HTML form data.
- GSLB. Global Server Load Balancing extends L4 and L7 capabilities to servers at different sites.
How does load balancing work?
Load balancers sit between the servers that process requests and the Internet. The load balancer receives each request and routes it to an available, running server in the pool. When load is high, the load balancer dynamically adds servers; when demand is low, it dynamically removes them.
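A toy sketch of that routing decision; real load balancers add health checks, session persistence and richer algorithms (least connections, weighted, etc.), and the server addresses here are made up:

```python
import itertools

class RoundRobinBalancer:
    """Cycle requests across the pool, skipping servers marked as unhealthy."""

    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers left in the pool")

lb = RoundRobinBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
lb.mark_down("10.0.0.12")                      # failed health check
print([lb.next_server() for _ in range(4)])    # traffic goes only to .11 and .13
```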
Load balancing solutions are divided into several types:
Software load balancers – run on standard hardware (desktops, PCs) and standard operating systems.
Hardware load balancers – these are specialized units built around application-specific integrated circuits (ASICs) tailored for this use. ASICs provide high-speed forwarding of network traffic and are often used for load balancing at the transport layer because hardware load balancing is faster than a software solution.
Virtual load balancers – virtualized application delivery controller software that distributes a load of network traffic to internal servers. They are used in the case of constant traffic spikes and demands for high bandwidth.
A Cloud load balancer is a load balancer delivered in the cloud. It distributes workloads among multiple computing resources. Load balancing in the cloud reduces the costs associated with managing systems and increases the availability of resources.
Why load balancing is important to cloud computing
Load balancers are especially useful in cloud environments where high service availability and response times are critical for certain business processes.
Load balancing also plays a key role in cloud scalability, which involves running multiple virtual servers and running multiple instances of applications. The purpose of a load balancer is to distribute traffic between these new instances.
With the balancer's ability to detect unavailable servers and redirect traffic, the cloud infrastructure can cover multiple geographic regions.
Cloud4U uses VMware NSX Edge Load Balancer. It is a solution that provides routing, Firewall, NAT, DHCP, Site to Site VPN, SSL VPN-Plus, Load Balancing, High Availability, Syslog functions for a virtual data center. It is implemented as a virtual machine connected to the virtual data center networks and external networks (Internet).
Load Balancer is available to all Cloud4U users free of charge in the EdgeGateway Compact tariff. | <urn:uuid:615daec1-82ab-4573-abb7-eb5ddd2f9ad2> | CC-MAIN-2022-40 | https://www.cloud4u.com/blog/why-is-load-balancing-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00778.warc.gz | en | 0.885424 | 643 | 3.625 | 4 |
Design Thinking is an innovative approach to solve complex problems. Almost all major innovative companies apply the process of Design Thinking while developing new products and services, such as Infosys, Nordstrom, Toyota, Nike, Starbucks, PepsiCo, IBM, Intuit, etc. It is thus a part of multi-disciplinary collaborations to produce better solutions.
The term Design Thinking was coined by IDEO. Its founder Tim Brown once quoted that “Design Thinking is a human-centric approach to innovation.” As this evidently suggests, Design Thinking is the process of developing products and services revolving around the needs of the stakeholders involved. This, in turn, helps businesses and companies accomplish creative yet realistic solutions that are also desirable to the users, economically viable and technologically feasible.
The Three Lenses of Innovation: Desirability, Viability and Feasibility
Desirability focuses on understanding what the user requires and how the product/ service can incorporate the requirements. The main question to be addressed is “how well can we as a business solve our users’ problem/s?”
To test a product’s desirability, it is important to answer the following questions:
- Does the product/service solve the user’s requirements or pain points?
- Will the product or service be useful to the consumer?
- How can I help the user with their problem?
The entire scope of viability is structured on creating robust business models for the product. If the business is not able to afford the solution, it will not sustain in the market.
A product’s economic viability can be evaluated by answering the following questions:
- Can the business afford to implement the solution?
- Will the solution contribute to the profit of the organization?
- Is the solution long-lasting?
A business needs to understand its core technical strengths and weaknesses. A combination of an agile organization, culture and technology is more likely to produce positive results. The business should also be able to determine whether the product will be able to satisfy its specifications, including its technical functions and features.
Technological feasibility of the solution is also quite crucial. So here are some questions to help you test the same:
- Can the solution be developed into functional products/ services?
- Can the solution be built with the business’s core strengths?
- Can the business afford to support the solution with its existing operational functions?
What is the Design Thinking Process?
Broadly, Design Thinking is a five step process that innovative companies follow to create better solutions methodically. Here are the five major steps involved in the process:
- Empathize with customers
- Define problems, needs and behaviours
- Ideate solutions and challenge hypotheses
- Prototype your solution
- Test in the real world with users and iterate
Each step of the process has multiple sub-methods that help you reach the innovation sweet spot of your business, which is the balance achieved in desirability, economic viability, and technological feasibility. These steps and methods also help others who aren’t trained designers, to use creative methods to address a vast range of challenges.
Explaining Design Thinking With the Help of an Example
There is a growing need to keep senior citizens adapted to the evolving technology and company XYZ aims to do just that. Here’s how XYZ can apply Design Thinking to achieve the same.
Company XYZ's primary goal is to understand their users and their wants and needs. The first step of the process is to empathize with the target audience and understand their problems through primary research (interviews, observations, etc.) and secondary research (trend analysis, literature review, etc.). A few questions that XYZ can address at this point are:
- Why is it hard for the target audience to adapt to the latest technology?
- What technology would better suit their needs? What technology would senior citizens prefer?
Once XYZ has gathered data regarding the problems and needs of senior citizens, they go ahead and break down the points to define the problem, their requirements and the scope. In this step they'll also assess the technical and economic requirements and restrictions. They address questions like:
- What does the target audience need?
- What are the challenges?
- What problems do they face on using technology?
- What is the problem statement that will prove to be a guiding star for innovation?
Once the boundaries are set, XYZ ideates and develops solutions, ideas, and potential answers to the defined problems mentioned in steps 1 and 2. It is at this point that teams brainstorm, listen to different perspectives, draw inspiration, and come up with a wide range of solutions that range from wild to pragmatic.
Out of these, the most realistic ideas are filtered based on XYZ's technical and economic limitations and proceed to prototyping. This step in the process involves detailing the solution, modeling/rendering a product, and rapidly prototyping with materials to simulate a similar experience.
XYZ goes back and forth between testing and prototyping. The prototype is tested in a real environment with actual senior citizens. Constant iteration of the solution can produce the most desired, viable, and feasible product.
Why Does Design Thinking Work?
Design Thinking is a holistic approach that involves a team of innovators, engineers, designers, and management personnel. Although it is vast, there is a beauty of structure throughout the processes and methods. It also addresses biases and subjective thoughts that can hamper innovation.
With constant iteration and collaboration, not just within internal teams but also with external stakeholders like customers, this process allows you to create a robust and intuitive solution.
Design Thinking is still relevant among organizations across industry verticals. It changes the way one tackles problems. The process allows organizations to explore new alternatives and create options that may have been overlooked otherwise.
Every successful business depends heavily on customers or its users. Major tech companies like Infosys, Intuit, IBM, Fidelity, and others have adapted the method to fulfil their requirements and formed their own Design Thinking process. | <urn:uuid:7805e746-a219-43a5-8153-e435b0e6fecf> | CC-MAIN-2022-40 | https://www.cloudsek.com/design-thinking-through-the-lenses-of-innovation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00178.warc.gz | en | 0.9401 | 1,270 | 2.734375 | 3 |
Aviation may seem like an unlikely leader on the journey to a more sustainable future, but the industry was built on a belief that anything is possible. With more organizations pledging to achieve net-zero carbon emissions by 2050, the sector has made significant strides in finding novel ways to reduce its environmental impact. And yet, the challenge facing aviation today is a steep one. The number of global air passengers is projected to nearly double over the next 20 years, leading to more fuel consumption and more carbon emissions.1
Indeed, it won't be long before the sector sees emissions on a new scale. While the aviation industry's carbon emissions are about 2% of the current annual global carbon budget, experts forecast aviation to consume between 12-27% of the remaining carbon budget through 2050 to limit global temperature rise below 1.5C above pre-industrial levels.2
Given the shorter runway to achieve net-zero by 2050, the aviation industry will need to tackle carbon emissions on multiple fronts if it wants to reach its ambitious goal. It will require a combination of industry and government initiatives, ecosystem plays, changes to the energy supply and new technologies. Also, it will require simultaneously addressing the decarbonization of both the energy supply and onboard technologies.
Timing is everything
So, what technologies hold the most promise? Decarbonizing the energy supply may take the form of sustainable aviation fuel (SAF), hydrogen or electricity stored in batteries. As for onboard technologies, our Horizon 2050 report used a holistic assessment methodology—which included extensive industry, academic and government stakeholder input and analysis—to identify key innovations.
Some technologies like composite structures and flight deck optimization software are already on their way to being adopted at scale. However, several technologies will need broader support in the mid and long term. Here's how they break down by time frame:
By defining specific time frames, we developed a focused list of 11 technologies that could be mobilized over the mid-term and long-term horizons. Direct public funding, public-private partnerships and industry incentives will be instrumental in advancing the industry on its journey toward net-zero emissions by 2050. Here’s a closer look at what’s in store.
2030-2040: Reimagining smaller planes
In the near future, the industry anticipates several innovative advancements that will completely reimagine smaller planes. Think regional and narrow-body aircraft. While some of these technologies will also be applicable to larger commercial wide-body aircraft, bringing them into the fray will require additional time and investment to ensure they've properly matured and can be scaled.
Technologies like high-pressure ratio core engines and advanced composites have been deemed the most feasible in the mid-term time frame due to their higher maturity status. Hybrid-electric propulsion was observed to be less feasible due to its technical complexity and the advances still required in high-energy-density batteries.
These technologies won't reduce carbon emissions drastically—within the range of 1%-20%—due to the smaller scale of the regional aircraft segment. This segment has a high number of small aircraft that represent a smaller percentage of the addressable market. However, they can make an impact much sooner.
2040 and beyond: Time to accelerate
Looking to 2040 and beyond, we expect to see more novel solutions enter the market.
Technologies like transonic truss-braced wings (i.e., novel wing design), open rotor engines (i.e., more efficient engine designs) and fuel cells for onboard power have the greatest feasibility in this longer-term time horizon. Hydrogen propulsion has the lowest feasibility due to the technical complexity and infrastructure investments needed to enable it.
Technologies in this segment show significant potential to reduce emissions—to the tune of 1%-46%. However, due to their longer deployment runways, it will be difficult for aircraft programs to incorporate and field them sooner. Companies should strongly consider accelerating the development of these longer-term technologies given the large emissions impact they promise. Since wide-body aircraft drive 45% of aircraft emissions, scaling these technologies further could significantly decrease overall aircraft emissions, but in a post-2050 time frame.
An ecosystem of sustainability
The potential for new, exciting technologies in the mid-term and long-term is massive. With the appropriate level of support from the private and public sectors, they could be brought to market in the anticipated time horizons and deliver promising emissions reductions by 2050.
The private and public sectors need to be aligned, however, for these technologies to succeed. Both can play a crucial role in advancing them by doing three things:
It can be difficult to imagine the aviation industry at the forefront of carbon reduction. But if we’ve learned anything from the Wright brothers, Amelia Earhart or the countless other innovators who have challenged expectations, it’s that the sky is the limit.
To learn more about our methodology, download the report Horizon 2050- A flight plan for the future of sustainable aviation. | <urn:uuid:c83e6beb-377c-463e-aba1-b54c7cc883b5> | CC-MAIN-2022-40 | https://www.accenture.com/il-en/insights/aerospace-defense/aia-report | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00178.warc.gz | en | 0.927077 | 1,023 | 3.09375 | 3 |
A human-rated centrifuge built by the Air Force Research Laboratory (AFRL) participated in a NASA space launch Saturday. The centrifuge features interchangeable cockpits and can be adapted to accommodate any customer at an accelerated pace, the U.S. Air Force said Wednesday.
AFRL also designed the centrifuge to help fast-jet aviators train in an environment with nine times the normal force of gravity. A team composed of engineers from NASA and AFRL worked to convert the centrifuge from an aircrew training platform into an astronaut testing facility.
“With the new vehicles back in the family with those G-loads, we’re basically reviving that whole G-training aspect and every astronaut will start going down this path,” said Michael Barratt, an astronaut and physician at NASA. | <urn:uuid:7d260e4b-6ed6-4c7e-abed-64f5b46a5cd8> | CC-MAIN-2022-40 | https://executivegov.com/2020/06/afrl-fields-centrifuge-in-nasa-space-launch/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00178.warc.gz | en | 0.941252 | 167 | 2.546875 | 3 |
An XML firewall is an application layer firewall that specifically defends XML-based applications against a wide variety of XML message and parser level attacks. XML firewalls are generally implemented as proxies due to the requirement that incoming and outgoing messages must be inspected for vulnerabilities before being passed to the application or client.
XML firewalls are designed to address familiar Web-based attacks that can be transported via XML, such as SQL injection and cross-site scripting (XSS). They are primarily geared toward detecting and preventing XML-specific attacks such as extremely large messages, highly nested elements, coercive parsing, recursive parsing, schema and WSDL poisoning, and routing-based attacks.
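As a rough illustration of the kind of message-level inspection such a proxy performs, the sketch below rejects oversized or deeply nested XML before it ever reaches the application. The size and depth limits, the function name, and the use of Python's standard XML parser are illustrative assumptions, not a description of any particular product.

```python
import xml.etree.ElementTree as ET
from io import BytesIO

MAX_BYTES = 1_000_000   # reject extremely large messages outright
MAX_DEPTH = 20          # reject highly nested element trees

def message_is_acceptable(payload: bytes) -> bool:
    """Basic XML-firewall-style checks applied before proxying a message."""
    if len(payload) > MAX_BYTES:
        return False
    depth = 0
    try:
        # Stream-parse so the whole document never has to be held in memory as a tree
        for event, _elem in ET.iterparse(BytesIO(payload), events=("start", "end")):
            if event == "start":
                depth += 1
                if depth > MAX_DEPTH:
                    return False
            else:
                depth -= 1
    except ET.ParseError:
        return False  # malformed XML is dropped, never forwarded
    return True
```

A real deployment would layer schema validation, WSDL checks and entity-expansion limits on top of these two simple guards.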
XML firewalls improve the security of XML-based applications by preventing attacks that are likely to cause a service outage were they to be consumed by a Web application server. They remove the need for highly duplicated security-focused code within applications that can degrade performance. | <urn:uuid:c5d025c9-26d2-45f4-bde0-f7c585f55eba> | CC-MAIN-2022-40 | https://www.f5.com/services/resources/glossary/xml-firewall | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00178.warc.gz | en | 0.95632 | 188 | 2.828125 | 3 |
Quantum cryptography comes with a promise to be unhackable. While quantum computers might be decades away, the technology of quantum cryptography is much more mature. At the same time, quantum computing poses a threat to the encryption we are using now.
That is why we should start preparing now, Eleni Diamanti, a CNRS researcher at Sorbonne University in Paris, told CyberNews. Her research focuses on experimental quantum cryptography, communication complexity, and the development of photonic resources and protocols for quantum networks.
Quantum cryptography, or quantum encryption, applies the principles of quantum mechanics to encrypt communication. It uses photons to carry signals and is said to be unhackable. Sending those photons requires dedicated physical infrastructure - an optical fiber or satellite link - rather than a quantum computer, which is still decades away.
Quantum encryption comes with a promise to be unbreakable, and it will provide us with a much more secure way to communicate. As Maria Korolov and Doug Drinkwater from CSO put it, it’s no silver bullet but could improve security.
Yet it might disrupt classical encryption and may (or may not) break cryptocurrencies.
We sat down with Diamanti, whom we first met during the MIT Tech Review Cyber Secure conference, to discuss this. We did not aim to dive into technical details about quantum technologies. Instead, we discussed the progress of quantum encryption and how far we are from having the first quantum computer.
I’ve just read that a University of Tokyo research team developed a communication method that does not rely on photons. While I’m not asking you to comment on this particular news, I wonder how quantum cryptography and quantum technology in general are progressing these days.
There is a big difference between the maturity of the technology in quantum computing and quantum cryptography. For me, these fields are different in the sense that, to do a very large-scale quantum computer, you need technology that is not there yet. There are several experimental platforms, from photons to superconducting qubits, to ions, or spins in semiconductors. There are all sorts of different experimental platforms for quantum computing. They all have their advantages and disadvantages and are progressing to do more and more complex computations. But, I think several milestones will have to be reached for this technology to come to an actual maturity and be useful.
On the contrary, in quantum cryptography, to show that quantum technology can be useful and offer a higher level of security, you need a lot less interaction between quantum objects than for a quantum computer. In this sense, it is more straightforward to do, which means that the maturity of systems in quantum cryptography is a lot bigger, and so there are even commercial systems, and we are progressing towards a real deployment of quantum cryptographic techniques in real life. It’s not there yet, we are not using it in our daily life, but technology is advancing really fast. So I would say that technology in quantum computing and quantum cryptography is progressing very well, but there is a big difference in terms of what needs to be done.
Why is quantum cryptography necessary?
Quantum cryptography is necessary because quantum computing is coming over. There’s an advent of quantum computers, which is a threat for current cryptographic schemes, and we need quantum cryptography to counter this threat. Although it will take some time for it to come, one needs to prepare much earlier because it takes a long time to scrutinize cryptographic techniques, to put them in place, to be ready for when the threats by the quantum computer are going to be materialized. This is the way that it should be seen. And so, although quantum cryptography is more mature, we are waiting for quantum computers to become relevant. While we are waiting for it, it is important to advance even more in quantum cryptography.
So you don’t necessarily need a quantum computer to work on quantum cryptography?
You do not need a quantum computer to do a quantum cryptography system. And while for quantum computers, there are many candidates in terms of technological platforms, for quantum cryptography, we only use photons, photonic technologies.
When are we going to have a quantum computer?
That’s a million-dollar question. What now holds a lot of promise is called noisy intermediate-scale quantum computing (NISQ) devices. These are devices that are not full-scale devices. They are noisy and sufficiently large to do useful things. There’s a lot of hope for the next few years to show that such devices can be useful. But for big, large-scale quantum computers… 10-20 years, I don’t know. It’s extremely hard to tell.
Will we need supercomputers once a quantum computer arrives?
The current trend, in both quantum computing and quantum cryptography, is to say that we don’t want to eradicate or make disappear classical technologies. Neither supercomputers nor classical cryptographic schemes should disappear. The idea would be that we can use these new quantum technologies to complement and enhance the classical technologies.
Will quantum cryptography disrupt the classical model?
That’s why we need to prepare for it now. There will be changes in cryptographic systems. Quantum cryptography is very different from classical cryptography. Classical cryptography - whenever you have something encrypted - relies on mathematical algorithms. Quantum cryptography relies on physics, so you need a physical object - an optical fiber or satellite link, or something like this - for a cryptographic technique to kick in. Basically, this means that there’s a need for some infrastructure changes, and because this takes a lot of time, we should start preparing so that we are ready when it comes.
Will quantum cryptography be able to break the cryptography that we are using now? There are fears that it will break cryptocurrencies (some, of course, are quite sure it will not).
It’s a real, not an imaginable threat. It’s a real threat to cryptographic techniques that we are using everywhere - in currencies and other transactions.
Is there a race for quantum computers as we see for supercomputers?
There’s definitely a race to build a quantum computer. You see all the big technology giants like Google or IBM, and between nations as well - between the US, China, Europe. For quantum cryptography, it is not quite the same. Because of all these cybersecurity issues that come with quantum cryptography, there’s a lot of interest in sovereignty in being able to develop a technology with a controlled supply chain.
Is quantum cryptography unhackable?
Is it even possible? Some people say that you can never claim that it’s unhackable. Quantum cryptography, in principle, in theory, is totally unhackable. It is not the case for classical cryptography. In practice, there are side channels, which means that sometimes, while you try to implement the system, you unwittingly deviate from the security proof. It means that you potentially open the door to a malevolent party, and you use some property of your system to attack it. This is called a side-channel attack. It’s very much scrutinized for quantum cryptography as it is for classical systems, as well. It’s an issue that is interesting for everyone. I’m pretty optimistic with respect to this. There’s a lot of progress. I don’t think it can widely comprise the security of quantum cryptographic systems, but we should not forget this aspect.
Is it prone to errors?
One party is sending photons - this quantum and coded information. In any physical system, there are imperfections. Some photons are not going to arrive, and there are going to be some errors, things that are happening in a channel. What a security proof of a quantum cryptographic system does is consider that all of these errors are due to a malevolent party - there are no innocent errors, they all come from the actions of an eavesdropper - and despite this, it is still possible with quantum cryptography to extract a secret key, to prove the security of your cryptographic process. These are very powerful security proofs. They consider that whatever happens, it’s due to an eavesdropper. Therefore, from the moment you can characterize the errors and measure them in your system, you can cover them with your security proof. Errors per se are not destroying the security of quantum cryptographic systems. What can do that are things that you are not aware of, that you have not taken into account in your security proof. For example, there’s a leakage in one degree of freedom of your system, and you are not aware of it.
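To make the idea concrete, here is a highly simplified sketch of the sifting and error-estimation steps of a BB84-style key exchange. The photon count, error rate and simulated channel are illustrative assumptions; the point is only that every observed error is attributed to an eavesdropper, and a key can still be distilled when the estimated error rate stays below the proof's threshold.

```python
import secrets

N = 1000           # photons Alice sends
ERROR_RATE = 0.03  # channel errors, all attributed to an eavesdropper

# Alice picks random bits and random preparation bases (0 = rectilinear, 1 = diagonal)
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

# Bob's measurement: Alice's bit (possibly flipped by noise) when bases match,
# a random outcome when they do not
bob_bits = []
for i in range(N):
    if alice_bases[i] == bob_bases[i]:
        flipped = secrets.randbelow(1000) < int(ERROR_RATE * 1000)
        bob_bits.append(alice_bits[i] ^ flipped)
    else:
        bob_bits.append(secrets.randbelow(2))

# Sifting: keep only the positions where preparation and measurement bases agree
sifted = [i for i in range(N) if alice_bases[i] == bob_bases[i]]

# Parameter estimation: sacrifice part of the sifted key to estimate the error rate (QBER)
sample = sifted[: len(sifted) // 4]
qber = sum(alice_bits[i] != bob_bits[i] for i in sample) / len(sample)

# Below the BB84 threshold (around 11%), error correction and privacy
# amplification can still distill a secret key from the remaining bits
print(f"sifted bits: {len(sifted)}, estimated QBER: {qber:.2%}")
```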
What will the first quantum computer look like?
The way you should see the very first quantum computer is when you look at these photos of the very first classical computers - they were these huge machines that were taking up the whole room. I think this is the way you have to see it. I think the first quantum computer is going to look a bit like this. It depends on the technology. The photonics people that are trying to build quantum computers are trying to make chips and they are saying it’s going to be immediately very small. The ones from Google, who are leading the race right now with superconducting qubits, you see these huge cryostats with a lot of devices so it looks more like a big room. We have to imagine it a bit like this.
| <urn:uuid:32d061e3-89ab-4de9-864e-e3667b5c9d62> | CC-MAIN-2022-40 | https://cybernews.com/editorial/will-quantum-cryptography-break-classical-encryption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00178.warc.gz | en | 0.94516 | 1,955 | 3.0625 | 3 |
It is perhaps the headache of any IT head when it comes to implementing policies for a smooth-running network and department. But while the importance of a good security system is evident, it is really the implementation part that is hard to accomplish.
For one, there is no shortage of threats that can easily make their way into a supposedly secure network, and building security awareness against them takes constant effort. Whether introduced manually or transmitted over the network, suspicious files will always find a way in, especially if you are not adamant about making sure all bases are covered as far as the security of your systems and data is concerned.
Many people fail to appreciate the value of the data they have gathered. They fail to appreciate the value of a strict IT policy mainly because all they care about is having a workstation to use and opening files (both internal and external) as they please. Put all these things together and you can imagine the problems an IT guy has to work with. So for some, basic initiatives such as passwords and certain hardware exclusions have to be put in place.
If you notice, some drives like the usual floppy drives or even USB ports are either missing or disabled. To make them work, certain permissions and passwords have to be supplied before they are enabled. Only the IT administrator would know these security measures, and, basic as they may seem, they really help a lot.
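As a hypothetical sketch of how an administrator might enforce one such restriction on Windows, the snippet below toggles the USB mass-storage driver through the registry. The key and Start values are the widely documented USBSTOR settings, but the script itself is only an illustration - the function name is my own, it needs administrator rights, and in practice group policy or endpoint-management tooling would normally handle this.

```python
import winreg

# USB mass-storage service: Start = 4 disables the driver, Start = 3 re-enables it
USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def set_usb_storage(enabled: bool) -> None:
    """Enable or disable mounting of USB storage devices on this workstation."""
    start_value = 3 if enabled else 4
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY,
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, start_value)

if __name__ == "__main__":
    set_usb_storage(False)  # lock down removable storage unless IT re-enables it
```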
This is just a basic but effective approach that IT personnel use. There are the usual network policies, but for people who want to make doubly sure, old and basic practices such as this are perhaps the best way to go. | <urn:uuid:e169bb66-5a16-4d04-a5e5-448f5a8568c7> | CC-MAIN-2022-40 | https://www.it-security-blog.com/it-security-basics/implement-a-strict-it-policy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00178.warc.gz | en | 0.976297 | 321 | 2.703125 | 3 |
Employees of federal, state and local governments; and businesses working with the government.
Agile success demands a strong and stable foundation.
To incorporate an Agile methodology or practice into your SDLC with an expectation of shredding the rigid discipline of your current method is a sure path to failure. The common misconception is that Agility means lack of order, which is not the case. Agility in software requires strong discipline. In order to successfully implement Agility, you must have a solid foundation in the practices and procedures you wish to adapt and learn how to follow those practices correctly while tying them to rigid quality goals.
This workshop will give you the foundation of knowledge and experience you need. Get the techniques, skills, and tools that enable you to build Agile discipline.
Define the principles, advantages, and disadvantages of Agile development. Get first-hand experience by organizing and participating in an Agile team. Put the concepts you learn into practice instantly in the classroom project. Understand and learn how to take advantage of the opportunities for Agile. Finally, gain a detailed understanding of - and practice - the collaboration and communication needed between customers and developers for Agile to succeed.
What You’ll Learn
- Agile principles and how to build the discipline to support those principles in your everyday practice
- History of Agile and how the collection of principles and practices came together to enable customer success
- Agile methodologies, including Scrum, Extreme Programming, AgileUP, Feature Driven Development, Lean Development, and DSDM
- Best practices from the various methodologies that will contribute to your team success
- Talk the talk: Agile terminology, roles, and forums with their context
- Walk, but not run: walk through the processes that support Agile principles to enable the delivery of great products
- Begin to map the transition of your existing or enterprise-level processes, artifacts, and forums to Agile
- The power of Agile teams through communication, collaboration, and cadence
- Pitfalls that teams will encounter in an Agile transition and how to overcome those challenges
- Lay the foundation upon which you can build a learning team and organization
Who Needs to Attend
Project managers, analysts, developers, programmers, testers, IT manager/directors, software engineers, software architects, customers
There are no prerequisites for this course.
1. Agile Overview: Why Agile?
- Agile Methods: Principles and Practices
- Agile Benefits: What You Can Expect
- Agile Teams
2. Agile Basics
- User Roles and Personas
- User Stories
- Acceptance Criteria
- Prioritization Techniques
- Relative Estimating
- Iterative Approach: Thin Slices
3. Agile Process Framework
- Transparency: Establish and Maintain
- Main Path Communication
- Creating Collaboration
- Beyond the Team
5. Agile Approach
- What to Watch for: Barriers to success
- Agile Best Practices
- Agile Tools
- Next Steps: Specific to Your Situation!
Exercise 1: Forming the Agile Team
In this exercise, you will explore the unique factors of Agile teams and recognize the key factors for successful Agile teams.
Exercise 2: Transition to an Iterative Approach
Teams will engage in a fun exercise that will highlight the benefits behind why iterations work.
Exercise 3: Building Cadence
As with any process, the process should not be a distraction. In order to achieve that desired state, cadence is needed: team members must know what to expect, repeatedly and consistently. This exercise will help reinforce the need for cadence and the power that comes with it.
Exercise 4: Determine What is Next for You!
Teams and individuals will collaborate with each other and with the instructor to determine what you can do to build upon the foundation established during the course. | <urn:uuid:79b5167b-6113-4479-a3f9-c63200edce19> | CC-MAIN-2022-40 | https://www.itdojo.com/courses-project-management/introduction-to-agile/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00379.warc.gz | en | 0.891687 | 846 | 2.578125 | 3 |