The New York Times article “Learning to See Data” discusses novel ways of gleaning valuable insights from the deluge of data by presenting it visually. One of the approaches involves working with an artist named Daniel Kohn.
“Advanced computing produces waves of abstract digital data that in many cases defy interpretation; there’s no way to discern a meaningful pattern in any intuitive way. To extract some order from this chaos, analysts need to continually reimagine the ways in which they represent their data — which is where Mr. Kohn comes in. He spent 10 years working with scientists and knows how to pose useful questions. He might ask, for instance, What if the data were turned sideways? Or upside down? Or what if you could click on a point on the plotted data and see another dimension?”
As the field of business intelligence aims to take the deluge of data and turn it into actionable information, understanding how that information can be presented in ways that enhance decision-making is important. I briefly discuss data visualization in Chapter 15, Advanced Analytics, of my book BI Guidebook: From Data Integration to Analytics. This New York Times article is a great source of inspiration and information for anyone who is interested in the clear, effective presentation of information. You can check it out here.
A guide to disaster recovery testing
Disaster recovery testing is an important element of an organization's business continuity and disaster recovery plan.
Disaster recovery (DR) testing is the process of running detailed tests against a DR plan to ensure that a business can recover all data, restore business-critical applications and continue operating in the event of a serious interruption.
In this comprehensive guide, we'll look at how different types of disasters can affect businesses and the various aspects of DR testing, including recovery time, the testing process itself, DR best practices, the resources needed to implement recovery plans and more. We'll include a step-by-step plan to help review management objectives and enable seamless recovery processes.
Defining a disaster
It can be difficult to categorize a disastrous event, but broadly speaking, disasters are either man-made or natural. These categories can be divided loosely into further sub-categories, with some events presenting as 'hybrid'.
Natural disasters

Wildfires, floods, hurricanes, mudslides, tornadoes and earthquakes are classed as natural disasters. Any disaster scenario that involves the elements can throw a business's carefully laid plans and projections into serious disarray. Such catastrophic events can break supply chains, prevent employees from getting to work, and cause damage or destruction to vital facilities or equipment. That's why disaster recovery planning should be just as high a priority for organizations as having a proactive future growth plan.
COVID-19 (and other pandemics)
We've put this in a category of its own because while a pandemic classifies as a 'natural disaster', it crosses over into the category of man-made and/or biological disasters. While COVID hasn't caused physical damage to facilities or equipment, it has certainly prevented employees from getting to work, broken supply chains and destroyed the operational structure of business environments as we'd come to know them. If there's one thing that businesses have learned from this pandemic, it's that disaster recovery planning enables business continuity.
Man-made disasters

There are many types of man-made disasters that can affect a business's operations. For example, an organization may lose a significant manager, head of department or director. Businesses can also be affected by operational disasters when contracts are broken or when a business environment becomes severely unfavorable for operating.
Technological disasters

These types of disasters also have a profound effect on business operations. For example, an event caused by malfunctioning technology, like network connection issues, data loss, server problems or security breaches. They generally involve some human error, so they could be classed as man-made, meaning that there is an 'identifiable cause'.
Disaster recovery testing is designed to help a business stay ahead of problems that could result in a loss of data in the future. According to the National Archives & Records Administration in Washington, 93 percent of companies that lose access to their data for 10 days or more due to a disaster file for bankruptcy within a year.
How disasters could affect a business
Depending on specific circumstances, here are some examples of how the above types of disasters could significantly derail business continuity. Later, we'll look at how disaster recovery planning and recovery testing can safeguard against such events, reduce recovery time in the future and help restore business continuity. Every business is different, however, and the disaster recovery plan that works for one organization may be entirely unsuitable for another.
- Natural disasters - for example, fire, flooding caused by heavy rain, or wind damage following storms. Disaster recovery testing for natural disasters involves the instigation of more specific emergency procedures, including evacuation processes.
- Theft or sabotage - theft of computer equipment, or infiltration of IT security could result in loss of data and critical files, as well as potentially holding a business to ransom. System backup on a regular basis is an important part of a DR plan.
- Power cuts - loss of power could have serious consequences, including prolonged downtime affecting the ability to work effectively. Even a short period of downtime can have a huge impact on a business's bottom line. A solid DR plan will provide backup in the event of power failure.
- IT network failure - with many organizations relying heavily on technology for their collaboration and communication needs, a network failure can disrupt important meetings and potentially result in the loss of clients or customers. Disaster recovery is an intrinsic part of every IT infrastructure.
- Loss or illness of key staff - if any of your staff is central to the running of your business, consider what would happen if they were to leave or be incapacitated by illness. A disaster recovery plan could include additional personnel training as backup.
- Outbreak of disease or infection - almost every business worldwide has recently experienced disaster recovery measures while dealing with the effects of the outbreak of an infectious disease. Disaster recovery testing in this case is ongoing, ensuring that in case of future incidents like this, a business is well prepared.
- Crises affecting the reputation of business - disaster recovery is an important consideration for wholesale and retail businesses in the event of a crisis like a product recall. A disaster like this could severely damage company reputation and potentially have a crippling effect financially.
Goals of disaster recovery testing
One of the main goals of disaster recovery testing is to find out whether a DR plan works and meets an organization's predetermined Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements. Recovery testing also provides feedback to enterprises so they can amend their DR plan should any unexpected issues arise.
IT systems are rarely static in nature, so each time an organization adds a new element or installs an upgrade to the system, those additions need to be tested again. For example, storage systems and servers may have been added or upgraded, new applications deployed and older applications updated since an organization developed its original disaster recovery plan.
With more and more organizations migrating to the cloud, cloud services are playing an ever larger role in IT infrastructure. A disaster recovery test helps to make sure a DR plan stays current in an IT world that changes constantly.
Getting started with disaster recovery testing
Disaster recovery testing, as we've already mentioned, is different for every business. However, there are some basic steps that need to be taken before the actual process of testing begins.
Step 1: Perform an audit of IT resources
Before business continuity and normality can resume after a disaster, businesses need to know what 'normal' actually is. This involves identifying all the disparate assets that exist on the business network infrastructure. By creating an inventory of all of the IT resources on the network and identifying what they contain, a business can start the process of consolidation, streamlining the backup and recovery process in the future.
Step 2: Decide what is mission critical
During the audit of assets, businesses may find that a great deal of data is actually redundant, or not necessary to keep the system running. Transferring every piece of unnecessary data in the network to a backup server could use a huge amount of processing power. Sorting redundant data can help reduce the size of a backup file, saving storage space and expense.
Step 3: Create specific roles and responsibilities for all involved in the DR plan
Every employee in an organization should have a role to play in an effective disaster recovery plan. While automated disaster recovery testing serves an important purpose in a DR plan, it only tests the technical components. If a real disaster occurs, it's the people within an organization who will need to know what to do to rapidly restore uptime.
When everyone knows what to do in response to an emergency, your DR plan will be more effective than it would be if nobody knew what to do when a disaster occurs.
Step 4: Determine your recovery goals
Decide how quickly your organization needs to recover, and set your RTOs and RPOs. This could involve prioritizing which data needs to be accessed immediately and which is less important. Data that doesn't require immediate access could be assigned a longer recovery time and less frequent backups, while important data, such as financial and compliance records, could be assigned more urgent RPOs and RTOs, or even a backup server that takes over from the main server during the disaster recovery process.
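To make this concrete, recovery goals are often captured as data-classification tiers. The Python sketch below illustrates how RPO targets might drive backup frequency; the tier names and values are assumptions for illustration, not recommendations:

```python
# Hypothetical data-classification tiers. RTO = maximum tolerable
# downtime; RPO = maximum tolerable data loss, which in turn bounds
# how often backups must run.
RECOVERY_TIERS = {
    "financials":   {"rto_hours": 1,  "rpo_hours": 1},
    "compliance":   {"rto_hours": 2,  "rpo_hours": 1},
    "general_docs": {"rto_hours": 24, "rpo_hours": 12},
    "archives":     {"rto_hours": 72, "rpo_hours": 24},
}

def backup_interval_hours(tier: str) -> int:
    """A backup must run at least once per RPO window; otherwise,
    data written since the last backup could exceed the objective."""
    return RECOVERY_TIERS[tier]["rpo_hours"]

for tier, goals in RECOVERY_TIERS.items():
    print(f"{tier}: back up every {backup_interval_hours(tier)}h, "
          f"restore within {goals['rto_hours']}h")
```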
Step 5: Implement a cloud data storage solution
Disasters like cyber attacks and ransomware attacks could destroy an organization's primary data storage solution, resulting in the permanent loss of that data. Cloud-based solutions can automatically copy data offsite every few days (or even every few hours). Unlike older, manual backup methods requiring users to copy data to a disk or USB drive, backups via a cloud-based solution can be carried out at any time, without having to access physical media.
Similarly, if the physical assets storing your data are damaged by fire, flood, or human tampering, remote data backup can help minimize business disruption.
Disaster recovery plan review
Here, the DR plan owner and other members of the team behind its development and implementation closely review the plan, to find any inconsistencies or missing elements.
Much like a first rehearsal, stakeholders walk step by step through all the components of a disaster recovery plan. This helps determine if everyone knows what they are supposed to do in case of an emergency and uncovers any inconsistencies, missing information or errors.
Simulating disaster scenarios is a good way to see if the disaster recovery procedures and resources, including backup systems and recovery sites allocated for disaster recovery and business continuity, actually work. A simulation involves running a variety of disaster scenarios to see if the teams involved in the DR process can restart technologies and business operations quickly and effectively. This process can also determine whether there is sufficient staff to execute the DR plan properly.
Image source: TechTarget https://cdn.ttgtmedia.com/rms/onlineimages/disaster_recovery-bcdr_planning_scenarios_desktop.png
With your DR plan in place, and team members ready for any scenario, disaster recovery testing can go ahead. But a disaster recovery plan is only as good as its weakest link, so organizations should commit to regular disaster recovery testing. The frequency of testing depends on the business but this is another element of a DR plan that needs to be determined ahead of time.
Why monitoring and performance management should be part of every DR plan
Having third-party monitoring tools in place can actually help to avoid certain disasters, or at least reduce their severity. Monitoring can pick up anomalies within a system and identify potential issues. IR's Collaborate suite of performance solutions provides the insights an organization needs to make proactive business decisions, formulate effective DR plans, and create an efficient production environment.
- Comprehensive monitoring, surveillance, alerting, and reporting helps you meet and manage your SLAs by ensuring your systems and applications are running at peak performance.
- Gather real-time intelligence across a wide range of data points and criteria
- Customizable dashboards provide deep visibility that can help identify problems in real time. This allows you to take immediate action to solve issues before they impact the broader business.
Disaster recovery testing checklist
- Clearly identify goals, objectives and procedures to support a post-testing analysis. Create a test team, including subject matter experts, and make sure everyone is available for the planned testing date
- Determine exactly what to test
- Carefully document and be prepared to edit your DR plan and disaster recovery testing scripts
- Include all relevant technology elements and processes being tested in the plan
- Ensure the test environment is ready, and won't affect production systems or conflict with other activities
- If testing is going to take a significant amount of time, schedule it far in advance
- Perform a practice exercise before the disaster recovery test goes live to uncover and fix potential problems
- Stop and review the test when issues arise and reschedule if necessary
- Keep comprehensive records of start and end times, what occurred, what worked and what didn't (a minimal record structure is sketched after this list).
- Update disaster recovery and business continuity plans and other documents based on what's been learned from the DR test.
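As a minimal illustration of the record-keeping item above, a DR test log entry could be captured in a structure like the following. The field names are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DRTestRecord:
    """One entry in a disaster recovery test log."""
    scenario: str                 # e.g., "primary site power failure"
    start: datetime
    end: datetime
    worked: list = field(default_factory=list)        # what went right
    failed: list = field(default_factory=list)        # what went wrong
    plan_updates: list = field(default_factory=list)  # feeds back into the DR plan

    @property
    def duration_minutes(self) -> float:
        return (self.end - self.start).total_seconds() / 60
```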
This chapter presents a number of scenarios to give an impression of the types of activities that are performed by people who run networks for a living. We refer to them collectively as network managers, although they perform a wide variety of functions that have more specialized job titles. In fact, strangely enough, the term network manager is rarely used for the people involved in managing networks. Instead, terms such as network operator, network administrator, network planner, craft technician, and help desk representative are much more common. Each of those terms refers to a more special function that is only one aspect of network management.
The chapter also provides an overview of some of the tools network managers have at their disposal to help them do their jobs. The intention is to give you a taste of the kinds of tasks and challenges that network managers face and how network management tools support their work.
Ultimately, the network management technology introduced in this book exists in an operational context. Although this idea might seem self-evident, it must be understood and emphasized, particularly for people who are not themselves users but are providers of network management technology—application providers, equipment vendors, and systems integrators. Network management involves not just technology, but also a human dimension—how people use management tools and management technology to achieve a given purpose, and how people who perform management functions and who are ultimately responsible for the fact that networks and networking services are running smoothly can best be supported. In addition, the organizational dimension must be considered—how the tasks and workflows are organized, how people involved in managing a network work together, and what procedures they have in place and must follow to collectively get the job done.
Reading this chapter will help you understand the following:
- The types of tasks that people involved in the day-to-day operations of networks face
- How network management technology supports network operators in those tasks
- The different types of management tools that are available to help people running a network do their job
A Day in the Life of a Network Manager
Let us consider some typical scenarios people face as they run networks. No single scenario is representative by itself. Scenarios differ widely depending on a number of factors. One factor is the type of organization that runs the network. We refer to this organization as the network provider. The IT department of a small business, for example, runs its network quite differently than the IT department of a global enterprise or, for that matter, a global telecommunications service provider. Another factor is the particular function that the network manager plays within the organization. An administrator in an IT department, for example, has different responsibilities than a field technician or a customer-facing service representative. To cover the diversity of possible scenarios, this chapter examines the roles of several network managers.
The examples in this chapter are intended to be illustrative. Therefore, they are by no means comprehensive. The examples contain simplifications, and, in reality, the details described differ widely among network providers. Even people who have the same job description might perform their job functions in different ways. Ultimately, how they manage their networks differentiates network providers from one another, hence the presented scenarios should not be expected to be universally the same. Finally, don't worry if you are not familiar with all the networking details that are contained in the examples; they constitute merely the backdrop against which the storylines play out.
Pat: A Network Operator for a Global Service Provider
Meet Pat. Pat works as a network operator at the Network Operations Center (NOC) of a global service provider that we shall call GSP. She and her group are responsible for monitoring both the global backbone network and the access network, which, in essence, constitutes the customer on-ramp to GSP's network. This is a big responsibility. Several terabytes of data move over GSP's backbone daily, connecting several million end customers as well as a significant percentage of global Fortune 500 companies. Even with the recent crisis in the telecommunications industry, GSP is a multibillion-dollar business whose reputation rests in no small part on its capability to provide services on a large scale and global basis with 99.999% (often referred to as "five nines") service availability. Any disruption to this service could have huge economic implications, leading to revenue losses of millions of dollars, exposing GSP to penalties and liability claims, and putting jobs in jeopardy.
Pat works directly in command central in a large room with big maps of the world on screens in front, showing the main sites of the network. Figure 2-1 depicts such a command central.
Figure 2-1 An Example of a Command Central Inside a NOC
(Figure used with kind permission from ish GmbH&Co KG)
In addition to the big maps, several screens display various pieces of information. For example, they show statistics on network utilization, information about current delays and service levels experienced by the network's users, and the number of problems that have been reported in different geographic areas. This gives everybody in the room a good overall sense of what is currently going on—whether things are in crises mode or whether everything is running smoothly.
Normally, everything on the map appears green. This means that everything is operational and that utilization on the network is such that even if an outage in part of the network were to occur, network traffic could be rerouted instantly without anyone experiencing a service outage. The network is designed to withstand outages and disruptions in any one part of the network. However, Pat still remembers the anxiety that set in on a couple occasions when suddenly links or even entire nodes on the map turned yellow or red. Once, for example, a construction crew dug through one of the main fiber lines that connect two of GSP's main hubs. And who could forget 9/11, when suddenly millions of people wanted to call into New York at the same time, while at the same time seemingly every news organization in the world requested additional capacity for their video feeds?
On Pat's desk is an additional, smaller screen that shows a list of problems that have been reported about the network. Pat has been assigned to monitor a region of the southeastern United States for any problems and impending signs of trouble. Pat sees on her screen a list of so-called trouble tickets, which represent currently known problems in the network and are used to track their resolution.
Those trouble tickets have two sources: problems that customers have reported and problems in the network itself. Let's start with customer-reported problems.
For every call that is received from a customer about a network problem, one of the customer service representatives at the help desk in building 7 opens a trouble ticket. The rep provides what GSP refers to as "tier 1 support." Those service reps have their own procedures. The person who first answers the call records a description of the problem, according to the customer, and asks the customer a series of questions, depending on the type of problem reported. If the service rep cannot help the customer right away, the customer is transferred to someone who is more experienced in troubleshooting the problem. That person is part of the second support tier. If this more experienced rep cannot solve the problem, or if it takes him or her too long to do so, the ticket is assigned to the people in Pat's group and shows up on Pat's screen. Pat's group provides the third tier of support.
The tickets contain a description of the problem, who is affected, and contact information. At least, this is what they are supposed to contain; sometimes Pat's group gets tickets with little or no information. In those cases, someone from Pat's group must call the service rep who first entered the ticket and find out more, which is always painful for everyone involved. It can be embarrassing when, in the worst case, Pat's co-workers need to call the customer back and the customer realizes that GSP is only starting to follow up on a serious problem hours after it was reported.
The second source of tickets is the network itself. These tickets are reported by systems that monitor alarm messages sent from equipment in the network. The problem with alarm messages is that they rarely indicate the root cause of the problem; in most cases, they merely reflect a symptom that could be caused by any number of things. Pat doesn't see every single alarm in the network—that would be far too many. For this reason, the alarm monitoring system tries to precorrelate and group alarm messages that seem to point to the same underlying problem. For each unique problem that alarm messages seem to point to, the alarm monitoring system automatically opens a ticket and attaches the various alarm messages to it, along with an automated diagnosis and even a recommended repair action. Ideally, the underlying problem can be corrected and the ticket closed before customers notice service degradation and corresponding customer-reported trouble tickets are opened.
Seeing messages grouped in this way is much more practical than having to deal with every single alarm individually. The sheer volume of alarms would quickly overwhelm Pat and her group. Also, tickets that are system generated are typically issued against the particular piece of equipment in the network that seems to be in distress. This makes system-generated tickets a little easier to deal with than customer-generated tickets, which often leave Pat's group feeling puzzled over where to start.
Pat remembers that tickets generated by alarm applications were problematic in the past. Often many more trouble tickets were generated than there were actual problems, so Pat sometimes saw 20 tickets that all related to the same problem. However, GSP has made significant progress in recent years—system-generated trouble tickets have become pretty accurate, with redundant tickets generated only in a small portion of cases. GSP's investment in developing better correlation rules for their systems paid off. Although Pat is an operator, not a developer, she knows that she was an important part of the development process because she provided much of the expertise that was encoded into those correlation rules. She still remembers being interviewed by a group of consultants for that purpose. During numerous sessions over the span of several months, they asked about how she determined whether problems that were reported separately were related.
Of course, despite all the progress made, many tickets still relate to the same underlying root cause. Many of those are tickets that were not automatically generated but instead were opened by customers. Perhaps a particular component in the access network through which customers were all connected to the network has failed, causing all of them to report a problem.
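To make the idea of precorrelation concrete, the following sketch shows one simple grouping strategy: alarms raised against the same network element within a short time window are assumed to share a root cause, and one ticket would be opened per group. This is an illustrative toy, not GSP's actual rules, which, as noted, took months of expert interviews to develop:

```python
def correlate_alarms(alarms, window_seconds=300):
    """Group alarms that likely point to the same underlying problem.

    Each alarm is a dict with at least "element_id" and "time"
    (seconds). Real correlation engines also use topology, alarm
    type, and causal models; this sketch uses only element and time.
    """
    groups = []
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        for group in groups:
            if (group["element"] == alarm["element_id"] and
                    alarm["time"] - group["alarms"][-1]["time"] <= window_seconds):
                group["alarms"].append(alarm)
                break
        else:
            groups.append({"element": alarm["element_id"],
                           "alarms": [alarm]})
    return groups  # one trouble ticket would be opened per group

alarms = [
    {"element_id": "switch-7", "time": 0},
    {"element_id": "switch-7", "time": 42},
    {"element_id": "router-3", "time": 60},
]
print(len(correlate_alarms(alarms)))  # -> 2 tickets, not 3
```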
When clicking on a trouble ticket, Pat can see all the information associated with it. Pat must first acknowledge that she has read each ticket that comes in. If she does not acknowledge the ticket, it is automatically escalated to her supervisor. In busy times, this feels almost like a video game: Whenever a new ticket appears on the screen, she effectively "shoots it down" to stop it from flashing. Of course, acknowledging is only the first step. Next, Pat must analyze the ticket information. For the most part, her tasks are fairly routine. First she checks whether there are other tickets that might relate to the same problem. If there are, she attaches a note to the ticket that points to the other ticket(s) already being worked on. The system is intelligent enough to update the information in the other ticket to cross-reference the new one, thereby providing additional information that could prove useful in resolving it. This effectively leads to a hierarchy of tickets in which the original ticket constitutes a master ticket and the new ticket becomes a subordinate to the master. Pat then tables the resolution of the subordinate ticket until the master ticket that is already being worked on is resolved. At that point, she revisits the ticket to see whether the problem still exists or whether it can be closed also.
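The master/subordinate relationship Pat maintains between tickets can be pictured as a small data structure with two-way cross-references, so that updating either ticket surfaces the other. Again, this is a hedged sketch of the concept rather than any vendor's trouble-ticket system:

```python
class TroubleTicket:
    def __init__(self, ticket_id, description):
        self.ticket_id = ticket_id
        self.description = description
        self.master = None        # set if this is a subordinate
        self.subordinates = []    # set if this is a master
        self.notes = []

    def link_to_master(self, master):
        """Attach this ticket under a master and cross-reference
        both ways, mirroring the behavior described in the text."""
        self.master = master
        master.subordinates.append(self)
        self.notes.append(f"Possible duplicate of {master.ticket_id}")
        master.notes.append(f"Related report: {self.ticket_id}")
```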
If she does not identify an existing ticket that might be related, she starts diagnosing the root cause of the problem. Let us assume that, in this case, the ticket was opened by a customer. Pat brings up the service inventory system to check which pieces of equipment were specifically configured to help provide service for that customer. With this knowledge, she brings up the monitoring application for the portion of the network that is affected to see for herself what is going on. This application offers her a view with the graphical representation of the device from which she can see the current state of the device, how its parameter settings have been configured, and the current communications activity at the device. She begins troubleshooting, starting with verifying the symptoms that are reported in the network.
In some cases, Pat eventually decides that a piece of equipment needs to be replaced, such as a card in a switch. In those cases, she brings up another tool, a work order system. She creates a new work order and specifies which card needs to be replaced. She enters the identifier of the trouble ticket as related information. This automatically populates the fields in the work order that identify the piece of network equipment and where it is located. Pat considers this to be a particularly nice feature. In the old days, she had to manually retype this information and also look up the precise location of the network element in the network inventory system. Now all those back-office systems are interconnected. She enters additional comments and submits the work order, and off it goes. This is all that she has to do for now.
It is not Pat's responsibility to dispatch a field technician or to check the inventory for spare parts; this is the job of her colleagues in the group that processes and follows up on equipment work orders. Actually, there are several groups, depending on where the equipment is located. Sometimes the equipment is in such a remote location that people have to physically get out there—"roll a truck," they call it. This is often the case for equipment in the access network. As mentioned earlier, the access network is the portion of the network that funnels network traffic from the customer sites to GSP's core network. In other cases—specifically, when the core network is affected—the equipment is at the NOC, in an adjacent building. Pat was once able to peek inside a room with all the equipment—many rows of rack-mounted equipment, similar to Figure 2-2.
Figure 2-2 Rack-Mounted Network Equipment
Pat's friends tell her that the NOC equipment is more compact than it used to be, but Pat still finds it very impressive, especially the cables (cables are shown in Figure 2-3). Literally hundreds, if not thousands, of cables exist; taken together, they would surely stretch across many miles. You would never want to lose track of what each cable connects to. Although it all looks surprisingly neat, Pat can only imagine what a challenge it must be to move the NOC to a different location if that ever becomes necessary.
Figure 2-3 Cabling and Equipment Backside
(Figure used with kind permission from ish GmbH&Co KG)
Pat knows that the groups that do equipment work orders operate in similar fashion to her own group. The workflows are all predefined, and their work order system takes them through the necessary steps, autoescalates things when necessary, and generally makes sure that nothing can fall through the cracks—for example, it ensures that a work order does not sit unattended for days. It's impressive how integrated some of the procedures have become. For example, Pat has heard that when the technicians exchange a part, they scan it using a bar-code scanner that automatically updates the central inventory system. The system then warns them right away if they are scanning a different component than the one they are supposed to enter with the work order. In the past, occasional mismatches occurred between the equipment that was deployed and the equipment that was supposed to be there. This could lead to all kinds of problems—for example, equipment might be preconfigured in a certain way that would then no longer work as planned, or the installed equipment had different properties than expected. Those were rare but nasty scenarios to track and resolve.
Pat notes in the trouble ticket what she did and enters the identifier of the work order and when resolution is expected. For now, she is finished.
When the work order is fulfilled, Pat will find in her in-box a notification from the work order system identifying the trouble ticket that was linked to the work order and that should now be resolved. When she receives this notification, she does a quick sanity check to see if everything is up and running, and then closes the ticket for good.
When Pat first started her job, she was sometimes tempted to close the tickets right away without doing the check. Her department kept precise statistics on the number of tickets that she processed, the number of tickets that she had outstanding or was currently working on, the average duration of resolution for a ticket, and the number of tickets that had to be escalated. Of course, Pat wanted those numbers to look good because they were an indication of her productivity. Therefore, it was seemingly rewarding to take some shortcuts. It appeared that even in the unexpected case that a problem had not been resolved, someone would simply open a new ticket and no harm would be done. However, Pat soon learned that any such procedure violation would be taken extremely seriously. She now understands that procedures are essential for GSP to control quality of the services it provides. Doing things the proper way has therefore become second nature to her.
Chris: Network Administrator for a Medium-Size Business
Meet Chris. Together with a colleague who is currently on vacation, Chris is responsible for the computer and networking infrastructure of a retail chain, RC Stores, with a headquarters and 40 branch locations. RC Stores' network (see Figure 2-4) contains close to 100 routers: typically, an access router and a wireless router in the branch locations, and additional networking infrastructure in the headquarters and at the warehouse.
Figure 2-4 RC Stores' Network
The company has turned to a managed service provider (MSP) to interconnect the various locations of its network. To this end, the MSP has set up a Virtual Private Network (VPN) with tunnels between the access routers at each site that connects all the branch locations and the headquarters. This means that the entire company's network can be managed as one network. Although the MSP worries about the interconnectivity among the branch offices, Chris and his colleagues are their points of contact. Also, the contract with the MSP does not cover how the network is being used within the company. This is the responsibility of Chris and his colleagues.
Chris has a workstation at his desk that runs a management platform. This is a general-purpose management application used to monitor the network. At the core of the application is a graphical view of the network that displays the network topology. Each router is represented as an icon on the screen that is green, yellow, orange, or red, depending on its alarm state. This color coding allows Chris to see at first glance whether everything is up and running.
Even though the network is of only moderate size, displaying the entire topology at the same time would leave the screen pretty cluttered. Chris has therefore built a small topology map in which multiple routers are grouped into "clusters" that are represented by another icon. Each cluster encompasses several locations. In addition, there is a cluster each for the headquarters and the warehouse. This configuration enables Chris to display only the clusters and thereby view the whole network at once. Chris can also expand ("zoom into") individual clusters when needed to see what each consists of. As with the icons of the routers, the icons for the clusters are colored corresponding to the most severe alarm state of what is contained within. This way, Chris does not miss a router problem, even though the router might be hidden deep inside a cluster on the map. As long as the cluster is green, Chris knows that everything within it is, too. Figure 2-5 shows an example of a typical screen for such a management application.
Figure 2-5 A Typical Management Application Screen (Cisco Packet Telephony Center)
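The roll-up behavior Chris relies on, where a cluster icon shows the most severe state of anything inside it, can be expressed in a few lines. The severity ordering is taken from the colors mentioned above; the nesting scheme is an assumption for illustration:

```python
SEVERITY_ORDER = ["green", "yellow", "orange", "red"]  # least to most severe

def cluster_color(members):
    """Color a cluster by the worst state among its members.

    A member is either a device color string or a nested cluster
    (a list), so a red router deep inside a cluster is never masked.
    """
    worst = "green"
    for member in members:
        color = cluster_color(member) if isinstance(member, list) else member
        if SEVERITY_ORDER.index(color) > SEVERITY_ORDER.index(worst):
            worst = color
    return worst

# Two branch clusters; one hides a router in a red alarm state.
print(cluster_color([["green", "green"], ["green", "red"]]))  # -> "red"
```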
Mike calls from upstairs. Someone new is starting a job in finance tomorrow and will need a phone. Chris notes this in his to-do list. He will take care of this later. First, he is trying to get to the bottom of another problem.
Chris received some complaints from the folks at the Richmond branch that the performance of their network is a little sluggish. They have been experiencing this problem for a while now; they first complained about it ten days ago when access to the servers was slow. At the time, Chris wondered whether this was really a problem with the network or with the server. As an end user, he really had no way to tell the difference. Eventually, the problem went away by itself and Chris thought it might have been just a glitch. Then three days ago, the same thing happened, and it did again this morning. This time Chris tried accessing the server himself with the Richmond people on the call but did not notice anything unusual.
Chris thinks that perhaps it really is a problem with the network. He wonders whether the MSP really gives them the network performance that they have promised. The MSP sold Chris's company a service with 2 Mbps bandwidth from the branch locations and "three nines" (99.9%) availability from 6 am until 10 pm during weekdays, 98% during off hours. The people from the MSP did not contact Chris to indicate that there was a problem on the MSP's side, but maybe they don't know—and besides, why would they worry if they didn't get caught? Chris wonders whether he should have signed up for MSP's optional service that would have allowed him to view the current service statistics, as seen from the MSP's perspective, in near-real time over the web. Although Chris doesn't think the MSP can be entirely trusted, this would have provided an interesting additional data point.
From his management platform, Chris launches the device view for the router at the edge of the affected branch by clicking the icon of the topology map. The device view pops up in a window and contains a graphical representation of the device from which the current state, traffic statistics, and configuration parameter settings can be accessed. Currently, not much traffic appears to be going across the interface. From another window, Chris "pings" the router, checking the roundtrip time of IP packets to the router. Everything looks fine.
Chris decides that this problem requires observation over a longer period of time, so he pulls up a tool that enables him to take periodic performance snapshots. He specifies that a snapshot should be taken every 5 minutes of the traffic statistics of the outgoing port. Chris also wants to periodically measure the network delay and jitter to the access router at company headquarters and to the main server. The tool logs the results into a file that he can import into a spreadsheet. Spreadsheets can be very useful because they can plot charts, which makes it easy to discover trends or aberrations in the plotted curves. (Of course, sometimes management applications support some statistical views as well, as shown in Figure 2-6.)
Figure 2-6 Sample Screen of a Management Application with Performance Graphs (Cisco Works IP Performance Monitor)
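A tool like the one Chris sets up could be approximated with a small polling script. The sketch below takes a timestamped measurement every 5 minutes and appends it to a CSV file that a spreadsheet can import and chart. Timing a TCP connection stands in for the richer traffic, delay, and jitter probes real tools use:

```python
import csv
import socket
import time
from datetime import datetime

def probe_rtt_ms(host, port=80, timeout=5.0):
    """Rough response-time probe: time a TCP connect to the target.
    Returns milliseconds, or None if the target is unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def snapshot_loop(host, path, interval_s=300):
    """Append one measurement per interval for later charting."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            rtt = probe_rtt_ms(host)
            writer.writerow([datetime.now().isoformat(),
                             "" if rtt is None else round(rtt, 1)])
            f.flush()
            time.sleep(interval_s)
```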
For now, that seems all that he can do. Chris takes a look at his to-do list and decides to take care of the request for the new phone. He doesn't know whether they have spare phones, so he goes to the storage room to check. One is left, good. He will have to remember to stock up and order a few more. He then peeks at the cheat sheet that he has printed and pinned in his cubicle, which has the instructions on what to do when connecting a new user. Most phones in RC Stores' branch locations are assigned not to individual users, but to a location, such as a cashier location, so changes do not need to be made very often.
RC Stores recently replaced its old analog private branch exchange (PBX) system with a new Voice over IP (VoIP) PBX. This enables the company to make internal phone calls over its data network. It also has a gateway at headquarters that enables employees to make calls to the outside world over a classical phone network, when needed. Chris remembers that, to make phone calls, the old PBX worked just fine, but programming the phone numbers could be a pain. Phone numbers were tied to the PBX ports, so he had to remember which port of the PBX the phone outlet was connected to so he could program the right phone number. Because RC Stores had never bothered documenting the cabling plan in the building, there were sometimes unwelcome surprises. Connecting one new user wasn't that bad, but Chris would never forget when they were moving to a new building and he and his colleague spent all weekend to get the PBX network set up to ensure that everyone could keep their extensions.
Now it is simpler. Chris jots down the MAC address from a little sticker on the back of the IP phone and brings up the IP PBX device manager application. He also gets his sheet on which he notes the phone numbers that are in use. His method to assign phone numbers is nothing fancy. He has printed a table with all the available extensions. Jotted on the table in pencil is the information on whether a phone number is in use. Chris selects a number that is free, crosses it out, and notes the name of the new person who is assigned the number, along with the MAC address of the phone.
Chris then goes into the IP PBX device manager screen to add a new user. The menu walks him through what he needs to do: He enters the MAC address and the phone extension, along with the privileges for the phone. In this case, the user is allowed to place calls to the outside. Now all that remains to be done is to add voice mail for the user. He starts another program, the configuration tool for the user's voice-mail server. RC Stores decided to go with a different vendor for voice mail than for the IP PBX. Chris often moans over that decision. Although having different vendors resulted in an attractive price and a few additional features, he now has to administer two separate systems. Not only does he need to retype some of the same information that he just entered, such as username and phone number, but he also needs to worry about things such as making separate system backups. Chris leaves the capacity of the voice mail box at 20 minutes, as the application suggested for the default; it is the company's policy that everyone gets 20 minutes capacity except department heads and secretaries, who get an hour.
The phone extension is now tied to the phone itself, regardless of where on the network it is physically plugged in. Chris walks over to the Human Resources (HR) person upstairs and asks where the new employee will sit. He carries over the phone right away, plugs it into the outlet, and makes sure that it works. He must remember to send a note to HR to let them know the number he assigned so they can update the company directory. Chris has been intending for some time to write a script that provisions new phones and automatically updates the company directory at the same time. Unfortunately, he has not gotten around to it yet. Maybe tomorrow.
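The provisioning script Chris keeps meaning to write might look like the sketch below. The four systems are modeled as in-memory dictionaries; in reality each call would go to a vendor-specific API. The point is the workflow: one entry point updates the IP PBX, the voice-mail server, the number inventory, and the company directory together, so they can no longer drift out of sync:

```python
# In-memory stand-ins for the four systems that must stay in sync.
pbx, voicemail, directory, number_inventory = {}, {}, {}, {}

def provision_phone(name, mac_address, extension, *,
                    external_calls=True, voicemail_minutes=20):
    """Provision a new IP phone across all systems in one step."""
    if extension in number_inventory:
        raise ValueError(f"extension {extension} is already assigned")
    pbx[mac_address] = {"extension": extension,
                        "allow_external": external_calls}
    voicemail[extension] = {"capacity_minutes": voicemail_minutes}
    directory[name] = extension
    number_inventory[extension] = {"name": name, "mac": mac_address}

provision_phone("New Hire", "00:1A:2B:3C:4D:5E", "4217")
print(directory)  # {'New Hire': '4217'}, updated automatically
```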
Chris goes back to his desk and checks on the performance data that is still being collected. Things look okay; he will just let it run until the problem occurs again so that he has the data when it is needed. In addition, he decides that he wants to be notified right away when sluggish network performance is experienced. He goes again into his management platform and launches a function that lets him set up an alert that is sent when the measured response time between any two given points in the network exceeds a certain amount of time. He configures it to automatically check response time once per minute and to send him an alert to his pager when the response time exceeds 5 seconds. He hopes that this will give him a chance to look at things while the problem is actually occurring, not after the fact.
Chris realizes that the response time is needed for two purposes—once for the statistics collection function, once for the alerting function. Currently, there is no way to tie the two functions together. Therefore, the response times will simply be measured twice. Although this is not the most efficient method, there is no reason for Chris to worry about it.
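The alerting function Chris configures could reuse the same kind of probe as the snapshot sketch shown earlier (the hypothetical probe_rtt_ms helper), checking once per minute and paging when the 5-second threshold is exceeded. The page callback is an assumed stand-in for a real paging gateway:

```python
import time

def alert_loop(host, page, probe, threshold_ms=5000, interval_s=60):
    """Check response time once per interval; page on breaches.

    `probe` is a function like probe_rtt_ms from the earlier sketch;
    `page` delivers a short message to Chris's pager.
    """
    while True:
        rtt = probe(host)
        if rtt is None:
            page(f"{host} unreachable")
        elif rtt > threshold_ms:
            page(f"response time to {host}: {rtt:.0f} ms")
        time.sleep(interval_s)
```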
Thinking about it, Chris suspects that the problem is related to someone initiating large file transfers. Perhaps an employee is using the company's network to download movies from the Internet. If this is the case, it would be a clear violation of company policy. Not only does it represent an abuse of company resources, but, more important, it also introduces security risks. For example, someone could download a program containing a Trojan horse from the outside and then let it run on the company network. Of course, Chris has set up the infrastructure to regularly push updates of the company's security protection software to the servers, but this alone does not protect against all possible scenarios. All the efforts to secure the network against attacks from the outside do not help if someone potentially compromises network security from the inside. Chris thinks that this hypothesis makes sense. The gateway that connects the company to the Internet is located at headquarters, and from the remote branch someone would have to go first via the company's VPN to that gateway to go outside. The additional traffic on the link between the remote branch and headquarters might be enough to negatively affect other connected applications. So maybe the problem resides with RC Stores after all, not with the MSP.
In any event, Chris knows that when the symptom occurs again, he will be able to find out what is going on by using his traffic analyzer, another management tool. He will be able to pull up the traffic analyzer from his management station to check what type of data traffic is currently flowing over a particular router—the gateway to the Internet, in that case—and where it originates.
Before Chris leaves in the evening, he forwards his phone extension to his mobile, in case something comes up. Also, he brings up the function in the alarm management portion of his management platform application and programs it to send him a page if an alarm of critical severity occurs, such as the failure of an access router that causes a loss in connectivity between a branch and headquarters. Chris has remote access to the VPN from home and can log into his management application remotely, if required.
Sandy: Administrator and Planner in an Internet Data Center
Meet Sandy. Sandy works in the Internet Data Center for a global Fortune 500 company, F500, Inc. The data center is at the center of the company's intranet, extranet, and Internet presence: It hosts the company's external website, which provides company and product information and connects customers to the online ordering system. More important, it is host to all the company's crucial business data: its product documents and specifications, its customer data, and its supplier data. In addition, the data center hosts the company's internal website through which most of this data can be accessed, given the proper access privileges.
F500, Inc.'s core business is not related to networking or high technology; it is a global consumer goods company. However, F500, Inc. decided that the functions provided by the Internet Data Center are so crucial to its business that it should not be outsourced. In the end, F500, Inc. differentiates itself from other companies not just through its products, but by the way the company organizes and manages its processes and value supply chains—functions for which the Internet Data Center is an essential component.
Sandy has been tasked with developing a plan for how to accommodate a new partner supplier. This will involve setting up the server and storage infrastructure for storing and sharing data that is critical for the business relationship. Also, an extranet over which the shared data can be accessed must be carved out. The extranet constitutes essentially its own Virtual Private Network that will be set up specifically for that purpose.
Sandy has a list of the databases that need to be shared; storage and network capacity must be assessed. Her plan is to set up a global directory structure for the file system in such a way that all data that pertains to the extranet is stored in a single directory subtree—perhaps a few, at most. She certainly does not want the data scattered across the board. Having it more consolidated will make many tasks easier. For example, she will need to define a strategy for automatic data backup and restoration. Of course, Sandy does not conduct backups manually; the software does that. Nevertheless, the backups need to be planned: where to back up to, when to back up, and how to redirect requests to access data to a redundant storage system while the backup is in progress.
Sandy's main concern, however, is with security. Having data conceptually reside in a common directory subtree makes it much easier to build a security cocoon around it. Security is a big consideration—after all, F500, Inc. has several partners, and none of them should see each other's data. A major part of the plan involves updating security policies—clearly defining who should be able to access what data. Those policies must be translated into configurations at several levels that involve the databases and hosts for the data, as well as the network components through which clients connect.
Several layers of security must be configured: Sandy needs to set up a new separate virtual LAN (VLAN) that will be dedicated to this extranet. A VLAN shares the same networking infrastructure as the rest of the data center network but defines a set of dedicated interfaces that will be used only by the VLAN; it allows the effective separation of traffic on the extranet from other network traffic. This way, extranet traffic cannot intentionally or unintentionally spill over to portions of the data center network that it is not intended for. The servers hosting the common directory subtree with the shared data will be connected to that VLAN. Sandy checks the network topology and identifies the network equipment that will be configured accordingly.
Figure 2-7 shows a typical screen from which networks can be configured. This particular screen allows the user to enter configuration parameters for a particular type of networking port.
Figure 2-7 Sample Screen of a Management Application That Allows the Configuration of Ports (Cisco WAN Manager 15.1)
In addition, access control lists (ACLs) on the routers need to be set up and updated to reflect the new security policy that should be in effect for this particular extranet. ACLs define rules that specify which type of network traffic is allowed between which locations, and which traffic should be blocked; in effect, they are used to build firewalls around the data. This creates the second layer of security.
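The first-match semantics of such ACLs can be illustrated with a small evaluator. The rules below are invented for the example (a partner subnet allowed to reach the shared data servers over HTTPS only); real ACLs are written in the router's own configuration syntax:

```python
from ipaddress import ip_address, ip_network

# (action, source network, destination network, destination port)
# A port of None matches any port. Rules are checked in order.
ACL = [
    ("permit", ip_network("198.51.100.0/24"), ip_network("10.20.30.0/28"), 443),
    ("deny",   ip_network("198.51.100.0/24"), ip_network("10.0.0.0/8"),    None),
]

def acl_decision(src, dst, port):
    """Return the action of the first matching rule."""
    src, dst = ip_address(src), ip_address(dst)
    for action, src_net, dst_net, rule_port in ACL:
        if src in src_net and dst in dst_net and rule_port in (None, port):
            return action
    return "deny"  # the implicit deny that ends every ACL

print(acl_decision("198.51.100.7", "10.20.30.5", 443))  # permit (HTTPS)
print(acl_decision("198.51.100.7", "10.20.30.5", 22))   # deny (SSH blocked)
```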
Finally, authentication, authorization, and accounting (AAA) servers need to be configured. AAA servers contain the privileges of individual users; when a client has connectivity to the server, access privileges are still enforced at the user and application levels. Any access to the data is logged. This way, it is possible to trace who accessed what information, in case it is ever required, such as for suspected security break-ins.
However, before she can proceed with any of that, Sandy needs to assess where the data will be hosted and any impact that could have on the internal data center topology. After all, without knowing what servers should be connected, it is premature to configure anything else. When the partner comes online, demand for the affected data is sure to increase.
Sandy pulls up the performance-analysis application. She is not interested in the current status of the Internet Data Center because operations personnel are looking after that. She is looking for the historical trends in performance and load. Sandy worries about the potential for bottlenecks, given that additional demand for data traffic and new traffic patterns can be expected. She takes a look at the performance statistics for the past month of the servers that are currently hosting the data. It seems they are fairly well utilized already. Also, disk space usage has been continuously increasing. At the current pace, disk space will run out in only a few more months. Of course, some of the data that is hosted on the servers is of no relevance to the partnership; in effect, it must be migrated and rehosted elsewhere. This should provide some relief. Still, it seems that, at a minimum, additional disks will be needed. Given the current system load, it might be necessary to bring a new server with additional capacity online and integrate it into the overall directory structure. Sandy might as well do this now. This way, she will not need to schedule an additional maintenance window later and can thus avoid a scheduled disruption of services in the data center.
Of course, the fact that data is kept redundantly in multiple places will be transparent (that is, invisible) to applications. All data is to be addressed using a common uniform resource identifier (URI). The data center uses a set of content switches that inspect the URI in a request for data and determine which particular server to route the request to. The content switch can serve as a load balancer in case the same data and same URI are hosted redundantly on multiple servers. The content switch is another component that must be configured so it knows about the new servers that are coming online and the data they contain. Sandy makes a mental note that she will need to incorporate this aspect into her plan.
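The content-switch behavior can be sketched as routing by URI prefix to a pool of servers, with round-robin balancing across redundant copies. Server names and paths are illustrative assumptions:

```python
from itertools import cycle

# URI prefix -> rotating pool of servers hosting that content.
POOLS = {
    "/partners/shared/": cycle(["srv-ext-1", "srv-ext-2"]),  # redundant copies
    "/products/":        cycle(["srv-web-1"]),
}

def route(uri):
    """Send a request to the longest matching prefix's next server.
    Applications address one URI; redundancy stays transparent."""
    matches = [prefix for prefix in POOLS if uri.startswith(prefix)]
    if not matches:
        raise LookupError(f"no pool serves {uri}")
    return next(POOLS[max(matches, key=len)])

print(route("/partners/shared/specs.pdf"))  # srv-ext-1
print(route("/partners/shared/specs.pdf"))  # srv-ext-2 (load balanced)
```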
This should suffice for now as an impression of the professional lives of Pat, Chris, Sandy, and many other people involved in running networks. At this point, a few observations are key:
- Pat, Chris, and Sandy handle their jobs in different ways. For example, in Pat's case, there are many specialized groups, each dealing with one specific task that represents just a small portion of running the network. On the other hand, Chris more or less needs to do it all. Sandy is less involved in the actual operations but more involved in the planning and setup of the infrastructure. This work includes not just network equipment, but computing infrastructure as well. There is no "one size fits all" in the way that networks are run.
- Pat, Chris, and Sandy all have different tools at their disposal to carry out their management tasks. We take a look at some of the management tools in the next section. Not all tools that they use are management systems; in Chris's case, we saw how a spreadsheet and a piece of paper can be effective management tools.
- A major aspect of Pat's job is determined by guidelines, procedures, and the way the work is organized. Systems that manage operational procedure and workflows are as much part of network management as systems that communicate with the equipment and services that are being managed. Their importance increases with the size and complexity of the network (and network infrastructure) that needs to be managed.
- Some tasks are carried out manually; some are automated. There is no one ideal method of network management, but there are alternative ways of doing things. Of course, some are more efficient than others.
- Management tasks involve different levels of abstraction and, in many cases, must be broken down into lower-level tasks. Chris and Sandy both were at one level concerned with a service (a voice service in one case, an extranet in the other case), yet they had to translate that concern into what it meant for individual network elements. Sandy had to worry about how business-level security policies, which state which parties are allowed to share which data, could be transformed into a working network configuration that involved a multitude of components.
- Many functions are involved in running a network—monitoring current network operations, diagnosing failures, configuring the network to provide a service, analyzing historical data, planning for future use of the network, setting up security mechanisms, managing the operations workforce, and much more.
- Integration between tools affects operator productivity. In the examples, we saw how Pat's productivity increased when she was supported by integrated applications, which, in that case, included a trouble ticket, a work order, and network monitoring systems. Chris, on the other hand, had to struggle with some steps that were not as integrated, such as needing to keep track of phone numbers in four different places (company directory, number inventory, and IP PBX and voice-mail configuration).
Later chapters will pick up on many of the themes that were encountered here, after discussing the technical underpinnings of the systems that enable Pat, Chris, and Sandy do their jobs. Before we conclude, however, let us take a look at some of the tools that help network providers manage networks. | <urn:uuid:69e38174-8bc2-48fa-ac72-8530a73e5546> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=680834 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00504.warc.gz | en | 0.965231 | 8,344 | 2.8125 | 3 |
Finance and Accounting
A quick guide to financial forecasting
In business development, the past and the present are indicators of what the future may look like. Leaders and CFOs need to keep an eye on the future of their business and here’s where reliable financial forecasting becomes crucial to driving business growth. Without it, you are just flying blind with no navigational support.
What is financial forecasting?
Financial forecasting involves forecasting or projecting the future performance of a business. Although it may sound similar to fortune telling using a crystal ball, financial forecasting is a well-defined method that organisations use to anticipate future revenues, costs, and cash flows using past data, external market conditions, and economic factors to simulate future scenarios.
The benefits of financial forecasting
Financial forecasting helps businesses in understand where they are headed as well as in strategising measures for growth. Financial planning and forecasting work together — the insights from forecasting help businesses in building accurate strategies for budgeting, cash flows, and other financials during a fiscal period. Moreover, mitigating risks and setting realistic goals with the help of forecasting help businesses avoid overspending and diverting capital on uncertain investments.
Financial forecasting helps in identifying problem areas through data analysis. Finally, regular financial forecasting benefits the image of the company. It also provides investors with an insight into the business goals and potential returns on their investments. Mergers and acquisitions also demand financial forecasts to assess the future of the company.
Financial forecasting methods
Financial forecasting involves using pro forma or forecasting financial statements that form the basis of further analysis. These projections are based on both internal and external assumptions. The three important pro forma statements are:
- Income statement: It shows the profit or loss over a period. Financial projections show how various variables may affect income, cost of goods sold, and other factors impacting the bottom line.
- Cash flow statement: It shows the possible amount of cash coming in and going out in the future. Income statement and balance sheet forecasts help determine the future cash status, which is critical to running a business.
- Pro forma balance sheet: It shows the company’s position at a point in time. Through projections of debtors, creditors, cash collections, and other aspects, the pro forma balance sheet shows what the position of the business will be in the future.
Financial forecasting methods are classified into two types:
Quantitative forecasting methods produce accurate forecasts with numbers and past business data.
- The percent of sales method uses historical data, which is expressed as a percentage of sales, such as profit or cost of goods sold, and applies the same growth rate for other future financial metrics.
- The straight line method posits that the business’ historical growth rate will remain constant. Forecasting future revenue entails multiplying a company's sales from the previous year by its growth rate. However, it does not account for market fluctuations or other factors that may affect growth rate.
- The moving average method is more granular and involves considering the highs and lows of historical data and taking the weighted average of previous years to forecast the future. This method is beneficial for short-term forecasting.
- Simple linear regression shows the relationship between two variables: one dependent and another independent. The dependent variable provides the forecast amount, while the independent variable is the factor that affects the dependent variable.
- Multiple linear regression works when more than one factor affects business performance. For this method to show accurate results, one dependent and other independent variables must have a linear correlation. Moreover, the independent variables should not be closely related to make it easier to identify the factor that affects the forecast variable.
Qualitative forecasting is more subjective but provides deeper insight into factors that can’t be predicted using historical data.
- Delphi method involves getting expert advice on subjective matters such as market conditions, usually with the help of a questionnaire. When the opinions are collated and reviewed, second or multiple rounds of consultations occur with another set of experts until there is a consensus.
- Market research is essential for organisational planning and especially helpful when there is no historical data, such as for start-ups. It gives a comprehensive view of the market based on competition, market fluctuations, customer behaviour, and other external factors affecting the industry.
Financial forecasting versus financial modelling
Often confused with each other, financial forecasting and modelling are two different activities carried out by a business with the same goals of knowing the future of business growth. If financial forecasting is the ‘why’ of financial planning, then financial modelling is the ‘how.’ It involves creating models of probable scenarios and looking at how financial metrics will perform under hypothetical circumstances. Businesses use different forecasting models such as top-down, bottom-up, Delphi, correlation-based, statistical, and asset and liability management.
A quick guide to financial forecasting
- Define the purpose of the financial forecast as it helps to determine the metrics and factors to consider while carrying out the forecast. The goals could range from estimating revenue to understanding how a merger might affect the bottom line.
- Gather accurate historical data such as revenue, losses, liabilities, investments, equity, expenses, fixed costs, and other numbers relevant to the forecast.
- Choose a time frame for the forecast. Financial forecasts typically show greater accuracy in the short term than in the long run. Usually, businesses carry out projections for one fiscal year.
- Select a forecasting method based on the purpose and available data.
- Monitor the results to understand how financial forecasting has reflected the latest developments and document the changes and deviations to understand the factors causing the changes.
- Analyse the data to know if the forecasts have been accurate. Continuous financial management assists in understanding the current performance and preparing the following year’s financial forecasting.
- Repeat the process according to the decided time frame. It helps in collecting, recording, and analysing data for accurate financial forecasting in the future and staying in control of your business.
For organisations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed on organisational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism, will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. Equipping organisations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organisations that are innovating collaboratively for the future.
How can Infosys BPM help?
Financial forecasting and modelling can be a tedious yet sensitive process that requires in-depth expertise and accuracy. The simplified end-to-end digital finance solutions of Infosys BPM improve the accuracy of F&A processes and metrics through automation, AI/RPA and data analytics. The comprehensive solutions help clients align and enhance their enterprise capabilities with scalable finance technology. | <urn:uuid:c71f2453-3048-4aea-8da4-b309c580c72e> | CC-MAIN-2022-40 | https://www.infosysbpm.com/blogs/finance-accounting/guide-to-financial-forecasting-for-business-growth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00704.warc.gz | en | 0.917721 | 1,454 | 3.203125 | 3 |
Ever wondered how roaming will work on 5G networks? Will it be any different to 2G, 3G and 4G roaming? Will the broad range of new spectrum being deployed for 5G be an issue? This article will to cover roaming in broad manner and how it will be conceptualized in a 5G Network.
What is roaming?
Roaming is a critical and popular feature of mobile telecommunications that allows subscribers to use their mobile services outside of their home networks, i.e. their service provider’s coverage area. Services can be making or receiving calls, mobile data usage and supplementary services like call forwarding, etc.
Roaming provides additional revenue streams for operators as they can connect outside visitors to their network as well as allowing their home subscribers to use a visitor network.
The roaming concept has already been utilised by 2G/3G/4G subscribers that have enjoyed this feature worldwide.
How roaming works
Roaming agreements and connectivity (of signaling and bearer) between the operators are required to enable this feature. Connectivity can be direct between the operators (e.g. between neighboring countries) or it can be via and International Hub (e.g. generally when countries are geographically far away). For obvious reasons, operators usually maintain roaming agreements with more than one roaming partner. This allows reliability as well as robustness in the roaming connectivity offered to their home subscribers.
Concepts of Inbound and Outbound Roamers
Home Subscriber activates the Roaming Pack based on the Region/Country of Visit.
Roaming Pack gets activated in the Subscriber’s package.
Home Subscriber moves to the Roaming Country and Switches ON the Phone.
Subscriber chooses either of the 2 Options to register to the Network of Roaming Operator; choose mobile network automatically or choose operator manually from the list.
The Roaming Operator checks the identity of the Subscriber before it latches it on the Network. It verifies whether the Foreign Subscriber has subscribed to Roaming services or not.
Roaming Operator checks with Home Subscriber’s HLR or HSS using Subscriber’s IMSI and downloads the profile in its Visiting Register (E.g. VLR of Roaming Operator).
From the Subscriber’s Subscription Profile, it knows which all Mobile Services are to be allowed in its Network.
The Visiting Network updates Home Operator’s HLS/HSS with the current roaming location of UE.
As the UE is latched on the Roaming Network, Subscriber can start consuming the Voice & Supplementary Services using the resources of Visiting Network.
Additionally, for Data Roaming, GRX (GPRS Roaming Exchange) / IPX (IP Exchange) backbone comes into the picture.
This backbone enables the IP Connectivity between the Roaming Partners and allows IP data routing. Operators also utilize GRX/IPX connectivity for negotiating SLAs as per the required QoS during Roaming operations.
Other infrastructure related to Voice Service Connectivity (of Signaling and Bearer) remains intact for 2G/3G Operators.
4G is a complete IP-based network. So, the UE or Subscribers needs to have the IP Connectivity throughout the time it is latched on the 4G Network. The IP Connectivity can be obtained either from Home Network or Visiting Network.
Roaming approaches in a 5G Network
Before going to below section, please refer to the article on Basics of 5G Architecture where I have explained the basics of related 5G Network Functions and Service Based Architecture (SBA).
Local-Breakout: In Local Breakout, data traffic is routed directly from the Visiting Network (VPLMN) to the Data Network while authentication and handling of subscription data is handled in the Home Network (HPLMN). Basic roaming Policy and Charging is applied by the Visiting PCF and CHF as per roaming agreements. In this case only signaling data is routed to Home network. Here, the IP Address is obtained from the Visiting Network. It means that the UE is using Radio, SGSN (for GPRS/3G) / SGW (for 4G) / AMF/SMF (for 5G) and GGSN (for GPRS/3G) / PGW (for 4G) / UPF (for 5G) of the Roaming Network for the connectivity.
Home-Routed: In this case, the IP Address is obtained from the Home Network. Here, UE uses Radio & SGSN (for GPRS/3G) / SGW (for 4G) / AMF/SMF (for 5G) of the Roaming Network and GGSN (for GPRS/3G) / PGW (for 4G) / UPF (for 5G) of the Home Network.
In Home Routed, the Visiting Network data traffic is routed to Data Network via Home network. It provides more control to the Operators wrt offering roaming services, policy and charging the subscribers. However, it adds extra layer of complexity and lag in the network. Along with the signaling data, bearer data is also routed to the Home network.
In 5G Roaming, SEPP (Security Edge Protection Proxy) acts as a service relay between VPLMN and HPLMN for providing secured connection as well as hiding the network topology. You can compare its functionalities as similar to SBC (Session Border Controller) when Voice packets are routed from Core network to IMS network for VoLTE service.
Simplified View of Roaming Usage and Settlements
For the purpose of simplicity, 3rd Party Clearing House is not mentioned.
Roaming Settlements happen at the Operator’s level by exchanging roaming records for Inbound and Outbound Roamers. In general, these records are mediated via Clearing/Settlement House in the format agreed for the information exchange (E.g. TAP records) between the Roaming Partners.
Home Operator charges the Subscriber for Roaming via its Retail Billing. Operators sell the Roaming packs to their Subscribers to enable the Charging & Usage. Depending on the Regions or Countries, Usage Charges (e.g. Incoming Calls, Outgoing Calls, Data Usage, etc.) varies and you can find the Roaming Charges in the Telecom Operators websites.
It goes without saying that there was quite a no. of advancements made in this field for easing the use and reducing the costs. Roaming has also advanced to remove dependencies on factors like Vendor providers, Devices, Mobile Generation used, etc. between Home & Visiting Networks. Operators are upgrading their traditional OSS/BSS systems to prevent Bill Shocks, perform realtime charging of Roaming records, etc. and also trying to ease the Roaming Operations & Settlements with the technologies like eSIM, Multi IMSI SIM and Blockchain. | <urn:uuid:bee94bcb-7809-4ef3-a948-4ff21a4fcea3> | CC-MAIN-2022-40 | https://disruptive.asia/roaming-with-5g-whats-the-catch/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00704.warc.gz | en | 0.91608 | 1,454 | 2.578125 | 3 |
According to a recently released report, Germany is not adequately equipped to prevent terrorist attacks in its nuclear plants.
According to the Deutsche Presse-Agentur (DPA) news agency, the report was presented by Oda Becker, an independent expert on nuclear plants.
This is of course extremely distressing, especially in the light of the recent tragic events in Belgium with substantial casualties.
The report was brought to public attention at the German Federation for the Environment and Nature Conservation (BUND) Congress, where concerns were expressed towards protecting citizens from catastrophic consequences of another terrorist attack.
When an aircraft is about to collide, there is little that can be done from the defensive line of the nuclear plants to prevent the inevitable.
The same level of threat is expressed through the option of helicopters filled with explosives. There is nothing to prevent such acts, causing a massive destruction and severe radiation flowing everywhere.
Terrorism is one of the major threats to the industry of nuclear plants, making these facilities one of the most prestigious targets to focus on.
“A serious accident is possible in case of every German nuclear plant,” Becker explained in a separate study published on March 8 and titled “Nuclear power 2016 – secure, clean, everything under control?”
Becker considers insufficient security standards, natural disasters, terrorist attacks and emergencies caused by the deterioration of the German nuclear plants’ security systems as major threats to the industry.
“there are no appropriate accident management plans.” she added Becker. “The interim [nuclear waste] storages lack protection against aircraft crashes and dangers posed by terrorists,” Becker said,
The media in Belgium concentrate on the initial thoughts of the terrorists to hit the nuclear plants. If it weren’t for the arrest in Paris, these thoughts would have been made reality and the casualties would have been even greater. Dernier Heure, a newspaper from Belgium, revealed that the terrorists had planted a camera in front of the house of the director of the Belgian nuclear research program. In this way, they had gained a lot of information.
All these events have made a lot of people skeptical as to the importance of shutting down nuclear plants. The head of BUND, Hubert Weiger, has said:
“It is even more necessary than ever to abandon this technology,” and this thought reflects the opinions of thousands in Germany, Belgium and Europe altogether.
AP has reported that IS (or ISIS) has been training hundreds of people especially for external attacks and this would be a threat beyond any control. About 450 people are specials in creating bombs, deteriorating the situation for Europe.
If people in Germany and Belgium do not take immediate actions, who knows what can happen next?
Ali Qamar is an Internet security research enthusiast who enjoys “deep” research to dig out modern discoveries in the security industry. He is the founder and chief editor at Security Gladiators, an ultimate source for cyber security. To be frank and honest, Ali started working online as a freelancer and still shares the knowledge for a living. He is passionate about sharing the knowledge with people, and always try to give only the best. Follow Ali on Twitter @AliQammar57.
(Security Affairs – Terrorism, nuclear plants) | <urn:uuid:250b93ed-fd22-4b4e-a412-d713cde6365a> | CC-MAIN-2022-40 | https://securityaffairs.co/wordpress/45724/breaking-news/germany-nuclear-plants-terrorism.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00704.warc.gz | en | 0.957545 | 679 | 2.765625 | 3 |
Depending on how you want to categorize them, there are several different types of APIs, and they have different scopes, benefits, and intended audiences, which makes each of them uniquely suited for different purposes.
API stands for Application Programming Interface. APIs contain a collection of actions (or requests and responses) that developers can access. The API also explains what it accomplishes, like “Save as” for example. Finally, the API contains the information developers need to structure those requests and responses.
It sounds complicated, but breaking all of it down can help. So, what are the different types of APIs available? Let’s take a look at how they differ.
Four types of APIs
APIs come in different shapes and sizes, giving developers the flexibility to choose the type of APIs that best suits their purposes. A popular distinction is to categorize them by their intended audience, which gives us the following three categories: Open APIs, Partner APIs, and Internal APIs. I’ll also add a bonus category, Composite APIs, which doesn’t quite fit neatly into any of these categories.
Of course, there isn’t just one way to categorize APIs: you could sort them by business use, or by vertical or technical type, or also by protocol (SOAP, REST, Async, GraphQL, etc.). Today, let’s talk about types of APIs by audience.
Public APIs, also called external or open APIs, are publicly available to developers and other users with minimal restriction. They may require registration, an API Key, or OAuth. Some may even be completely open – in fact, while the terms public and open are often used interchangeably, not all public APIs are open. (And to make matters even more confusing, Open API and OpenAPI are two different things!) When we look at them in terms of intended audience, public APIs focus on external users to access data or services.
What are partner APIs?
Partner APIs are APIs exposed by/to the strategic business partners. They are not available publicly and need specific entitlement to access them. Like open APIs, partner APIs are the tip of the iceberg because they are the most visible ones and are used to communicate beyond the company’s boundaries.
They are usually exposed to a public API developer portal that developers can access in self-service mode. While open/public APIs are completely open, there is an onboarding process with a specific validation workflow to access partner APIs.
How do internal APIs work?
Internal APIs, aka private APIs, are hidden from external users and only exposed by internal systems. Internal APIs are not meant for consumption outside of the company but rather for use across internal development teams for better productivity and reuse of services.
A good governance process involves exposing them to an internal API developer portal that connects to the internal IAM systems to authenticate and allow users to access the right set of APIs. And as François Lascelles points out, the distinction between internal/external, private/public can be cause for grief when it comes to security, which is why zero trust – treating all APIs as if they might be exposed – is a stronger approach to API security.
Why you might need a composite API
Composite APIs combine multiple data or service APIs. They are built using the API orchestration capabilities of an API creation tool. They allow developers to access several endpoints in one call. Composite APIs are useful, for example, in a microservices architecture pattern where you need information from several services to perform a single task.
Data and service APIs
Beyond the difference between internal, partner, open, and composite APIs, we should mention another approach to categorizing APIs:
- Data APIs provide CRUD access to underlying data sets for various databases or SaaS cloud providers. These APIs are needed to serve some fundamental data coming from SaaS applications, with help from SaaS connectors or internal data stores. Legacy portals, for example, where the login and password are saved in the web.config file, are one of the most common examples.
- Internal service APIs expose internal services, reflecting parts of internal processes or some complex actions.
- External service APIs are third-party services that can be embedded in the company’s existing services to bring additional value.
- User experience APIs leverage composite APIs to help app developers provide the right experience for each device type (desktop, mobile, tablet, VPA, IoT).
As you can see, there are many options available.
Types of API protocols
To leverage these different types of APIs, we must follow specific protocols. A protocol provides defined rules for API calls. It specifies the accepted data types and commands. Let’s look at the significant protocol types for APIs:
1. REST API
REST (short for Representational State Transfer) is a web services API. REST APIs are crucial for modern web applications, including Netflix, Uber, Amazon, etc. For an API to be RESTful, it must adhere to the following rules:
- Stateless—A REST API is, by nature, a stateless Client-Server Architecture
- Client-Server—The client and server should be independent of each other. The changes you make on the server shouldn’t affect the client and vice versa.
- Cache—The client should cache the responses as this improves the user experience by making them faster and more efficient.
- Layered—The API should support a layered architecture, with each layer contributing to a clear hierarchy. Each layer should be loosely coupled and allow for encapsulation.
APIs play a vital role in the development of any application. And REST has become the preferred standard for building applications that communicate over the network.
REST fully leverages all the standards that power the World Wide Web and is simpler than traditional SOAP-based web services. Unlike other APIs, it allows for a loosely coupled layered architecture to easily maintain or update them.
2. SOAP API
SOAP (simple object access protocol) is a well-established protocol, similar to REST in that it’s a type of Web API.
SOAP has been leveraged since the late 1990s. SOAP was the first to standardize the way applications should use network connections to manage services.
But SOAP came with strict rules, rigid standards were too heavy, and, in some situations, very resource-intensive. Except for existing on-premises scenarios, most developers now prefer developing in REST over SOAP.
3. RPC API
An RPC is a Remote Procedure Call protocol. They are the oldest and simplest types of APIs. The goal of an RPC was for the client to execute code on a server. XML-RPC used XML to encode its calls, while JSON-RPC used JSON for the encoding.
Both are simple protocols. Though similar to REST, there are a few key differences. RPC APIs are very tightly coupled, making it difficult to maintain or update them.
To make any changes, a new developer would have to go through various RPCs’ documentation to understand how one change could affect the other.
4. Event-driven APIs, aka asynchronous APIs
In the last several years, event-driven APIs have gained steam because they offer an excellent solution for some specific pain points and use cases in our always-on, data-heavy world.
Event-driven APIs differ from REST APIs because of the way they transmit information in quasi real-time. They are particularly helpful in cases like stock market trackers, which require constantly updated data, or IoT devices which monitor real-time events. For this type of data, using a REST architecture would require constant and onerous back-and-forth requests to a server – much like a child asking “are we there yet?” in the backseat of the car on a road trip.
An event-driven architecture (EDA) allows the source to send a response only when the information is new or has changed. There are a few ways to achieve this result, and some popular event-driven API interaction patterns are Webhook, Websocket, and streaming.
APIs are digital building blocks for your business
Regardless of what types of APIs you use, they are game changers because they serve as building blocks for modern digital solutions.
Packaging up discrete digital capabilities as APIs makes it possible to recombine things more quickly, giving companies the flexibility to build new products and services out of existing APIs, contribute new capabilities as building blocks to the platform, and improve solution space by making all capabilities available for reuse.
There’s no way around it: APIs are critical to your business. They’ll allow you to integrate new applications with your existing software. They allow you to innovate without changing or rewriting code. They act as a gateway between systems which will enable you to expand the digital experiences you offer your clients at any given time.
And crucially, with the right business vision for your APIs, they can drive extraordinary results.
Discover our guide to navigating the API landscape so you can create brilliant digital experiences, reach new markets, and achieve goals for digital business growth. | <urn:uuid:6f94f563-6e00-4820-876e-71ca93aa2335> | CC-MAIN-2022-40 | https://blog.axway.com/learning-center/apis/basics/different-types-apis | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00704.warc.gz | en | 0.923716 | 1,938 | 2.71875 | 3 |
Face recognition is an advanced feature on specific home security cameras that allows you to create a database of people that come to your property frequently. Whenever the camera detects a person’s face, it examines to see if it matches one of the faces on your list of recognized faces.
If the face recognition system cannot determine who will be at the doorway, it will alert you someone unknown has accessed your place. The Solar-powered surveillance camera advanced facial recognition software detects known faces automatically, enhancing security and reducing false alarms.
Depending on many factors, such as sunlight and hairdo, the system can measure differently whether you wear sunglasses a day or not the next. On the other hand, Artificial intelligence is getting better day after day, so the more face information that enters the system, the smarter the facial recognition software becomes.
Though one point is certain: this feature has become more widespread in these products, not only in home security cameras but also in cellphones and as productivity aids for airport check-in automation. As law enforcement invests more in facial recognition technology, as police enforcement invests more in face recognition technology, serious questions about surveillance, privacy, and civil rights raise across the board, prompting calls for government oversight.
Facial recognition already uses in smartphones
Smartphones can unlock simply by displaying your face. Facebook and Google Photos, for example, can identify who is in a photo. Surveillance is the next step in facial recognition, and it’s already available on specific home cameras and in some commercial security operations.
But first, let’s consider the consumer market. People who like to be at the forefront of the innovation of smart home development continue to be intrigued by the idea of merging surveillance cameras with face recognition.
What is the importance of facial recognition in home security?
A face recognition technology uses an image or video to map the shape of your visual appearance, including the distance between your jawline and your skull. A “face print,” which is similar to a fingerprint, is what this calls. After that, the information is kept and compared to a database of millions of other people’s faces.
Security applications for facial recognition are precise. A security team can respond quickly to suspected, aggressive, or forbidden persons using a surveillance system with advanced facial detection.
A powerful video management system, such as SAFR, can put the pieces together after a crime or security breach.
In terms of home security, face detection in security cameras may create a personal database of guests who visit your home frequently. In this way, your camera can detect if someone who shouldn’t be on your property is?
When it comes to essential facial recognition, the Nest Hello, Nest Cam IQ Indoor, and Nest Cam IQ Outdoor come out on top by a wide margin. Because it’s the cheaper of the three and has the highest possibility of giving you essential information over who’s outside your front entrance, I suggest the Nest Hello for facial recognition.
The Nest IQ Indoor Camera could even inform you who is still inside the house. However, the Hello and the IQ Outdoor Cam can tell you who else is outside. The eye-level location of the Hello doorbell gives it the best opportunity to monitoring and seeing the most guests.
The facial recognition technology used by the Hello and other face-tracking Nest cams is not accessible. To use facial recognition, you’ll need to join up for the Nest Aware cloud subscription service.
The Nest Hello is still a strong contender for the best video doorbell overall. If you are using face recognition software or not, it’s a win scenario.
Tend Secure Lynx
The Tend Secure Lynx is about $60. Given this, I was suspicious of the camera’s capacity to deliver, but it did. Not only does the camera work admirably and come with a slew of advanced features, such as free seven-day occasion video clip storage, but it also comes with free face recognition software.
Once you’ve built up a database of known faces, the Lynx will take command. There is a slight learning curve as it gets used to each face, but this is an excellent option if you’re looking for a low-cost interior security camera with good facial recognition.
Nest Cam IQ Indoor
The Nest Cam IQ Indoor is a bell with comparable functionality as the Nest Hello. If you subscribe to Nest Aware, it has face detection and can accurately inform you who walks at the front of the camera’s range of view.
It does, though, come with so many additional benefits. Because it’s an indoor camera, Nest included a Google Assistant speaker. That means the camera can act as a Google Home speaker, answering basic questions like the weather or traffic in your neighborhood, as well as controlling many Google-Assistant-enabled intelligent home gadgets.
At first glance, the Netatmo Welcome appears to be an expensive camera. However, considering that this camera can identify known faces (rare) and offers several storage options, including free Dropbox online storage and internal SD card storage, the pricing becomes hugely appealing.
Image quality meets 2021 standards with Full HD 1080p resolution. Night vision and day vision provide a crisp, detailed image that allows you to see faces and zoom within the camera’s field of view.
How to test cameras?
When you test a camera with a facial recognition feature, you create specific people profiles by either taking a picture of them in real-time and uploading it to the system or using an existing photo of them. As a result, the face recognition camera must be capable of recognizing human characteristics in different sorts of movement and compare them to your collection of identified faces. You’ll get an alert saying the camera found “John,” “Anna,” or anyone in your database if it’s working correctly.
Receiving an alert when your children return home from school or when a dog walker or family care comes are just a few examples of applications for this type of functionality. It gives you peace of mind when you’re expecting someone to arrive and want an automated alert to let you know they’ve come (especially if you’re not home to greet them).
Yet, because cameras can differentiate between faces it knows and those it can’t understand, it can be helpful in security scenarios. Assume your camera notifies you to the presence of someone on your front doorstep or entering your home. Even yet, rather than trying to filter through lots of general movement warnings to discover the activity, you don’t recognize them. Within this situation, in the event of a break-in or theft, you may send information to law enforcement officers more immediately.
The best way to test a camera with facial recognition is to create a database, what we do. Just enter people into your database and trust the camera with the rest. It’s best to give these cameras at a minimum only a few days as, even though they observe faces from multiple angles, some of them improve significantly in a short time.
Then it’s an issue of determining how effectively the camera recognized faces in the first place. How many times did it properly recognize my face as opposed to someone else’s? How did it fare when viewed from multiple perspectives and with varied hairstyles and wardrobe accessories? Was it possible for the camera to recognize faces at all? Even those who claim to have facial recognition software have problems spotting faces and instead designate the behavior as a severe motion alarm.
What about errors and privacy act?
While facial recognition technology has shown to be effective in the past, it has received mixed reviews. For one thing, it has proven to be untrustworthy. In 2019, the Metropolitan Transportation Authority of New York City conducted a large-scale experiment that was a failure. The present generation of home devices can be fooled or make mistakes (for example, they might not recognize Grandma without her spectacles or the babysitter with a different hairdo).
Perhaps more crucially, they call into doubt a person’s right to privacy. For instance, the Shelby American Car Collection in Las Vegas has now adopted the SAFR system. The museum’s technical director, Richard Sparkman, commended the system’s security features and its potential as a customer data collector.
Face recognition also poses ethical concerns to the home, as Molly Price of CNET points out. On the internet and in law, the collection of data, particularly biometric data collection, is a contentious subject. Many people believe that customers and private citizens can choose how their personal information is using and retained. Face recognition in public and private security cameras (that keep data in the cloud or on a remote server) would be a breach of that right.
Facial recognition is only a tiny part of the artificial intelligence technology that is about to become commonplace. We’ll have to decide whether safety and convenience should take precedence over privacy and individual liberty in our homes and the public. | <urn:uuid:4fed4949-b362-4147-b6c7-8af66252af50> | CC-MAIN-2022-40 | https://www.bayometric.com/surveillance-cameras-used-for-facial-recognition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00104.warc.gz | en | 0.937384 | 1,873 | 2.734375 | 3 |
Quantum Internet Initiatives Around the World
(PrivateInternetAccess.Blog) Efforts around the world are being made to create quantum networks, and ultimately a quantum Internet. It’s interesting how quickly what was regarded by many as extremely speculative work on quantum computing is turning into a practical solution to one of today’s most pressing problems – the preservation of privacy as data is transmitted across networks.
The USA’s DOE’s Office of Advanced Scientific Computing Research hosted a Quantum Internet Blueprint workshop in February this year to define a potential roadmap towards building the first nationwide quantum Internet. The workshop’s report explains: “The quantum Internet’s ability to deliver the ultimate in secure communication would be a central application. While Quantum Key Distribution currently is the main research focus that underpins secure quantum communication, it is the information exchange over a quantum channel – with its ability to detect any interception – that offers the ultimate in secure communication. Early adopters for such future solutions will be found in areas such as national security, banking, and energy delivery infrastructure. . . . ”
The US is not the only country ramping up its efforts to build a quantum Internet. In Europe, there is the European Quantum Internet Alliance. Since the quantum Internet does not exist yet, it is hard to write applications for it. To get around this problem, the European Quantum Intrnet Alliance has created a quantum Internet simulator, called SimulaQron, which is freely available for anyone to download and use:
SimulaQron provides a distributed simulation of several quantum processors, connected by a simulated quantum communication channels. Each local quantum processor is accessible via a server running on classical computer. In the background, SimulaQron will connect these servers using classical communication to simulate the exchange of qubits and the creation of entanglement between distant processors. . . .”
Japan is working on a quantum network that will link 100 quantum devices and 10,000 users around the world. The lead contractor is Toshiba, whose research laboratory in Cambridge has already created a small metropolitan-scale quantum network in the UK, which runs over ordinary fiber optic cables.
China is and has been very active in this area. Back in 2017, it built the Quantum Secure Communication Beijing-Shanghai Backbone Network, the world’s first long-distance quantum-secured communication route. According to Jianwei Pan, professor at the University of Science and Technology of China, and one of the leading researchers in the field of quantum networks, in 2018 there were more than 150 users of the Chinese network. Applications included banking, financial services and insurance. | <urn:uuid:79327cf1-55ca-4ce9-9d90-ff5f89431cf3> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-internet-initiatives-around-the-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00104.warc.gz | en | 0.923433 | 538 | 2.734375 | 3 |
Harold Kilpatrick, cybersecurity consultant
Hopefully, you’re already aware of how important your privacy and security is online. Cybercrime is growing year on year and is a huge threat to not just big businesses or major targets but also personal internet users like you. If you’re not taking your online security seriously, you should be.
The reality is that even if you think you’ve taken the right steps to protect your data online you still might not be 100% secure. That’s especially true when you consider how rapid and radical technological changes are. It can be hard to keep up.
There’s one other thing making it difficult to stay secure: misinformation. There are tons of advice and info from so-called “experts” that actually turn out to be nothing but myths. You’d be surprised how many people still believe outdated or completely false information about online security. Let’s make sure you’re not one of them.
Private browsing modes really keep you secure
Do you know those private, or incognito buttons your browser has? They might not be as secure as you think. These aren’t meant to be high-end privacy features; they’re just simple options to keep your history clear and prevent sites from tracking you easily with cookies.
That doesn’t mean these modes aren’t useful—they still are. But they won’t keep you super-secure if that’s what you’re using them for. Sites will still be able to track your IP address and collect other data even if they can’t send you cookies.
If your Facebook profile is private, it’s completely secure
Facebook doesn’t have a “one-fix-all” feature for ultimate security. There are actually tons of different settings designed to give you as much control over your privacy as possible. Even so—however private you set it—certain info like your name and gender will always be visible to everyone. Not only that, but any apps you add might will be granted extended access to your friends list and other information.
It’s important that you take your Facebook security seriously and take time to go through all the settings carefully. Try checking your other social media accounts too. It’s not just Facebook that could be a risk for your privacy.
Have a look at this for a few tips on tweaking your Facebook privacy.
All VPNs are absolutely secure
You might think that if you’ve got a VPN, you’re completely secure – but that simply isn’t the case. Some VPNs have stronger encryption protocols than others, so it’s important that you shop around for the right one.
Where your VPN is located is also important. Some countries require VPNs to collect and store certain information about users as well as log browsing data. If you want a VPN that doesn’t keep any of your usage data or history, you need one with a no-logging policy, based in a country that requires no data retention.
You’ll often find that many free VPN services aren’t as secure as they might seem. Yes, you’ll be able to switch your browsing location or appear as if you’re visiting sites from another country, but all your usage on their network might not be secure. That’s especially true if they sell your data or insert ads to make money. You’ll also want to make sure the VPN you choose has the best encryption facilities and other features to keep you as secure as possible.
Encrypted traffic is completely secure
While encrypting your own traffic and data is important, you can’t be sure about the security of any other website once the sensitive information leaves your device. In other words: you might have kept your password for a certain account secure, but how do you know that actual site is safe itself?
You’ve probably seen cases where sites have been hacked, and users’ information have been compromised. These weren’t stolen from individual users – they were taken from the actual main site in question, with the data of thousands (or millions) of users stolen.
How can you protect yourself against this? Firstly, only sign up to sites you trust. If you aren’t sure, then use a throwaway email address or made-up details. Don’t use the same email address for random offers and sites that aren’t important as you do for the important stuff.
It can be difficult to manage but use as many different passwords as possible. You don’t want one site’s inability to keep your info secure to suddenly give a hacker that password you’ve been using for every site.
Privacy is guaranteed with a VPN and antivirus
While a VPN and anti-virus are important first steps towards securing your privacy—they won’t keep you 100% secure. It’s important you remember that. First, you need to make sure your VPN is as private and secure as you think they are. Second, keep your antivirus up to date. New threats emerge constantly so it can be difficult staying ahead of the curve.
Unless I’m a millionaire or a big corporation, nobody wants to hack me
You might think that hackers only want to hack into wealthy people’s bank accounts, but the opposite is often true. Wealthy people and businesses can afford to hire cybersecurity experts or cyber-crime investigators to make hacking them a headache. Many hacks involve collecting the data of thousands of people and seeing whose bank accounts the hacker can access. If you’re one of fate’s unlucky ones, they’ll try to take your money, no matter how much or how little you have.
Smartphones are safe from being hacked
Android smartphones are safer than PCs and iPhones are even safer than Androids. No! Hackers keep up with the latest technology, so they can definitely get malware into your smartphone if you’re not careful. In fact, sometimes you can be even more vulnerable because smartphone users often connect to public Wifi networks. Insecure public Wifi is one of the most dangerous things to connect to from any device, and you’re more likely to do it from your smartphone. Watch out!
Hopefully, we’ve now busted a few internet privacy myths, so you can start taking your security more seriously.
Harold Kilpatrick is a cybersecurity consultant who also freelances as a blogger. Harold lives in New York. | <urn:uuid:cd9dedc1-7215-4bae-b063-a5643ef09c23> | CC-MAIN-2022-40 | https://bdtechtalks.com/2018/06/08/privacy-myths-challenges-online-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00104.warc.gz | en | 0.945964 | 1,370 | 2.609375 | 3 |
ANTS, a PANDA and a ROACH are talking to each other in a Lab.
It may sound like the beginning of a children’s story, but it’s a real scenario at our 5G Labs in Austin. And these first-of-their-kind, high-tech critters are helping us explore the vast and unchartered potential of 5G.
Why would we want ANTS in our lab?
When we began working with millimeter wave in 2011, little was known about how mobile networks would perform in such high frequency bands. We had deep knowledge of traditional cellular spectrum from decades of experience spanning four generations of our mobile wireless network, as well as its primary applications in talk and text. But mmWave was fundamentally different. From the way it interacts with the environment to the design of its physical layer and antenna hardware, mmWave is a whole new frontier. And so is its potential.
Millimeter wave’s ultra-high speed and capacity and ultra-low latency, particularly when combined, can enable applications that would’ve been unimaginable with previous mobile networks. But to unleash its potential, we had to better understand how mobile 5G networks would operate in this spectrum. We needed a way to watch it in action before it was deployed.
So, as we’ve often done in our 143-year history, we became pioneers. We built a first-of-its-kind mmWave prototyping and testbed platform.
We debuted our proprietary Advanced 5G NR Testbed System (ANTS) in 2018. It’s a 5G RAN solution designed and built entirely in-house at our 5G Labs in Austin using white-box, off-the-shelf hardware components, as well as modular software that aligns with our O-RAN Alliance initiatives. Although developed for two-way communication, the system can also be used for sensing radio waves through channel sounding. This flexibility enabled an important first step in understanding mmWave – being able to map it.
Channel sounding is like a sonar that uses radio waves instead of sound waves. It allows us to create a detailed 3D heat-map that shows where signals are coming from. Based on this data we can model the mmWave channel, a technique widely used to design equipment and 5G standards protocols as well as plan and optimize 5G networks.
One of the first channel sounders we built, affectionately called “the porcupine” due to its spikey horn antenna array, tracks and measures how mmWave signals interact with objects like trees, buildings, cars and people. When we built it in 2017, it was one of the world’s fastest omni-directional channel sounders, capable of taking 360 measurements every 150 milliseconds. Having these insights was key in moving our early experimentation from the lab to field trials, long before we launched our commercial 5G network.
The more we learned, the better ANTS solutions we were able to build.
As our first-generation testbed system, the porcupine’s size and weight weren’t ideal for transporting and testing in most mobile environments. So, we created a smaller solution called the Path-Loss and Network Data Analyzer (PANDA).
PANDA was our first truly portable channel sounding unit and has been useful in exploring mmWave in the vehicle-to-vehicle (V2V) channel. Little is known about how mmWave will interact between two vehicles moving at high rates of speed, so we used PANDA to take some of the world’s first V2V mmWave channel measurements. Many of the resulting models we’ve developed have been adopted as industrywide 5G standards and are providing important insights into V2V mmWave frequencies that will open up exciting use cases for 5G, including self-driving cars.
Even with PANDA’s advanced capabilities, we saw opportunities to further improve the reliability of our models with even more data, and developed an even faster, more portable and more powerful unit. This is our Real-Time Omni-Directional Channel Sounder (ROACH).
ROACH was designed as part of our outdoor testbed, to be mounted on a vehicle as a zero-touch solution. It has multiple cameras, GPS and four phased arrays that continuously sweep for mmWave beams and collect up to 6,000 measurements per minute as we drive through target areas. We have plans to synch this data with VR goggles, which will enable us to do things like explore network resources, identify the source of poor or unexpected signal strength or identify optimal locations to place network equipment.
As we continue to deploy 5G, having ROACH installed on our vehicles in the field could enable us to monitor network resources and explore the 5G ecosystem more deeply than is practical or imaginable right now. And that’s just the beginning of the possibilities.
Currently, we’re focused on building a more expansive outdoor testbed, where we can collaborate with other industry leaders on the opportunities around vehicle-to-everything (V2X) applications. And we’re looking at indoor testbed expansions as well, and the possibility of connecting ANTS to new equipment to explore new ways mmWave can be applied in customer environments.
The importance of ANTS.
What we’re learning through ANTS is helping lay the foundation for 5G. Not just at AT&T, but across the industry.
As we discover and innovate around mmWave, we’re sharing key findings with the organization setting the global specifications for 5G, the 3rd Generation Partnership Project (3GPP). Some of those insights and techniques have already been incorporated into the 3GPP standards that are providing the foundation of how 5G will be built industrywide. These contributions are possible because ANTS is providing the understanding we need for innovation to flourish.
ANTS is currently enabling our team to explore new concepts in network coding, multi-point transmission and compressed beam scanning that could allow mmWave to provide ultra-reliable communication at high mobility speeds. Conventional wisdom has been that mmWave may not be the ideal choice for reliable mobile solutions given some of the characteristics of its signals. At high frequencies, signals can be hindered by obstacles and tend to travel short distances. But where we see challenges, we also see opportunities to innovate. And ANTS is where these pioneering innovations take their first baby steps.
So we welcome ANTS in our lab, along with the porcupine, PANDA and ROACH. They’re helping us build the high performance, highly reliable 5G network we’re deploying today. And they’ll continue to provide the learning playground we need to explore the new customer applications of tomorrow. | <urn:uuid:760c96ac-193c-41e4-8d30-4d9b2dd8b993> | CC-MAIN-2022-40 | https://about.att.com/innovationblog/2019/06/ants_and_5g.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00104.warc.gz | en | 0.946845 | 1,398 | 2.765625 | 3 |
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss Access Lists.
Access lists are one of the more difficult concepts to master for your Cisco CCNA exam. It is hard to simulate the different types of scenarios that may be covered on your Cisco CCNA exam in a lab so you can really get a handle on access lists. But it is not impossible. We are going to cover the major concepts that you will encounter on your Cisco CCNA exam below.
Access lists are used to filter network traffic on your Cisco router. This can be accomplished by using various different access lists implemented on different parts of your network to improve network performance and efficiency. You may see access used to implement QoS on a network to make sure your time sensitive data such as voice gets through while an email can be slightly delayed. Access lists can filter your routing protocols to control what networks are advertised. Additionally, access lists can be used to implement dial on demand routing and also to govern NAT or Network Address Translation for your Internet activities.
Packet Filtering Mechanism
On your Cisco router incoming or outgoing packets are compared to the access list from top to bottom until a match is found. Then an action is taken accordingly with no further comparisons. For security reasons, an implicit deny statement is added at the end of each access list. By default, if no match found, the packet will be dropped by that deny statement. You can only assign one access list per interface.
Two types of Access Lists
Standard: For IP, it filters traffic based on source address. A standard IP access list is placed as close to the destination is possible because filtering is based on source address.
Extended: Filters traffic based on source and destination address, plus protocol and port for IP. Extended IP access lists are placed as close to the source as possible, because filtering is based on both source and destination addresses.
Standard IP Access List Configuration:
Router(config)# access-list number (deny | permit) source_ip
Standard IP access list syntax.
Specifying a source: (host ip_address | any | wildcard)
This statement will deny a single host:
Router(config)# access-list 10 deny host 10.10.10.5
This statement will deny any host:
Router(config)# access-list 10 deny any (any = all hosts)
This statement will deny the entire 172.16.10.0 subnet
Router(config)# access-list 10 deny 172.16.10.0 0.0.0.255
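A quick note on the wildcard mask used above: 0.0.0.255 tells the router to ignore the last octet, so the statement matches every host from 172.16.10.0 through 172.16.10.255. A wildcard of 0.0.0.0 matches a single host, which is why "host 172.16.10.5" and "172.16.10.5 0.0.0.0" are equivalent ways of writing the same thing.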
This statement assigns access list 10 to an interface
Router(config-if)# ip access-group 10 (out | in)
This statement assigns an access list to a vty line.
Router(config-line)# access-class 10 (out | in)
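Putting these pieces together, here is one way a complete standard access list could look. The addresses and interface name are hypothetical, chosen only for illustration: the list permits one trusted host, denies the rest of its subnet, and then permits everything else ahead of the implicit deny.

Router(config)# access-list 10 permit host 192.168.1.10
Router(config)# access-list 10 deny 192.168.1.0 0.0.0.255
Router(config)# access-list 10 permit any
Router(config)# interface fastethernet 0/0
Router(config-if)# ip access-group 10 out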
Extended IP Access List Configuration
Router(config)# access-list number (deny | permit) protocol source [port] destination [port] [log]
Extended IP access list syntax.
Specifying a port: (eq | gt | lt | neq | range port_number)
The following denies telnet (port 23) to host 172.16.10.5.
The log statement tells the router to log a message to the console every time the access list is matched.
Router(config)# access-list 110 deny tcp any host 172.16.10.5 eq 23 log
The following assigns access list 110 to an interface
Router(config-if)# ip access-group 110 (out | in)
This statement allows IP traffic to pass before hitting the default implicit deny statement.
Router(config)# access-list 110 permit ip any any
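As an end-to-end illustration (the addresses, ports, and interface name here are hypothetical), the following extended access list permits web traffic to a server, denies telnet from one subnet, permits all other IP traffic, and is applied inbound on an interface:

Router(config)# access-list 110 permit tcp any host 172.16.10.5 eq 80
Router(config)# access-list 110 deny tcp 192.168.1.0 0.0.0.255 any eq 23
Router(config)# access-list 110 permit ip any any
Router(config)# interface serial 0/0
Router(config-if)# ip access-group 110 in

You can verify your work with the show access-lists command, which also displays per-line match counters, and with show ip interface, which shows whether an access group is applied to an interface.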
We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career. | <urn:uuid:c18ac449-035c-4848-a154-042681362e36> | CC-MAIN-2022-40 | https://www.certificationkits.com/access-lists-cliff-notes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00304.warc.gz | en | 0.837295 | 847 | 3.40625 | 3 |
A team of scientists led by researchers at Stanford University has developed blood tests that can predict premature births and track the progress of pregnancies. The tests could help reduce problems related to premature birth, which affects 15 million infants worldwide each year.
The system can also be used to estimate a baby's gestational age, or the mother's due date, as reliably as, and more cheaply than, ultrasound.
Stephen Quake, PhD, professor of bioengineering and of applied physics at Stanford, shares senior authorship with Mads Melbye, MD, visiting professor of medicine. The lead authors are former Stanford postdoctoral scholar Thuy Ngo, PhD, and Stanford graduate student Mira Moufarrej.
A high-resolution view of pregnancy
The tests measure the activity of maternal, placental and fetal genes by assessing maternal blood levels of cell-free RNA, small pieces of the messenger molecule that carries the body's genetic instructions to its protein-making factories. The team used blood samples collected during pregnancy to identify which genes gave reliable signals about gestational age and prematurity risk.
Premature birth, in which an infant arrives at least three weeks early, affects 9 percent of U.S. births. It is the biggest cause of infant mortality in the United States and the biggest contributor to death before age 5 among children worldwide. In two-thirds of preterm births the mother goes into labor spontaneously, and doctors generally don't know why. The best available tests for predicting premature birth work only in high-risk women, such as those who have already delivered prematurely, and are correct only around 20 percent of the time.
A low-cost alternative to ultrasound
Doctors also need better methods for estimating gestational age. Obstetricians currently use ultrasound scans from the first trimester of pregnancy to estimate a woman's due date, but ultrasound gives less reliable information as pregnancy advances, making it less helpful for women who do not get early prenatal care. Ultrasound also requires costly equipment and trained technicians, which are unavailable in much of the developing world. In contrast, the researchers expect that the new blood test will be simple and cheap enough to use in low-resource settings.
The gestational-age test was developed by studying a cohort of 31 Danish women who gave blood weekly throughout their pregnancies. The women all had full-term pregnancies. The scientists used blood samples from 21 of them to build a statistical model, identifying nine cell-free RNAs produced by the placenta that predict gestational age, and validated the model using samples from the remaining 10 women. The estimates of gestational age given by the model were accurate around 45 percent of the time, comparable to the 48 percent accuracy of first-trimester ultrasound.
Predicting premature births
The researchers used blood samples from 38 American women who were at risk for premature delivery because they had already had early contractions or had given birth to a preterm baby before. These women each gave one blood sample during the second or third trimester of their pregnancies. Of this group, 13 delivered prematurely, and the remaining 25 delivered at term. The scientists found that levels of cell-free RNA from seven genes from the mother and the placenta could predict which pregnancies would end early.
The researchers need to validate the new tests in larger cohorts of pregnant women before the tests can be made available for widespread use. A blood test to detect Down syndrome that was developed by Quake's group in 2008 is now used by more than 3 million pregnant women per year, he noted.
In the meantime, the researchers plan to study the roles of the genes that signal prematurity to better understand why it happens. They also hope to identify targets for drugs that could delay premature birth.
The IT world is built around acronyms, terms, phrases and sayings, and even though each definition has a purpose, the lingo is used incorrectly more often than not. The main reason is that job postings, marketing publications and blog posts are often written by people who do not have IT backgrounds. So on the surface interchanging one word may look the same, but it can change the meaning greatly.
Two terms in the security space that are constantly flipped back and forth are Information Security and IT Security. On the surface it is all security, and some places may be posting for IT Security folks while others are looking for Information Security folks, but the job descriptions and responsibilities don't match.
When you think about the people that are looking for new opportunities, reading blog posts for research or looking for new business partnerships, those people do have the IT background and experience. If you are not writing to them they will see that as a lack of knowledge, not take you seriously, and move on.
Here’s the best way to differentiate Information Security and IT Security.
IT Security – Also known as Computer Security or Cybersecurity is the protection of the physical information systems from theft, damage to hardware, software, and preventing the disruption of services. It includes physical access controls, network access controls, and operational controls. Simply – Technical and tactical.
Information Security – Also known as InfoSec is the practice of defending information from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction. Where the information resides is irrelevant, InfoSec scope is all information whether stored on a system or in physical/paper form. Simply – Procedural and Strategic.
| Information Security | IT Security |
| --- | --- |
| Security Frameworks | Hardware Hardening |
| Policy Development | Incident Response |
| Security Awareness Training | Firewalls |
| Risk Analysis | Vulnerability Scans |
| Data Privacy | Penetration Testing |
| Regulatory Compliance | Access Controls |
| Governance Models | Network Security |
| Enterprise View | System View |
As you can see Information Security and IT Security are not the same and when the terms are used interchangeably the experienced IT communities will see right through that.
Within your enterprise security program it is also important to understand where IT Security ends and Information Security begins, since skill sets, processes and cross-functional departments all benefit from knowing which approaches are tactical and technical and which are procedural and strategic.
End of Line.
Binary Blogger has spent 20 years in the Information Security space currently providing security solutions and evangelism to clients. From early web application programming, system administration, senior management to enterprise consulting I provide practical security analysis and solutions to help companies and individuals figure out HOW to be secure every day. | <urn:uuid:4a673095-697e-40bc-82c1-5f4cf0ccca65> | CC-MAIN-2022-40 | https://binaryblogger.com/2015/12/20/information-security-vs-it-security-they-are-different/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00304.warc.gz | en | 0.91419 | 619 | 2.65625 | 3 |
Massive Open Online Courses, or MOOCs, are served up by a variety of companies, organizations and educational institutions, including Coursera, Udemy, Iversity, EdX and many others. Together, they’re garnering millions of students from around the world.
People involved in the MOOC business tend to be passionate about what their work means for students and for education.
“Anyone who wants an education should be able to get it,” Andrew Ng, co-founder and co-CEO of Coursera, told TechNewsWorld.
“Online education in general, and MOOCs in particular, hold great potential for higher education,” Hannes Klöpper, managing director of Iversity, told TechNewsWorld. “We strive to unleash this transformative potential.”
MOOCs in Context
MOOCs are an outgrowth of online education that has been offered in various forms by most educational institutions for more than a decade. The difference with MOOCs, however, is a matter of scale. Instead of 30 students taking an online course through a university, millions of students around the world can log in, participate, and in some cases even earn college credit for the course.
“MOOCs are a node in the trajectory of online and open educational resources and as such are not themselves changing the nature of higher education,” Michael Nanfito, executive director of the National Institute for Technology in Liberal Education, told TechNewsWorld.
“It is critical to contextualize consideration of MOOCs in the environment in which they have emerged and are evolving. MOOCs — in all their emerging iterations — will contribute to a pre-existing trend to integrate online technologies into the fabric of higher education.”
In With the New, but Not Out With the Old
Despite their wide accessibility and increasing popularity, MOOCs are not likely to replace traditional classrooms any time soon. Rather, they simply serve different populations and different educational goals.
“MOOCs and traditional classrooms are different,” said Ng. “One advantage of MOOCs is their convenience. Eighty percent of students already have a bachelor’s degree.”
MOOCs work particularly well for adult students, who might not be able to quit their jobs and head back to school to learn new skills.
“If you’re a working professional and want to learn something new, it’s a challenge to go to a class on Tuesdays and Thursdays,” said Ng. “MOOCs give more access to education. You have much more control over your learning.”
It’s not an either/or proposition, though. MOOCs and traditional education can co-exist and even complement each other. MOOCs and other online formats can provide a broad overview of or introduction to a subject, while classroom discussion offers the opportunity to debate, critique and analyze ideas.
“Most of the basics in introductory courses that are usually taught by overworked and underpaid adjunct faculty in big lecture halls can be taught much more effectively in a MOOC format,” said Klöpper.
“The time in class should then be used to contextualize the theories that the students have learned about,” he said. “When faculty and students come face-to-face, they should use their time to discuss the epistemological questions — the questions where there are no easy answers.”
A World of Possibilities
Some instructors enjoy, in particular, the freedom MOOCs offer to try new course formats and reach a broad array of students.
“I love being able to bring a world-class educational experience to my students at an affordable price,” Chris Bryant, who teaches computer certification courses through Udemy, told TechNewsWorld.
“For me, there’s no travel, no hotel, no cramped meeting rooms to teach in. It also gives me more time to develop additional courses. I have three new courses debuting on Udemy within the next six months. I couldn’t do that if I were always traveling to training sites.”
MOOCs are evolving, and it remains to be seen how useful they will be for degree-seeking students and the educational institutions that serve them. For the time being, MOOC providers are expanding their offerings and singing the praises of the one thing no one can deny about MOOCs: They provide easy access to high-level learning to a greater number of people around the world than ever before.
“Many of the MOOCs are breaking down the barriers of affordability and access to higher education by unlocking some of the great educational content and teaching from top global academic institutions,” Dennis Yang, president and COO of Udemy, told TechNewsWorld. “By providing free access to this quality content, students around the world can learn from some of the best faculty members in a personalized fashion.” | <urn:uuid:9f4cd51a-386b-4ab4-8042-223f4804da55> | CC-MAIN-2022-40 | https://www.crmbuyer.com/story/moocs-tearing-down-the-ivory-tower-78909.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00304.warc.gz | en | 0.957273 | 1,033 | 2.609375 | 3 |
How Famous Companies Got Their Names?
Here is how some of the global companies were named. You see these big company names almost every day, but do you know what they actually mean and how they were named? To help you get the knowledge, we made a list of some of the famous global company names. Scroll down the list and then tell us how many did you already know?
Nike: Named for the Greek goddess of victory. The swoosh symbolises her flight.
Skype: The original concept was ‘Sky-Peer-to-Peer’, which morphed into Skyper, then Skype.
Mercedes: This was actually the name of the daughter of Emil Jellinek, the financier who backed Daimler's cars.
Adidas: The company name was taken from its founder Adolf (ADI) Dassler whose first name was shortened to the nickname Adi. Together with first three letters of his surname it formed ADIDAS.
Adobe: This came from the name of the river Adobe Creek that ran behind the house of founder John Warnock.
Apple Computers: It was the favourite fruit of founder Steve Jobs. He was three months late for filing a name for the business, and he threatened to call his company Apple Computers if the other colleagues didn’t suggest a better name by 5 o’clock.
CISCO: It is not an acronym as popularly believed. It’s short for San Francisco.
Compaq: This name was formed by using COMP, for computer and PAQ to denote a small integral object.
Corel: The name was derived from the founder’s name Dr. Michael Cowpland. It stands for COwpland Research Laboratory.
Google: The name started as a joke boasting about the amount of information the search-engine would be able to search. It was originally named ’Googol’, a word for the number represented by 1 followed by 100 zeros. After founders - Stanford graduate students Sergey Brin and Larry Page presented their project to an angel investor; they received a cheque made out to ’Google’. So, instead of returning the cheque for correction, they decided to change the name to Google.
Hotmail: Founder Jack Smith got the idea of accessing e-mail via the web from a computer anywhere in the world. When Sabeer Bhatia came up with the business plan for the mail service, he tried all kinds of names ending in ’mail’ and finally settled for hotmail as it included the letters "html" - the programming language used to write web pages. It was initially referred to as HoTMaiL with selective uppercasing.
Hewlett Packard: Bill Hewlett and Dave Packard tossed a coin to decide whether the company they founded would be called Hewlett-Packard or Packard-Hewlett.
Intel: Bob Noyce and Gordon Moore wanted to name their new company ’Moore Noyce’ but that was already trademarked by a hotel chain so they had to settle for an acronym of INTegrated ELectronics.
Lotus (Notes): Mitch Kapor got the name for his company from ’The Lotus Position’ or ’Padmasana’. Kapor used to be a teacher of transcendental Meditation of Maharishi Mahesh Yogi.
Microsoft: Coined by Bill Gates to represent the company that was devoted to MICROcomputer SOFTware. Originally christened Micro-Soft, the ’-’ was removed later on.
Motorola: Founder Paul Galvin came up with this name when his company started manufacturing radios for cars. The popular radio company at the time was called Victrola.
Sony: It originated from the Latin word ’sonus’ meaning sound, and ’sonny’, slang used by Americans to refer to a bright youngster.
SUN: Founded by 4 Stanford University buddies, SUN is the acronym for Stanford University Network. Andreas Bechtolsheim built a microcomputer; Vinod Khosla recruited him and Scott McNealy to manufacture computers based on it, and Bill Joy to develop a UNIX-based OS for the computer.
Apache: It got its name because its founders got started by applying patches to code written for NCSA’s httpd daemon. The result was ’A PAtCHy’ server - thus, the name Apache.
Jakarta (project from Apache): A project constituted by SUN and Apache to create a web server handling servlets and JSPs. Jakarta was the name of the conference room at SUN where most of the meetings between SUN and Apache took place.
Tomcat: The servlet part of the Jakarta project. Tomcat was the code name for the JSDK 2.1 project inside SUN.
C: Dennis Ritchie improved on the B programming language and called it ’New B’. He later called it C. Earlier B was created by Ken Thompson as a revision of the Bon programming language (named after his wife Bonnie).
C++: Bjarne Stroustrup called his new language ’C with Classes’ and then ’newC’. Because of which the original C began to be called ’old C’ which was considered insulting to the C community. At this time Rick Mascitti suggested the name C++ as a successor to C.
GNU: A species of African antelope. Founder of the GNU project Richard Stallman liked the name because of the humour associated with its pronunciation and was also influenced by the children’s song ’The Gnu Song’ which is a song sung by a gnu. Also it fitted into the recursive acronym culture with ’GNU’s Not Unix’.
Java: Originally called Oak by creator James Gosling, from the tree that stood outside his window, the programming team had to look for a substitute as there was no other language with the same name. Java was selected from a list of suggestions. It came from the name of the coffee that the programmers drank.
LG: Combination of two popular Korean brands Lucky and Goldstar.
Linux: Linus Torvalds originally used the Minix OS on his system, which he later replaced with his own OS. Hence the working name was Linux (Linus’ Minix). He thought the name was too egotistical and planned to name it Freax (free+freak+x). His friend Ari Lemmke encouraged Linus to upload it to a network so it could be easily downloaded. Ari gave Linus a directory called ’Linux’ on his FTP server, as he did not like the name Freax. (Linus’ parents named him after two-time Nobel Prize winner Linus Pauling.)
Mozilla: When Marc Andreessen, founder of Netscape, created a browser to replace Mosaic (also developed by him), it was named Mozilla (Mosaic-Killer, Godzilla). The marketing guys didn’t like the name however and it was re-christened Netscape Navigator.
Red Hat: Company founder Marc Ewing was given the Cornell lacrosse team cap (with red and white stripes) while at college by his grandfather. He lost it and had to search for it desperately. The manual of the beta version of Red Hat Linux had an appeal to readers to return his Red Hat if found by anyone!
SAP: "Systems, Applications, Products in Data Processing", formed by 4 ex-IBM employees who used to work in the ’Systems/Applications/Projects’ group of IBM.
UNIX: When Bell Labs pulled out of MULTICS (MULTiplexed Information and Computing System), which was originally a joint Bell/GE/MIT project, Ken Thompson and Dennis Ritchie of Bell Labs wrote a simpler version of the OS. They needed the OS to run the game ’Space War’ which was compiled under MULTICS. It was called UNICS - UNIplexed operating and Computing System by Brian Kernighan. It was later shortened to UNIX.
SCO (UNIX): From Santa Cruz Operation. The company’s office was in Santa Cruz.
Xerox: The inventor, Chester Carlson, named his product to suggest ’dry’ (as it was dry copying, markedly different from the then-prevailing wet copying). The Greek root ’xer’ means dry.
Yahoo: The word was invented by Jonathan Swift and used in his book ’Gulliver’s Travels’. It represents a person who is repulsive in appearance and action and is barely human. Yahoo! founders Jerry Yang and David Filo selected the name because they considered themselves yahoos.
3M: Minnesota Mining and Manufacturing Company started off by mining the material corundum used to make sandpaper. It was changed to 3M when the company changed its focus to Innovative Products. | <urn:uuid:3f70798c-f651-4d9f-b695-a18991a3afeb> | CC-MAIN-2022-40 | https://www.knowledgepublisher.com/article/1114/how-famous-companies-got-their-names.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00304.warc.gz | en | 0.96423 | 1,885 | 2.515625 | 3 |
Who could have predicted that the simple act of getting from one place to another would one day become a dilemma for humankind? While there is a need to make transport safe, accessible, and affordable for every stratum of society, there is also an urgent need to reduce emissions from the transport sector. It is the second biggest contributor to global emissions, accounting for 40% of total emissions, and 75% of this comes from road transport alone. Amidst all the gloom, 'sustainable transport' has emerged as a possible solution to man's mobility problems.
The United Nations describes sustainable transport as "the provision of services and infrastructure for the mobility of goods and people— advancing economic and social development to benefit today's and future generations—in a manner that is safe, affordable, accessible, efficient, and resilient while minimizing carbon and other emissions and environmental impact." Sustainable transport includes walking, cycling, green transport, and shared mobility. The latter consists of the integration of technology and transport to move more people with fewer vehicles. Shared Mobility is convenient, affordable, and aligns with sustainable development goals three and eleven. Research done by UC Berkely's transportation sustainability research center in 2016 states that a car-pooling service cuts down the vehicles on the road, the number of vehicle miles traveled, and the greenhouse emissions.
Though not a new concept, shared mobility has gained prevalence over the past decade because of the faster adoption of technology and decreasing popularity of car ownership among the millennials and GenZ. A recent study estimates that while 70% of urban households owned a car in 2019, only 35% are likely to own a car in 2040. According to McKinsey, US and China are currently the biggest markets for shared mobility, followed by European countries. And the sector is expected to grow at around 30% till 2030. What does this mean for the vehicle manufacturers? Original Equipment Manufacturers are witnessing increasing consumer interest and growth in the shared mobility space. While many have developed their shared mobility models to retain younger customers, others are currently in the discovery stage.
One of Asia's leading car manufacturers developed an omnichannel digital car rental platform with help from Nagarro. The car rental platform offers long-term and short-term rental options. The client witnessed a shift in customer interests towards a shared economy, commoditization of cars as a product, and increased car ownership costs in one of its major markets. They wanted to create an empathetic ownership platform that utilizes the automaker's vast dealer network & inventory in a better way. Our team created a platform that provides a host of features like digital customer verification, model selection over an extensive network of dealers, seamless payment, usage and settlement services, vehicle geo-fencing and immobilization, insurance, and service center integration. This made car renting convenient, economical, and digital.
Making a shared mobility platform comes with its own set of problems, especially in a post-pandemic world. The fear of getting the Covid 19 virus has made people apprehensive about sharing a vehicle with strangers. There are concerns about the subscription models and the tedious documentation process. On the supply side, the service providers are still figuring out cost-efficient models that maximize customer comfort, safety, and convenience. From engaging with the customers to understanding their needs and developing a product that meets those needs, the OEMs have the task cut out for them. To that end, Nagarro has developed a Shared Mobility Toolkit that serves as a framework for original equipment manufacturers (OEMs) interested in the shared mobility space.
[Toolkit overview table: self-driven vs. driver-driven mobility models]
Five key points to consider while developing a shared mobility model:
• Learn - Understanding commuters' preferences to ideate a vision for the product
Understanding the customer's needs and demands is the first step when making a product or service. The OEMs need to identify and analyze customer goals, challenges, expectations, behaviors, and parameters relevant to vehicle subscription. To achieve that, they can ask questions such as: Are customers subscribing to the vehicle for daily commutes or for occasional travel? What are their criteria for a subscription? The answers feed a customer research process that captures a customer's thought process, actions, and emotional state.
• Co-Create - Leverage customer insights to create an intuitive mobility solution
Once the customer's needs and preferences are established, organizations can compare them with existing services, if applicable. If developing a service from scratch, they must imagine how the customer interacts with the product from exploration to purchase to subscription management. All stakeholders must come together to innovate, ideate, and assess car-subscription use cases by pooling knowledge from multiple functions and building rapid prototypes for quick value verification. In the case of an existing service, organizations must uncover opportunities to improve the stickiness of the current subscription app. They then create a subscription roadmap and define the Minimum Viable Product (MVP).
• Validate - Evaluate design decisions with actual customers through usability testing
Once the product/service concept is in place, the OEMs must test the prototypes with customers to recognize any usability issues and assess the customer satisfaction levels. It will help identify a range of topics such as the time spent by a customer finding a particular vehicle type, intuitiveness of the UI design, the average time customers spend in uploading the documents, and customer experience while exploring the subscription homepage. The inputs on these points will reveal unforeseen insights about the target consumer's behaviour. By identifying other possible loopholes, this step reduces the cost of correction and failure.
• Measure – Analyze KPIs and ROI to measure outcomes and adjust accordingly
The service provider's job is not complete with the delivery of the product or service to the customer. After-sales service is equally essential to the success of a product or service. Organizations must assess whether their product or service meets the business objectives using leading frameworks. This can be measured by determining the conversion rate of customers from exploring various subscription products to considering a specific vehicle, the customer churn rate, the drop-off rate at the payment step, and the number of support calls received for subscription process clarification. Improvements and tweaks to the product will follow the assessments.
• Scale - Engineer a robust platform architecture and design system
With the entire cycle from conceptualization to assessment now complete, the OEMs must ensure a consistent user experience while updating the app regularly. A few points to consider while scaling are the speed of rollouts in subsequent releases, the consistency of design elements across devices and channels, governance and transparency measures, and the technology choices themselves. Are the technology choices sustainable enough to continuously enhance and enrich the subscription experience? Is the platform architecture future-proof? Understanding these functionalities will help provide a single source for building UIs, save time and money by decreasing maintenance costs, and ensure consistency and faster time-to-market while scaling up.
Shared mobility for sustainable cities
Shared mobility models are a subset of integrated mobility plans. They promote the adoption of other sustainable transport solutions. For instance, electric vehicles are better for shared mobility due to their higher mileage and efficiency. Similarly, autonomous vehicles will make shared mobility models more competitive and more likely to replace private vehicles. These mobility models bundled together make the transition towards sustainable cities much smoother. The most popular ride-sharing apps available right now are struggling to reach cost efficiency and make profits. The start-ups behind these apps are currently surviving on investor money and waiting for their models to turn cash positive.
A McKinsey analysis from 2017 indicates that "in 50 metropolitan areas around the world, home to 500 million people, integrated mobility systems could produce benefits, such as improved safety and reduced pollution, worth up to $600 billion." The same study further states that dense developing cities are positioned well to make an early transition to integrated mobility solutions. This shift could realize $600 million in annual societal benefits by 2030. High-income and low-density cities will take to private autonomy models where personal vehicles will continue to be in higher numbers. Still, technology will play an essential role in making sustainable use of these private vehicles through models such as car-pooling. The densely developed cities will likely advance toward a Seamless Mobility system before other cities due to the availability of high-quality public-transit systems, infrastructure-investment capacity, and expertise with public projects.
Challenges to the shared mobility approach
Beyond the economic and environmental benefits, these mobility models will decongest the cities and improve the quality of life. However, the shift towards urban mobility requires multiple stakeholders, including the OEMs, policymakers, administrators, and commuters, to work together. And the complexity of the mobility equations would vary from one city to another depending upon the demographics, infrastructure, topography, and cultural factors. The development and deployment of these mobility models require the involvement of stakeholders at all levels, from policymakers to manufacturers. In the case of e-hailing services, utilities will also have to step in to help set up the charging infrastructure of electric vehicles. With the involvement of multiple stakeholders and rapidly evolving customer needs, the road to sustainable transport will be gradual and challenging. Unexpected events such as the pandemic will slow down the adoption of the models only to be revived again. And technological advances such as autonomous vehicles will further complicate the equation.
Technology as a saviour
The only way to beat the odds and future-proof against technology is to keep up with it. Investments in the right technologies are unavoidable for organizations interested in the future of sustainable transport, be it electric vehicles or shared mobility. When developing shared mobility models, organizations will need to deploy predictive maintenance, predictive and prescriptive analysis, backend cloud systems, big data solutions, enterprise integrations, and dispatch algorithms. For Mobility as a Service (MaaS), OEMs can use cloud platforms and open APIs for planning and booking, and the Internet of Things for unified ticketing and payments. Blockchain-based transactions will enable real-time traffic management as we move towards a future with autonomous vehicles.
Having worked with leading automobile players worldwide, our team is experienced and skilled to help your organization take that leap towards the future of mobility.
Keen to talk, we are too! Reach us @ email@example.com
With inputs from Debapriyo Samanta, Automotive team. | <urn:uuid:70b46f86-28ff-4d60-ac81-3b4e68f19dc5> | CC-MAIN-2022-40 | https://www.nagarro.com/en/blog/shared-mobility-building-sustainable-cities | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00304.warc.gz | en | 0.935558 | 2,152 | 2.875 | 3 |
SEATTLE – Global warming, income inequality and access to opportunity are three of the great challenges of our time, and they are three challenges where open source peer platforms could play a role.
Speaking at the Linuxcon conference here, Robin Chase, founder of Zipcar, discussed her views on how an idea she refers to as ‘Peers Inc’ could quite literally save all human life on planet Earth.
With Zipcar, Chase said that she is trying to solve the challenge of resource utilization. With cars, there were historically only two ways to get one: buy or rent. With a purchase, the asset is typically used only five percent of the time, while with rentals people could only rent in 24-hour bundles.
“Either way you always had to buy more than you really needed,” Chase said. “I figured if you could just pay for what you use, the economics would transform the world.”
Chase explained that the key is making it easier to share a car in less time than it takes to rent one. She added that many consumers distrust car rental companies and have an antagonistic relationship with them.
“With Zipcar, we make it more collaborative and we think of consumers as co-creators,” Chase said.
The Peers Inc model leverages excess capacity, people and platform in order to scale. The Internet has lowered transaction costs and now local customization can more easily be engineered.
The Peers Inc. model brings in the diversity of a massive number of users and then leverages the power of platform to enable participation.
“The power of peers comes together with platform in a yin/yang relationship and swims in the sea of excess capacity,” Chase said.
The peers model of scaling delivers diversity, innovation, resilience and redundancy, while the inc platform side provides the economies of scale.
Beyond the economic benefits for creators, Chase said that the Peers Inc model can also be a tool to save the planet. Chase noted that global warming is a growing problem and that, to date, solutions to it have been mostly linear.
“With Peers Inc. we can defy the laws of physics,” Chase said. “We can build the largest hotel chain in the world with Airbnb in just four years.”
Additionally with Peers Inc. creators can tap the power of exponential learning. For example, Chase said that DuoLingo now has 90 million users, with 45 million of those users learning languages that DuoLingo users themselves contributed to the platform.
“Industrial capitalism is dead because the Internet exists and sharing is a better way of extracting more value,” Chase said.
Chase emphasized that shared network assets always deliver more value than closed assets, and that networked minds always outnumber proprietary minds. She added that the benefits of shared open assets are always larger than any of the problems of open assets.
“Whenever I participate I always get more than I give,” Chase said.
When it comes to figuring out how to address climate change, how to deal with the issue of income distribution and how to increase access to opportunity to build a new sustainable equitable economy, Chase is adamant that an open model of peer-based platform is the way forward.
“Peers Inc with open source can help to build the world we want to live in,” Chase said.
Zipcar founder Robin Chase
Sean Michael Kerner is a senior editor at Datamation and InternetNews.com. Follow him on Twitter @TechJournalist
The Internet of Things (IoT) is an integrated networking system of physical things, represented digitally through devices that can be controlled from anywhere. This connectivity gives the IoT numerous possibilities.
According to some experts, the IoT will encompass almost 50 billion objects by 2020.
“From gathering new data to automation of infrastructure, companies are finding many benefits from adding connectivity and intelligence to physical infrastructure,” reported CompTIA study.
The ‘things’ get connected to your devices through sensors, actuators, and software to enable services based on information and data. This technology has led to a large number of business and innovation opportunities that are affecting our day to day life, on a positive note. Mentioned below are the top 10 trending areas for the Internet of Things.
The more digital technology is used, the greater the need for security. With embedded processors running full-fledged, network-connected operating systems, the security threat rises for devices like home thermostats, smart TVs and cars.
Android security vendors are implementing increased security measures, but the rate of adoption is hampered by the swelling number of new appliances hitting the market with unknown security chops.
Finding the right way to cope up with this is going to be the primary target of IoT companies this year. With increasing concerns and solutions for security, it is gradually getting easier to spot scams, or worse, on the platform.
As the Internet of Things grows, so do the data and information flowing through it. That pushes companies toward artificial intelligence and machine learning.
Indeed, as we discussed in an earlier post, AI is becoming a necessity, as the load of data and complexity will only rise in the future.
These advanced technologies are assisting in the growth of IoT space. We see an intense wave of AI, machine learning and deep learning in IoT companies like Tachyus, MoBagel, Sentenai, Moov, Building Robotics, Glassbeam, Teradata and Sentrian.
These all are just a few of the many players that are making noise in this arena. Others are rapidly catching up to the trend.
Internet of Things is helping people in monitoring their medical conditions and assisting them with the daily treatment according to the data provided.
Where necessary, physicians can also monitor a patient's progress remotely.
Devices like heart-rate and blood-pressure monitors can track patients remotely, and implants and wearables such as pacemakers, hearing aids, and Fitbit electronic wristbands are available as IoT devices.
A few hospitals implemented “smart beds” that can detect if they are occupied or if a patient is attempting to leave the bed.
They can also adjust pressure and support for the patient appropriately, without any physical interaction with nurses.
As we now know, the IoT involves internet-connected gadgets programmed in such a way that they can be manoeuvred remotely, even without anyone being present. To work correctly, a device must have the right software and complete data in its system.
Another technology set to help IoT thrive is cloud computing, which acts as its backbone. The cloud has grown to be an integral part of the internet world.
Hence, one could say that the cloud works as a catalyst for IoT. The underlying idea behind integrating IoT and the cloud is to increase efficiency in day-to-day tasks without affecting the quality of the data being stored or transferred, making the technology stream better and smoother.
The relationship between the Internet of Things and cloud computing is mutual: one is the source of data, and the other is its ultimate storage destination.
Companies like Google, Microsoft, and Amazon AWS are set to become the undisputed leaders of IoT CloudTech Services.
Although it may sound lazy, for working people or an extended family, smart home tech may turn out to be a boon to a busy household.
Smart home tech allows the owner to manage and monitor a wide range of verticals like security, home monitoring, room controls, and energy efficiency. It primarily works as a remote control for your house.
Companies like Samsung with its Samsung smart fridge & media controllers, Amazon Echo, Eskesso, Genican and Google home are already adding a great deal of ease into everyone’s lives.
For example, the Nest thermostat helps in controlling energy consumption. It in turn also helps in lowering our bills while providing a comfortable environment, at home or office.
With the Rachio Smart Sprinkler Controller, you can stop worrying about adjusting your sprinkler according to the weather, as the device does it for you. Amazon Echo is cloud-connected and voice-activated, which makes things more accessible.
This year, IoT is going to confront many challenges, including around the analytics and prediction models that are becoming an integral part of Internet of Things applications.
The information and data collected by these objects are exploited, most often to understand users' behavior: to improve services and products, and to identify and predict market and business moments. This is especially difficult without AI and deep learning, and it may raise security concerns too.
Increasing security requirements and the diversity of the Internet of Things put pressure on various aspects of traditional analytics, data management infrastructure, and business intelligence techniques.
This forces IoT leaders to be proactive in detecting gaps and frailties.
Again, operating systems need upgrading to support IoT applications. Traditional operating systems (OSs) such as Windows and iOS consume too much power and lack features like real-time response.
The IoT devices are growing in complexity too. In turn, it increases the application of sensors, data processing, and connectivity to send that data out.
The use of real-time operating systems (RTOSs) in IoT devices is taking a huge leap. Capabilities gained from an RTOS include multitasking, task scheduling, and monitoring of resource and data sharing among tasks.
Until now, the Internet of Things has suffered from market fragmentation, which makes the task of developing applications much harder.
This applies to both hardware variations and differences in the software running on them.
With the surge of venture capital dollars running to IoT, there has been an inflow of organizations and solutions on the market, creating a very divided panorama within the IoT ecosystem.
As a result, this fragmentation has become one of the resistance in adopting IoT.
The IoT can support the integration of communications, control, and information processing across various transportation systems.
IoT applies to all aspects of transportation systems, from the vehicle to the user. Because of this dynamic interconnection, it enables inter- and intra-vehicle communication, smart parking and traffic control, e-toll collection, vehicle control, logistics, and fleet management.
It further enables safety features and road assistance. An Internet of Things platform can continuously locate and monitor cargo and asset status via sensors.
It can also send alerts about delays, specific problems, or threats.
According to Gartner, “The processors and architectures used by IoT devices determine many factors, such as whether they have competent security and encryption, their power consumption, and whether they can smoothly support an operating system, updatable firmware, and embedded device management.”
We don't need a crystal ball to anticipate IoT development in both the consumer and industrial sectors in the near future.
Several things will emerge. Commercial and technical battles between these ecosystems will dominate areas such as the smart city and the entertainment sector.
IoT is establishing itself as a dynamo that will create innovative new opportunities and address global challenges from the micro to the macro scale.
Benefits vs risks of facial recognition technology
Facial recognition technology is being integrated into society at an increasing rate, with many people viewing it as a normal part of daily life.
Ping Identity says once a distant, futuristic concept, facial recognition technology is now found in many different technological applications with a variety of different functions.
With facial recognition, people can unlock a smartphone with a glance, tag their friends in Facebook posts, or superimpose one face onto another in photos. This kind of biometric technology has revolutionised authentication, making it quick, simple, and highly accurate.
According to Ping Identity, the facial recognition industry generated $3.8 billion in revenue in 2020 alone, and with facial recognition systems able to achieve up to 99.97% accuracy, the potential for this technology is huge. However, despite the many benefits of facial recognition technology, it is important to understand its risks to keep your personal data secure.
What Is Facial Recognition Technology?
Each day, users of smartphones, electronic devices, and social media generate millions of images and videos. This plethora of data, combined with data from CCTV cameras, has allowed the capabilities of facial recognition technology (FRT) to improve. In FRT, machine learning and AI come together to look for patterns in the facial features of images and videos and ultimately determine identity.
Facial recognition technology has multiple benefits for society. Facebook's DeepFace software, which can recognise human faces with an accuracy rate of 97.25%, is implemented for use in crime prevention and security. FRT can also decrease the need for human interaction and thus increase efficiency during, for example, border checks in airports. FRT can even be employed in medicine, such as in identifying subtle facial traits to determine genetic disorders.
A large number of tech companies are developing facial biometric systems, including:
- Facebook's DeepFace technology has a true positive rate of 97.25%.
- Google's FaceNet was 99.63% accurate when matching 13,000 pictures of faces from across the web.
- Amazon's Rekognition is a cloud-based facial recognition service.
- Microsoft's Face API is another cloud-based facial recognition service.
- Gemalto's Cogent Live Face Identification System recognises faces in busy environments, allowing developers to create applications that match live faces with data from documents.
How Is Facial Recognition Technology Being Used?
Here are the top seven ways that FRT can improve our lives, according to Ping Identity:
- Enhancing cybersecurity
- Supporting police and public safety efforts
- Streamlining airport experiences
- Improving the accuracy of medical diagnoses and treatment
- Reducing human touchpoints
- Making banking easier
- Making shopping more efficient
Facial Recognition Technology: What Are the Risks?
As facial recognition evolves, it attracts cyber criminals who aim to compromise these developing systems, says Ping Identity.
In 2019, it was reported that hackers breached Apple's iPhone FaceID user authentication in just 120 seconds. Thus, despite the many benefits of facial recognition technology, it is important to assess the risks and downsides to keep your personal data secure.
1. It can violate individual and societal privacy
The threat to individual privacy is a significant downside of facial recognition technology. People don't like having their faces recorded and stored in a database for unknown future use.
2. It creates data vulnerabilities
Databases storing facial recognition data have the potential to be breached. Hackers have broken into databases containing facial scans collected and used by banks, police departments, and defense firms.
3. It provides opportunities for fraud and other crimes
Lawbreakers can use facial recognition technology to perpetrate crimes against innocent victims. They can collect individuals' personal information, including imagery and video collected from facial scans stored in databases, to commit identity fraud. With this information, a thief could open credit cards or bank accounts in the victim's name or even build a criminal record using the victim's identity.
4. Technology is imperfect
Facial recognition isn't perfect. For example, it's less effective at identifying women and people of colour than white males. The technology depends upon algorithms to make facial matches. Those algorithms are more robust for white men than for women and people of colour because the databases contain more data on white men. This creates unintentional biases in the algorithms.
5. Technology can be fooled
Other factors can affect the technology's ability to recognise peoples faces, including camera angles, lighting levels, and image or video quality. People wearing disguises or slightly changing their appearance can throw off facial recognition technology, too.
"Although there are many positive outcomes arising from the use of FRT, there are a variety of security concerns linked to its use," says Ping Identity.
"From tracking individuals who have not given consent and the invasion of privacy to hackers finding ways to infiltrate the newly arising software, users and developers alike must take caution when considering the technology."
According to Ping Identity, hackers can easily gain access to systems and networks protected by weak passwords, such as those built from your date of birth, full name, or easily guessable phrases.
"This issue is mirrored when it comes to biometrics, resulting in personal genetic data being retrieved. Businesses should consider biometric technology such as FRT to be used in unison with other security elements like passwords and MFA," it says.
"This will allow for enhanced security, with systems that must verify multiple components rather than an isolated method, minimising the risk of attacks.
"Fingerprint and voice recognition security has become easier to hack by cybercriminals, and although FRT has proven to be more difficult to infiltrate, hackers are already looking for ways to replicate people's faces." | <urn:uuid:2e30c451-ecd6-4bab-b277-cabad3b04f81> | CC-MAIN-2022-40 | https://securitybrief.asia/story/benefits-vs-risks-of-facial-recognition-technology | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00304.warc.gz | en | 0.930437 | 1,165 | 2.828125 | 3 |
For the very first time, scientists working on the Large Hadron Collider have discovered something they call the ‘X’ particle. This was discovered and observed while smashing billions of lead ions together at incredible speed. So what does this actually mean?
What exactly is the General Data Protection Regulation (GDPR)?
GDPR is a data privacy law created to protect the data of European Union (EU) citizens residing within the member states. These citizens, also called data subjects, are at the core of GDPR. First and foremost, this data privacy law was created to protect the rights of individuals whose data is collected. This includes the rights listed in GDPR Chapter 3, such as the rights to erasure and data portability.
Enforced by national data protection authorities (DPAs) across the member states, the GDPR's primary goal is to protect the privacy of EU residents.
It's also one of the strictest data protection laws that companies need to comply with. Since taking effect in 2018, the GDPR has sparked several regional data protection laws worldwide, including the California Consumer Privacy Act (CCPA) and the UK's own GDPR.
Customer data is used in every corner of a customer's experience with your company, from marketing to product experience to support tickets. Changes in GDPR are worth paying attention to, as they could mean big changes across various functions within your company.
A Brief History of the GDPR
GDPR was created to replace the 1995 Data Protection Directive used across various European countries.
After the internet becomes commonplace, the EU parliament decided they need a new guideline that adapts to a more connected world where data is the common currency. The GDPR is designed to better fit modern technologies and practices.
The 1995 data protection law allows each country to control and customize its own privacy laws. This makes it harder for businesses to introduce their service between countries since they’d have to refer to multiple privacy requirements and keep up with all of them.
The GDPR eliminates all this since now businesses only need to refer to one guideline and requirement to do business across all EU member states.
The GDPR officially started taking effect on May 25, 2018 and ever since has been the north star for privacy laws.
It has also undergone significant developments in the past few years. Notably, the EU-US Privacy Shield, which had been put in place to make it easier for US companies to handle EU residents' data, was invalidated by the Court of Justice of the EU in 2020. Regulators have also tightened the rules around cookie consent, making clear that companies may not block access to content for users who refuse cookies.
The UK GDPR, not to be confused with the EU General Data Protection Regulation, is a standard based on the EU version; it sits alongside the UK's Data Protection Act 2018 and is enforced by the Information Commissioner's Office (ICO). It serves as a substitute for the EU version after Brexit. If you regularly process data of Europe-based customers, you'll have to adhere to both European data protection laws.
Penalties for Non-compliance
Non-compliance with the GDPR can lead to major issues.
Even if you’re using your customer’s data ethically, there’s still the threat of outside forces. GDPR requirements ensure that you’re putting in the effort to protect your customer’s data to prevent losses caused by a data breach.
According to the GDPR, data protection entities in the EU can give fines of up to €20 million (around $20.3 million) or 4% of the worldwide turnover of the preceding financial year—whichever is bigger.
The fine may differ depending on the company’s offenses. If this is a repeat offense, you might get an even larger amount. Data controllers also have more responsibility to safeguard your customer’s personal data compared to processors. The personal data, in this case, includes any data linked to a living person in the EU, such as identifiers, IP addresses, and biometric data.
Ever since its enforcement, GDPR has caused several major companies to spend millions of dollars in fines.
France’s data authorities fined Amazon €35 million in 2020 because it wasn’t asking users for their consent when adding cookies to visitors’ browsers.
Tips for GDPR Compliance
GDPR fines are scarily large, but they’re necessary to ensure that companies are protecting user privacy according to the strict guidelines. Here are a few tips to make sure that you’re not missing anything to comply with GDPR.
Read through and understand the GDPR.
GDPR’s official documents are massive. There are 11 chapters and 99 articles in total, but this is necessary to make sure that the guidelines are comprehensive.
The articles pertain to different scenarios and topics.
While there are checklists and article round-ups to help you figure out what the GDPR means, it’s still best that you read through the original document at least once.
Additionally, since the GDPR is still constantly evolving, following news regarding the GDPR would be ideal. Major updates, such as the ones introduced in 2021, might bring big changes to how your companies work.
Look to other organizations.
GDPR is a big undertaking, but there are others who have done it before you. Find out how others, preferably companies with the same scale and industry, managed to reach compliance and see if it’s a viable option for your company.
You can also treat news reports on companies who have been fined as a cautionary tale. Make sure that you’re not making the same mistakes that they did and recheck your compliance requirements.
Pay close attention to your website and data.
Your website is the most probable entry point for any data from new customers. Start an audit to make sure that your website fulfills the GDPR guidelines. Here are a few things to check for:
- Is your cookie consent forced on visitors? Under the GDPR, customers need to consent to cookies explicitly, of their own free will. For example, restricting access to your website until a visitor accepts cookies is a violation of the GDPR, and scrolling through your website doesn’t mean that they consented either.
- If you have an opt-in form for your newsletter, make sure that customers have to go out of their way to agree to receive emails from you. Pre-ticked boxes violate the GDPR and might be grounds for fines.
Additionally, keep track of the data processing activities. Take responsibility and make sure that the tools you used to collect, process, and store data are GDPR compliant.
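To make these audit points concrete, here is a minimal sketch (my illustration, not part of any official GDPR guidance or Inspired eLearning material) of how a backend might refuse to record consent unless the user took an explicit, affirmative action. All field and function names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One auditable record of a data subject's consent (illustrative fields)."""
    subject_id: str
    purpose: str              # e.g. "newsletter" or "analytics_cookies"
    timestamp: str
    policy_version: str

def record_consent(subject_id: str, purpose: str,
                   box_ticked_by_user: bool, policy_version: str) -> ConsentRecord:
    # Consent must come from an affirmative act: a pre-ticked box, silence,
    # or mere scrolling does not count, so refuse to record it as consent.
    if not box_ticked_by_user:
        raise ValueError("No explicit opt-in action; consent must not be assumed")
    return ConsentRecord(subject_id, purpose,
                         datetime.now(timezone.utc).isoformat(), policy_version)
```

Keeping a timestamped, versioned record like this is one way to demonstrate accountability if a supervisory authority ever asks how consent was obtained.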
Appoint a data protection officer.
According to common guidance, a DPO is advisable if you have over 250 employees. Having one on your side can help you navigate the GDPR with more clarity.
You’d also need a DPO if your collected data is:
- processed at a large scale
- processed by a public authority
- undergoes systematic monitoring
Ideally, the DPO should be familiar with how the supervisory authorities enforce the GDPR in the area where the processing of personal data happens. This would help them understand the nuances of the GDPR as directed by the local DPA.
Even if you don’t have a DPO, consider asking a consultant who’s familiar with the GDPR to help you reach and maintain compliance. Here are a few tasks you can assign to a DPO.
- Monitor data transfer protocols and find gaps in the process
- Analyze risks with the Data Protection Impact Assessment (DPIA)
- Send breach notifications in the event of a security breach
Train your team to be GDPR compliant with Inspired eLearning
If you’re serving customers in the EU, it’s highly likely that you’ll need to comply with the GDPR.
Since its enforcement began in 2018, the GDPR has remained one of the strictest data privacy standards; complying with it will help you ensure both your data security and your customers' privacy rights.
That said, reaching GDPR compliance isn’t easy.
You need to collaborate with all members of your organization to reach and maintain compliance.
Inspired eLearning offers comprehensive GDPR training on how members of your team can help ensure compliance, from basic knowledge, such as data protection principles, to techniques for identifying and processing sensitive data.
Improve data security even further by adding security awareness training to your program and educating employees on how to keep customer data safe from both external and internal forces. See our security awareness training program today. | <urn:uuid:8c5f546c-3e97-4a45-8a21-f885c2338dc4> | CC-MAIN-2022-40 | https://inspiredelearning.com/blog/a-brief-history-of-the-gdpr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00304.warc.gz | en | 0.948456 | 1,831 | 3.34375 | 3 |
IP Addresses in Detail
Each computer or device that is part of a computer network is assigned a unique numeric string that enables it to communicate securely: the IP address, or Internet Protocol address. IP addresses help locate the geographical area of a device and the host network to which it is connected; this is their chief function in networking protocols. In an IPv4 TCP/IP network, the IP address is a 32-bit number that exactly identifies the device, whether a personal laptop, desktop computer, mobile phone, printer, tablet, or router. With cloud environments, data moves across multiple IP addresses; it thus becomes necessary for enterprises to achieve cloud security with IP restriction, which can be accomplished with a CASB solution.
The flexibility of Cloud Work Environments and Need for Cloud Security with IP Restriction
Cloud computing has invaded enterprises, and why not? It offers a host of benefits: multiple cloud apps running simultaneously across networks of devices give the workforce mobility even when devices are miles apart. This increased reach has benefited enterprises and their employees considerably. Productivity has risen with flexibility and mobility, and it is now possible to work around the clock, 365 days a year. But keeping all the advantages to one side, the one aspect enterprises need to get their heads around is security. Organizations need to manage their multiple IP addresses efficiently and effectively, whether over a wide range of sub-networks or a single large network, to ensure that confidential data does not get into the wrong hands and severely hamper enterprise security.
Data Being Accessible from Multiple IP Addresses
Enterprises' confidential data sits miles apart across multiple IP addresses on cloud servers, and protecting it is everyone's prime concern. While using cloud servers across multiple IP addresses brings a host of benefits to employees and organizations, it cannot be denied that cybercriminals are constantly working to get hold of any vulnerable, sensitive enterprise data they can. It may be for this reason that some organizations fear embracing cloud technology. Some enterprises feel that data security is a burden when moving to cloud services and that safeguarding their data will prove a costly affair. In reality, most data thefts are committed internally by employees, in connivance with outsiders who lure them with financial gain to serve their own vested interests. In any event, enterprises must be ready with appropriate data security measures.
CASB for Cloud Security with IP Restriction
Cloud Access Security Brokers (CASBs) are the answer enterprises are seeking for their data security. CASB solutions intermediate between cloud service providers and enterprises, helping to secure and protect data at all times. They restrict unwanted activities, keeping the cloud environment clean and safe for client enterprises, and they allow the implementation of policies that precisely define and restrict the data flow across the channels of a corporate network. With their four pillars of compliance, threat protection, visibility, and cloud data security, CASB solutions have emerged as the forerunner in securing cloud-based apps; per Gartner predictions, 85% of firms will have deployed them by 2020. CASB solutions help enterprises define security policies with IP restrictions when granting employees access to data from various IP addresses.
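To illustrate the underlying idea of an IP-restriction policy (a simplified sketch only; the user names and networks are hypothetical, and this is not CloudCodes' actual implementation):

```python
import ipaddress

# Hypothetical per-user allowlists; a real CASB manages these as admin policies.
ALLOWED_NETWORKS = {
    "alice": ["203.0.113.0/24"],                      # office egress range
    "bob":   ["198.51.100.10/32", "2001:db8::/48"],   # home IP plus a v6 block
}

def is_access_allowed(user: str, client_ip: str) -> bool:
    """Default-deny check: allow only if client_ip is inside an allowed network."""
    ip = ipaddress.ip_address(client_ip)
    for cidr in ALLOWED_NETWORKS.get(user, []):
        network = ipaddress.ip_network(cidr)
        if ip.version == network.version and ip in network:
            return True
    return False   # addresses that are not defined are denied

print(is_access_allowed("alice", "203.0.113.25"))   # True
print(is_access_allowed("alice", "192.0.2.7"))      # False
```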
CloudCodes CASB Solution for Cloud Security with IP Restriction Feature
How does the CloudCodes CASB solution for cloud security with IP restriction work? It restricts specific users to one or more pre-defined IP addresses, so those users can only access data from the specified addresses; access from any address that is not defined is denied. The IP Restriction feature, part of the CloudCodes for Business Access Control module, thus helps prevent unwanted data breaches by keeping a check on access from different IP addresses. By imposing cloud security with IP restriction policies using the CloudCodes CASB solution, data security can be enhanced and visibility into the data improved for stronger security governance. | <urn:uuid:15a36b6a-b119-49af-a64c-451d0c67d1e4> | CC-MAIN-2022-40 | https://www.cloudcodes.com/blog/cloud-security-with-ip-restriction.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00504.warc.gz | en | 0.951753 | 904 | 2.875 | 3 |
Data matching is about linking entities in databases that don’t have a common unique key and are not spelled exactly the same, but are so similar that we may consider them to represent the same real-world object.
When matching we may:
- Compare the original data rows using fuzzy logic techniques
- Standardize the data rows and then compare using traditional exact logic
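A toy sketch of the two approaches (my illustration using Python's standard library; the threshold and the variant table are arbitrary, not from the post):

```python
from difflib import SequenceMatcher

def fuzzy_match(a: str, b: str, threshold: float = 0.85) -> bool:
    """Approach 1: compare the original values with a similarity ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def standardize(value: str) -> str:
    """Approach 2: normalize spelling variants first, then compare exactly.
    The variant map is illustrative; real rules are culture-dependent."""
    variants = {"standardise": "standardize", "organisation": "organization"}
    return variants.get(value.lower(), value.lower())

print(fuzzy_match("Standardise", "Standardize"))                  # True
print(standardize("Standardise") == standardize("standardize"))   # True
```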
As suggested by the title of this blog post, a common problem with standardization is that it may have two (or more) outcomes, just as this English word may be spelled in different ways depending on the culture.
You feel this pain not least when working with international data. In my recent social media engagement I have had the pleasure of touching on this subject (mostly in relation to party master data) on several occasions, including:
- In a comment to a recent post on this blog Graham Rhind says: Based just on the type of element and their positions in an address, there are at least 131 address formats covering the whole world, and around 40 personal name formats (I’m discovering more on an almost daily basis).
- Rich Murnane made a post with a fantastic video in which Derek Sivers explains that while many parts of the world have named streets with building numbers assigned by sequential position, in Japan you have named blocks between unnamed streets, with building numbers assigned by the order in which the buildings were established.
- In the Data Matching LinkedIn group Olga Maydanchik and I exchanged experiences on the problem that in American date format you write the month before the day in a date, while in European date format you write the day before the month.
In my work with international data I have often seen that determining which standard is used depends on both:
- The culture of the real world entity that the data represents
- The culture of the person (organisation) that provided the data | <urn:uuid:521b2dac-8a73-445a-bea9-f2d55bb8ec9e> | CC-MAIN-2022-40 | https://liliendahl.com/2010/03/07/standardise-this-standardize-that/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00504.warc.gz | en | 0.925704 | 380 | 3 | 3 |
Alphabet, the parent company of Google, is combining two of its most ambitious research initiatives, robotics and AI language understanding, to create an "assist robot" that can comprehend directions given in plain language.
The Verge reports that Alphabet has been creating robots that can perform basic jobs like fetching beverages and cleaning surfaces since 2019.
Most robots only respond to short and simple instructions, like “bring me a bottle of water”. But large language models (LLMs) like GPT-3 and Google’s MUM can better parse the intent behind more oblique commands.
In Google’s example, you might tell one of the Everyday Robots prototypes, “I spilled my drink, can you help?” The robot filters this instruction through an internal list of possible actions and interprets it as “fetch me the sponge from the kitchen”.
The resulting system has been given the moniker PaLM-SayCan by Google, a name that encapsulates how the model combines the “affordance grounding” of its robots with the language-understanding abilities of LLMs.
According to Google, its robots were able to plan the right responses to 101 user commands 84% of the time and carry them out 74% of the time after adding PaLM-SayCan. | <urn:uuid:57b29513-ba52-4edf-81db-d959d768670d> | CC-MAIN-2022-40 | https://enterpriseviewpoint.com/alphabets-assistant-robots-now-include-ai-language-understanding-due-to-google/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00504.warc.gz | en | 0.917175 | 264 | 3.09375 | 3 |
What is an analog telephone line?
A simple analog telephone line is a voice circuit historically made from copper wire that runs from your local phone company's Central Office (C.O.) building to your business location. The central office houses telephone switching equipment that connects you to the Public Switched Telephone Network (PSTN). This is sometimes referred to as the local loop. Once the analog telephone line is installed, someone can call the phone number that you have subscribed to, and the phone company can connect the call to you.
Good points to remember about analog telephone lines:
- One phone number is associated with one line
- One line can handle one conversation at a time; when the line is in use, a busy signal is heard
How do analog telephone lines work?
Analog telephone lines transmit voice as electrical signals. When you speak into the handset of your phone, the microphone converts the sound waves into analog electrical waves. These waves propagate over the telephone line to their destination. The receiving phone then converts the electrical signals back into sound waves through the speaker of the handset.
Other names for analog telephone lines
Analog telephone lines are referred to in a variety of ways. Here are some of the terms you may hear from an installation professional or a service provider:
- C.O. Line – Refers to the fact that the line connects you to the Central Office
- Copper Line – Refers to the historical medium that carries analog signals, namely copper
- POTS Line – Plain Old Telephone Service
- Analog Line – Refers to the analog electrical signal used to transmit voice
The Trunking Concept
Sometimes people incorrectly refer to a single analog telephone line as an analog trunk. Trunking refers to the concept that many users can access the telephone network through sharing a set of lines instead of each receiving one individually. Think of a tree trunk: all of the branches share one trunk and through this connection are all granted access to the nutrients in the soil. Similarly, every phone extension in your office has access to the public switched telephone network through a smaller set of analog telephone lines.
If you have a small office (1-3 phones), each telephone can be connected to the local loop and then receive its own phone line. However, if your office is growing and you need to connect many phone extensions to the PSTN, it just doesn't make financial sense to pay for separate lines to each phone. In most circumstances every employee does not need to be on the phone at the same time. Instead, by using the trunking concept you can reduce the number of telephone lines you pay for while servicing every phone in your business. In fact, typical business phone systems are configured in ratios of 3-4 telephone lines for every 8 phone extensions.
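Those ratios come from classic traffic engineering. As a hedged aside (not part of the original guide), the Erlang B formula estimates the chance that a call finds all trunks busy for a given traffic load and line count:

```python
def erlang_b(traffic_erlangs: float, lines: int) -> float:
    """Blocking probability for `lines` trunks offered `traffic_erlangs`
    of traffic, via the standard recursive form of the Erlang B formula."""
    b = 1.0
    for k in range(1, lines + 1):
        b = (traffic_erlangs * b) / (k + traffic_erlangs * b)
    return b

# Illustrative numbers: 8 extensions, each on the phone about 12% of the
# busy hour, offer roughly 1 Erlang of traffic.
for lines in (3, 4):
    print(f"{lines} lines -> {erlang_b(1.0, lines):.1%} of calls blocked")
```

With about 1 Erlang offered, 4 lines keep blocking near 1-2%, which lines up with the 3-4 lines per 8 extensions rule of thumb.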
When should I choose analog telephone lines for my business?
Analog telephone lines should be considered for small and medium sized businesses that require up to 15 incoming lines. When more than 15 are required, a digital line (e.g. T1/PRI) is usually a better choice in terms of both cost and features. | <urn:uuid:244a45b1-e0e5-4119-8d41-d452fa9c6130> | CC-MAIN-2022-40 | https://www.metrolinedirect.com/what-is-an-analog-telephone-line.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00504.warc.gz | en | 0.941997 | 632 | 3.359375 | 3 |
During a recent malware hunt, the Cato research team identified some unique attributes of DGA algorithms that can help security teams automatically spot malware on their network.
The “Shimmy” DGA
DGAs (Domain Generator Algorithms) are used by attackers to generate a large number of – you guessed it – domains often used for C&C servers. Spotting DGAs can be difficult without a clear, searchable pattern.
Cato researchers began by collecting traffic metadata from malicious Chrome extensions to their C&C services. Cato maintains a data warehouse built from the metadata of all traffic flows crossing its global private backbone. We analyze those flows for suspicious traffic to hunt threats on a daily basis.
The researchers were able to identify the same traffic patterns and network behavior in traffic originating from 80 different malicious Chrome extensions, which were identified as from the Bujo, Dealply and ManageX families of malicious extensions. By examining the C&C domains, researchers observed an algorithm used to create the malicious domains. In many cases, DGAs appear as random characters. In some cases, the domains contain numbers, and in other cases the domains are very long, making them look suspicious.
Here are a few examples of the C&C domains (the full domain list appears at the end of this post): bacugo[.]com, jurokotu[.]com, kogarowa[.]com, and qusonujo[.]com.
The most obvious trait the domains have in common is that they are all part of “.com” TLD (Top-Level Domain). Also, all the prefixes are five to eight letters long.
There are other factors shared by the domains. For one, they all start with a consonant and then follow a pattern built from alternating consonants and vowels, so that every domain is represented by consonant + vowel + consonant + vowel + consonant, and so on. As an example, removing the TLD from the jurokotu[.]com domain leaves “jurokotu”; splitting the word into consonants and vowels reveals the pattern: j-u-r-o-k-o-t-u (consonant, vowel, consonant, vowel, and so on).
From the domains we collected, we could see that the adversaries used the vowels: o, u and a, and consonants: q, m, s, p, r, j, k, l, w, b, c, n, d, f, t, h, and g. Clearly, an algorithm has been used to create these domains and the intention was to make them look as close to real words as possible.
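Based on those traits, a simple heuristic detector is easy to sketch. This is my illustration of the observed pattern, not Cato's production detection logic:

```python
import re

VOWELS = set("oua")                      # vowels observed in the campaign
CONSONANTS = set("qmsprjklwbcndfthg")    # consonants observed in the campaign

def looks_like_shimmy(domain: str) -> bool:
    """Heuristic match for the 'Shimmy' DGA: .com TLD, 5-8 letter prefix,
    strict consonant/vowel alternation starting with a consonant."""
    m = re.fullmatch(r"([a-z]{5,8})\.com", domain.lower())
    if not m:
        return False
    prefix = m.group(1)
    for i, ch in enumerate(prefix):
        expected = CONSONANTS if i % 2 == 0 else VOWELS
        if ch not in expected:
            return False
    return True

print(looks_like_shimmy("jurokotu.com"))   # True
print(looks_like_shimmy("example.com"))    # False
```

A heuristic like this will produce some false positives on legitimate pronounceable domains, so in practice it would be one signal among several, not a verdict on its own.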
“Shimmy” DGA infrastructure
A few additional notable findings are related to the same common infrastructure used by all the C&C domains.
All domains are registered using the same registrar – Gal Communication (CommuniGal) Ltd. (GalComm), which was previously associated with the registration of malicious domains.
The domains are also classified as ‘uncategorized’ by classification engines, another sign that these domains are being used by malware. Trying to access the domains via browser, will either get you a landing page or HTTP ERROR 403 (Forbidden). However, we believe that there are server controls that allow access to the malicious extensions based on specific http headers.
All domains are translated to IP addresses belonging to Amazon AWS, part of AS16509. The domains do not share the same IP, and from time to time the IP for a particular domain appears to change dynamically.
Given all this evidence, it’s clear to us that the infrastructure used in these campaigns leverages AWS and that this is a very large campaign. We found many connection points among the 80 C&C domains, identifying their DGA and infrastructure. This could be used to spot the C&C communication and infected machines by analyzing network traffic. Security teams can now use these insights to identify the traffic from malicious Chrome extensions.
bacugo[.]com bagoj[.]com baguhoh[.]com bosojojo[.]com bowocofa[.]com buduguh[.]com bujot[.]com bunafo[.]com bunupoj[.]com cagodobo[.]com cajato[.]com copamu[.]com cusupuh[.]com dafucah[.]com dagaju[.]com dapowar[.]com dubahu[.]com dubocoso[.]com dudujutu[.]com focuquc[.]com fogow[.]com fokosul[.]com fupoj[.]com fusog[.]com fuwof[.]com gapaqaw[.]com garuq[.]com gufado[.]com hamohuhu[.]com hodafoc[.]com hoqunuja[.]com huful[.]com jagufu[.]com jurokotu[.]com juwakaha[.]com kocunolu[.]com kogarowa[.]com kohaguk[.]com kuqotaj[.]com kuquc[.]com lohoqoco[.]com loruwo[.]com lufacam[.]com luhatufa[.]com mocujo[.]com moqolan[.]com muqudu[.]com naqodur[.]com nokutu[.]com nopobuq[.]com nopuwa[.]com norugu[.]com nosahof[.]com nuqudop[.]com nusojog[.]com pocakaqu[.]com ponojuju[.]com powuwuqa[.]com pudacasa[.]com pupahaqo[.]com qaloqum[.]com qotun[.]com qufobuh[.]com qunadap[.]com qurajoca[.]com qusonujo[.]com rokuq[.]com ruboja[.]com sanaju[.]com sarolosa[.]com supamajo[.]com tafasajo[.]com tawuhoju[.]com tocopada[.]com tudoq[.]com turasawa[.]com womohu[.]com wujop[.]com wunab[.]com wuqah[.]com
References: https://www.catonetworks.com/blog/threat-intelligence-feeds-and-endpoint-protection-systems-fail-to-detect-24-malicious-chrome-extensions/ https://awakesecurity.com/blog/the-internets-new-arms-dealers-malicious-domain-registrars/ | <urn:uuid:60a2fdd0-ef59-43a3-be4b-2795d02e3f47> | CC-MAIN-2022-40 | https://www.catonetworks.com/blog/the-dga-algorithm-used-by-dealply-and-bujo/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00704.warc.gz | en | 0.784735 | 1,672 | 3.15625 | 3 |
Robocalls are occurring at a higher rate than ever before. Between January 1, 2022 and June 30, 2022 U.S. citizens received over 24 billion robocalls. Robocalls are a known frustration for consumers – in fact, 79% of unknown calls are unanswered due to the onslaught of illegitimate calls. (State of the Call (hiya.com)) Consumers aren’t the only individuals that are negatively impacted. Contact center agents experience robocalls, too. And they can’t ignore them.
Robocalls to contact centers are more than just annoying. They also waste precious agent time, consume resources that could be spent on legitimate callers, and increase caller wait times.
The difference between spam calls, scam calls, and robocalls
Many people lump spam calls, scam calls, and robocalls into the same group. In reality, they’re different from each other, which changes how your contact center can handle each case.
Spam call: A type of unwanted call that occurs when a person/organization calls a very large group of people at once.
Scam or fraud call: A type of unwanted call that occurs when a person/organization calls with an illegal intent.
Robocall: A type of automated call that occurs when a person/organization delivers a recorded message. There’s more to robocalls than spam or scam calls:
- Not all robocalls are bad – they can also be sent by political parties or to provide updates to a group of people.
- Robocalls fall under spam calls because they are made to a large group of people.
- If the robocall is also created with illegal intent, like to steal money or data from consumers, it can also fall under the definition of a scam call.
- Unwanted robocalls often use phone spoofing, which occurs when scammers mask their real number with a fake, known, or local number. The intent behind spoofing is to trick call recipients into answering the call.
So, why does the definition matter? The answer is that some technologies exist to allow contact centers to screen out some robocalls before they reach agents, alleviating wasted time and resources, and decreasing caller wait times.
Mitigating robocalls with inbound spam screening
Contact centers can combat time-consuming robocalls with inbound spam screening, which operates through workflow automation.
IntelePeer’s templated Inbound Spam Protection workflow is part of IntelePeer’s Communication API platform, which helps contact centers modernize without replacing existing infrastructure. In addition to Inbound Spam Protection, IntelePeer’s Communication APIs allow contact centers to integrate customer service automation features like real-time voice, messaging, chat, and more. The Communication API layers over the top (OTT) of the current contact center, turbocharging contact center capabilities without the cost and disruption of moving to a new service provider.
How Inbound Spam Protection works
Inbound Spam Protection assesses every call that comes into the contact center. Because IntelePeer’s Communication API platform layers on top of the contact center, the Communication API interacts with the call first. Calls can be from a human caller or a robocaller. When a call comes in, the workflow will make an API query to an anti-spam engine, which has already conducted an analysis of the calling number based on the nationwide calling pattern from that number. The anti-spam engine will determine if the call has no risk of being spam/scam, is moderate risk, or is high risk. The API query is automatic, adding no perceptible delay to the inbound call.
If the anti-spam engine indicates to the Communication API platform that the calling number is rated as “no risk”, the call moves through to the contact center. If the call is rated as moderate or high risk, a request is sent to the caller to confirm that they are human, such as dialing a certain number to complete the call. For example, the confirmation may be as simple as an instruction to press 1 to talk with an agent. The calls that do not interact and confirm will be directed away from the contact center. Callers that confirm they are human will move through to the contact center without knowing that they were initially marked as a risk.
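In pseudo-Python, the decision flow looks roughly like this (a sketch only; the function names, risk labels, and sample number are hypothetical, not IntelePeer's actual API):

```python
from enum import Enum

class Risk(Enum):
    NO_RISK = "no_risk"
    MODERATE = "moderate"
    HIGH = "high"

def rate_caller(calling_number: str) -> Risk:
    """Stand-in for the anti-spam engine's API query; a real engine scores
    the number from its nationwide calling pattern."""
    known_spammers = {"+15550100"}        # illustrative data
    return Risk.HIGH if calling_number in known_spammers else Risk.NO_RISK

def route_call(calling_number: str, pressed_one: bool) -> str:
    risk = rate_caller(calling_number)
    if risk is Risk.NO_RISK:
        return "pass through to contact center"
    # Moderate/high risk: challenge the caller to prove they are human.
    if pressed_one:
        return "pass through to contact center"   # caller confirmed
    return "divert away from agents"              # robocallers never confirm

print(route_call("+15550100", pressed_one=False))  # divert away from agents
```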
The advantages of leveraging Inbound Spam Protection
Decreasing inbound spam is beneficial to the organization itself, the agents, and the customers. Inbound Spam Protection can help:
Streamline agent productivity: Eliminating robocalls reduces the time agents spend answering illegitimate calls. Instead, agents can increase their productivity by putting more time towards calls to prospects and customers.
Decrease caller queue time: Using Inbound Spam Protection means customers will spend less time waiting in call queues and decreases the risk of abandoned calls.
Improve contact center data: Robocalls skew metrics; reports can’t tell the difference between a real caller and robocaller, which means that call quantities are inflated, call abandonment rates are artificially high, and average call duration is low. Removing robocalls from the equation cleans up data, so that contact centers can rely on their metrics to make important decisions.
Are you ready to learn more about how IntelePeer’s Communication API platform and Inbound Spam Protection can turbocharge your contact center? Contact us today. | <urn:uuid:092c9c7e-9a9a-48ba-8555-26534500d1e9> | CC-MAIN-2022-40 | https://intelepeer.com/blog/contact-centers-how-to-mitigate-inbound-spam-and-robocalls/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00704.warc.gz | en | 0.932162 | 1,131 | 2.515625 | 3 |
Packet Tracer file (PT Version 7.1): https://bit.ly/2tfAOLp
Enhanced Interior Gateway Routing Protocol (EIGRP) is an advanced distance-vector routing protocol that is used on a computer network for automating routing decisions and configuration. The protocol was designed by Cisco Systems as a proprietary protocol, available only on Cisco routers. Partial functionality of EIGRP was converted to an open standard in 2013 and was published with informational status as RFC 7868 in 2016.
EIGRP is used on a router to share routes with other routers within the same autonomous system. Unlike other well known routing protocols, such as RIP, EIGRP only sends incremental updates, reducing the workload on the router and the amount of data that needs to be transmitted.
EIGRP replaced the Interior Gateway Routing Protocol (IGRP) in 1993. One of the major reasons for this was the change to classless IPv4 addresses in the Internet Protocol, which IGRP could not support.
Almost all routers contain a routing table that contains rules by which traffic is forwarded in a network. If the router does not contain a valid path to the destination, the traffic is discarded. EIGRP is a dynamic routing protocol by which routers automatically share route information. This eases the workload on a network administrator who does not have to configure changes to the routing table manually.
In addition to the routing table, EIGRP uses the following tables to store information:
Neighbor Table: The neighbor table keeps a record of the IP addresses of routers that have a direct physical connection with this router. Routers that are connected to this router indirectly, through another router, are not recorded in this table as they are not considered neighbors.
Topology Table: The topology table stores routes that it has learned from neighbor routing tables. Unlike a routing table, the topology table does not store all routes, but only routes that have been determined by EIGRP. The topology table also records the metrics for each of the listed EIGRP routes, the feasible successor and the successors. Routes in the topology table are marked as “passive” or “active”. Passive indicates that EIGRP has determined the path for the specific route and has finished processing. Active indicates that EIGRP is still trying to calculate the best path for the specific route. Routes in the topology table are not usable by the router until they are inserted into the routing table. The topology table is never used by the router to forward traffic. Routes in the topology table will not be inserted into the routing table if they are active, are a feasible successor, or have a higher administrative distance than an equivalent path.
Information in the topology table may be inserted into the router’s routing table and can then be used to forward traffic. If the network changes (for example, a physical link fails or is disconnected), the path will become unavailable. EIGRP is designed to detect these changes and will attempt to find a new path to the destination. The old path that is no longer available is removed from the routing table. Unlike most distance vector routing protocols, EIGRP does not transmit all the data in the router’s routing table when a change is made, but will only transmit the changes that have been made since the routing table was last updated. EIGRP does not send its routing table periodically, but will only send routing table data when an actual change has occurred. This behavior is more in line with link-state routing protocols, thus EIGRP is mostly considered a hybrid protocol.
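As a brief aside the video does not cover: with the default K-values, the composite metric EIGRP records in these tables reduces to a function of the slowest link bandwidth and the cumulative delay along the path. A hedged sketch of that default calculation (values illustrative):

```python
def eigrp_metric(min_bandwidth_kbps: int, total_delay_usec: int) -> int:
    """Classic EIGRP composite metric with default K-values (K1=K3=1, rest 0):
    metric = 256 * (10^7 / slowest-link-bandwidth-in-kbps + delay-sum / 10),
    where delay is expressed in tens of microseconds."""
    return 256 * (10**7 // min_bandwidth_kbps + total_delay_usec // 10)

# A two-hop GigabitEthernet path (1,000,000 kbps, 10 usec delay per hop):
print(eigrp_metric(1_000_000, 20))  # 3072
```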
When a router running EIGRP is connected to another router also running EIGRP, information is exchanged between the two routers. They form a relationship, known as an adjacency. The entire routing table is exchanged between both routers at this time. After the exchange has completed, only differential changes are sent.
EIGRP is often considered a hybrid protocol because it also sends link-state updates when link states change.
This is another packet tracer EIGRP troubleshooting lab.
In this lab, we’ve been told that router 1 is not able to ping router 4.
So as an example, looking at the console of router 1, router 1 is not able to ping the loopback of router 4. Router 1 is not able to ping the gigabit interface on router 4.
Now as per other troubleshooting labs, try and resolve the problems in this network; try and do that without using the show run command. So in other words, use other show commands to find the problem and then fix the network and verify that it works… | <urn:uuid:5ac067d3-23bb-4d89-b55c-30a75b2cdae2> | CC-MAIN-2022-40 | https://davidbombal.com/cisco-ccna-packet-tracer-ultimate-labs-eigrp-troubleshooting-lab-4-can-complete-lab/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00704.warc.gz | en | 0.940499 | 1,080 | 3.140625 | 3 |
Ndiff is a tool that can be used to compare two Nmap scan files and highlight any changes between them. In order to compare the scans, the Nmap output must be saved in text or XML format. Ndiff will point out the differences between them for easy comparison by using plus and minus signs.
Let's say that we want to compare two scans of a single host. We will use the -oX option and a filename.xml, which will save the Nmap output in an XML file.
As we can see, in the first scan the host has only two ports open, while in the second it has five. Now let's try to compare these two results with Ndiff. The comparison can be done very easily just by using the command ndiff [filename.xml filename2.xml]
The above image illustrates the differences between the two scans that we conducted on the same host. The plus sign (+) highlights what appears in the second file relative to the first, while the minus sign (-) indicates what appears in the first file relative to the second. Specifically, in the example above we can see that ports 135, 1111 and 3389 carry the plus sign, which means that in the second scan these ports were found open while in the first scan they were closed.
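If you want to act on those plus lines programmatically, a small script can drive Ndiff for you. This sketch assumes ndiff is on the PATH and the two XML files already exist; the text-output parsing is heuristic, not an official API:

```python
import subprocess

def newly_opened_ports(old_xml: str, new_xml: str) -> list[str]:
    """Run ndiff on two saved Nmap XML scans and return the '+' lines
    that mention newly open ports."""
    result = subprocess.run(["ndiff", old_xml, new_xml],
                            capture_output=True, text=True)
    return [line for line in result.stdout.splitlines()
            if line.startswith("+") and " open " in line]

for line in newly_opened_ports("scan1.xml", "scan2.xml"):
    print(line)   # e.g. "+3389/tcp open  ms-wbt-server"
```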
Alternatively, we can use the -v option (verbose mode), which will display the full output of the two XML files and highlight the differences with the plus and minus signs as before.
Ndiff also provides the ability to produce the results in XML output with the --xml option. This option is useful in cases where we want to import the information from Ndiff into a third-party tool that uses this format. | <urn:uuid:e6331e20-f9cc-4c2c-8b0e-a17a5cfb76de> | CC-MAIN-2022-40 | https://pentestlab.blog/2012/09/04/ndiff/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00704.warc.gz | en | 0.93848 | 350 | 3.640625 | 4 |
Regular Expressions (also known as "regex") are special strings representing a pattern to be matched in a search operation, and they can be particularly useful in mobile and computer forensics investigations.
One of the ways we allow investigators to find and focus on relevant evidence is by letting them customize and bring in a unique set of keywords, using substrings or regular expressions. ADF forensic tools also implement regular expression keywords in our trace captures and keyword lists. So why are regular expressions different from regular keywords?
Use Regular Expressions to Pinpoint Relevant Digital Evidence
Regular expressions are special strings representing a pattern to be matched in a search operation. The rules for matching text are expressed with metacharacters, quantifiers, or plain text; the string itself defines "what to match".
Using Regular Expressions to Search Numerical Patterns
There are many uses and possibilities for regular expressions. One use is for structured numerical strings, such as:
- Credit Card numbers
- Social Security numbers
- Telephone numbers
When a pattern is known, regular expressions can be used to look for that pattern. Knowing the pattern and how credit card numbers are displayed, a regular expression can be used to locate the many possibilities.
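For example, a keyword list might carry patterns like the following (illustrative Python regexes only; real-world formats vary, so production patterns would need tuning and validation):

```python
import re

# Illustrative patterns only; real formats vary by issuer, region, and layout.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone":       re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),  # 16-digit layout
}

text = "Call (555) 123-4567; card 4111-1111-1111-1111; SSN 078-05-1120."
for label, pattern in PATTERNS.items():
    print(label, pattern.findall(text))
```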
Using Regular Expressions to Search Word Patterns
The same is also true for Word Patterns, take for example looking for any files or documents with the LS moniker. There are many names under the LS Studio moniker including LS Model, LS Mag, LS-Magazine, LS girl, LS island, and more. Using a regular expression one string can replace numerous regular keywords.
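A single expression along these lines (an illustration, not ADF's shipped keyword) could cover all of those variants at once:

```python
import re

# One illustrative expression covering the moniker variants named above.
ls_pattern = re.compile(r"\bLS[-_ ]?(?:model|mag(?:azine)?|girl|island)s?\b",
                        re.IGNORECASE)

for name in ("LS Model", "ls-magazine", "LS_island", "landscape"):
    print(name, bool(ls_pattern.search(name)))
```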
Get help using regular expressions via the ADF Knowledge Base or the ADF User Guides, which include a RegEx cheat sheet; there are also many third-party YouTube tutorials to help you get started with regular expressions generally.
Mobile Device Investigator, Digital Evidence Investigator, Triage-Investigator, Triage-G2, and the ADF PRO products are built on the same intelligent forensic search engine and are designed with rapid scan capabilities. ADF tools focus on automation and ease of use for deployment to both field investigators and lab examiners.
Our focus is collecting forensic artifacts fast, which is why we've been both a pioneer and leader in forensic triage and automated investigations for 15+ years. This technology enables non-technical investigators to deploy triage tools that maintain chain of custody and stay within the search parameters defined by forensic examiners. In short, we enable our users to find and focus on the relevant evidence quickly.
Using regular expressions helps investigators speed fraud, ICAC and human trafficking investigations. | <urn:uuid:1ca4e17b-0fdd-432b-b8d0-551b63be563a> | CC-MAIN-2022-40 | https://www.adfsolutions.com/news/using-regular-expressions-to-speed-digital-forensic-investigations | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00704.warc.gz | en | 0.903606 | 542 | 3.015625 | 3 |
What is network speed?
Network speed refers to how fast data travels. Scenarios that are especially sensitive to network speed are time to load a web page, time to establish a new connection and time to download an app or stream a video.
Why does speed matter?
With access to multi-gigabit network speeds, the possibilities are endless. A faster, high-bandwidth connection will enable a new wave of emerging technologies, like virtual gaming, connected health monitoring, AI assistants, self-driving cars, holodecks and other seamless, visually rich experiences that will revolutionize the way we live, learn, work and play.
What problem are we solving?
According to Intel, the average number of connected devices for every person on Earth will reach 26 by 2022, contributing to a staggering double-digit growth of internet traffic overall.
A significant chunk of this traffic will come from high-resolution video used in bandwidth-sensitive applications. With 10G, we’re making sure that cable networks have more than enough speed and capacity to handle it, without overhauling the hybrid fiber coax infrastructure that’s already in place, keeping costs down and reliability high.
How will we get there?
Wired DOCSIS Technologies
DOCSIS® 3.1 technology, the current industry-wide standard that enables data transfer over cable, gives new meaning to the term “high-speed internet” with speeds that reach up to 10G downstream and 1G upstream. Don’t think you need gigabit speeds? Several specific uses that benefit dramatically from gigabit connectivity include the following:
- Online Gaming: One video game takes a mere three minutes to download
- Streaming Video: You can download 10 high-definition movies in just seven minutes.
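As a back-of-the-envelope check on those figures (my arithmetic, assuming roughly 5 GB per high-definition movie):

```python
def download_minutes(size_gigabytes: float, link_gbps: float) -> float:
    """Transfer time in minutes: gigabytes -> bits, divided by line rate."""
    bits = size_gigabytes * 8e9
    return bits / (link_gbps * 1e9) / 60

# Ten ~5 GB high-definition movies over a 1 Gbps link:
print(f"{download_minutes(10 * 5, 1):.1f} minutes")   # ~6.7 minutes
```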
But we’re not stopping there. To support applications with higher upstream traffic needs like video conferencing and interactive gaming, we’ve developed DOCSIS® 4.0 technology, an extension that allows symmetrical multi-gigabit data speeds in the downstream and upstream at the same time, over the same wire, effectively doubling network capacity. Efficient re-use of spectrum will eventually allow cable operators to add even more bandwidth in the near future.
Continuing advances in Point-to-Point Coherent Optics and Passive Optical Network (PON) technologies already enable multi-gig speeds in the fiber portion of the Hybrid Fiber Coax network. As with coax, we’re working on more efficient ways of using the miles of fiber we already have, which will allow us to grow network capacity to hundreds of terabits and beyond without spending more than we need to.
Wireless technologies play a key role in the future of connectivity. That’s why we’ve been dedicating a lot of effort to making sure that wireless speeds benefit from the advances in both the coax and fiber portion of the network. Wi-Fi Easy Mesh, Wi-Fi Passpoint and Wi-Fi Vantage technologies are all aimed at improving the quality and performance of wireless connections at home, work and even crowded public spaces where Wi-Fi use is growing. Not to mention Mobile Xhaul (formerly Mobile Backhaul), which enables the use of cable’s wired networks to maximize the performance and spectral efficiency of LTE and future 5G mobile networks. | <urn:uuid:bff4d698-08ec-4b72-89e6-a5ef31408460> | CC-MAIN-2022-40 | https://www.cablelabs.com/10g/speed | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00704.warc.gz | en | 0.904353 | 695 | 2.625 | 3 |
Conquer the Basics of Networking with Keith Bogart's New Course
Introduction to Networking Technologies lays the foundation for anyone wanting to take a first step into the world of computer networking. So, you've heard there are jobs available for those familiar with computer networking and this has sparked your interest. Look no further: this course is for you!
INE instructor Keith Bogart starts by answering the big question: "What is a computer network?" From there, students will learn a wide variety of terms and definitions. Keith will go through the common networking components (such as routers, switches, and firewalls) and explain what they are and what they do. Students will learn how to count in binary and hexadecimal (which are standard skills for any aspiring Network Engineer).
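To give a flavor of that skill (a tiny illustration, not taken from the course materials):

```python
# Converting between decimal, binary, and hexadecimal -- the bread and
# butter of subnetting and MAC/IPv6 address work.
octet = 192
print(bin(octet))          # 0b11000000
print(hex(octet))          # 0xc0
print(int("11000000", 2))  # 192
print(int("c0", 16))       # 192
```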
This course also covers Power Over Ethernet (PoE) and how this technology is helping to push networking devices into places they've never been able to go before!
If you're looking for an introduction to networking technologies, you've come to the right place!
Use your All Access Pass to get started on your Networking journey today! | <urn:uuid:e83b77de-c26b-4103-8e0c-194058004cf1> | CC-MAIN-2022-40 | https://ine.com/blog/conquer-the-basics-of-networking-with-keith-bogarts-new-course | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00104.warc.gz | en | 0.936608 | 263 | 3.109375 | 3 |
What is Ransomware?
Ransomware is a form of malware that pushes organizations to the edge by directly monetizing the threat. After all, most crime is directly attributable to some monetary gain.
WannaCry, SambaCry, CryptoLocker, Petya and Locky are some of the more common names of ransomware that have become mainstream news, and even teams outside of security are very painfully aware of the threats.
While there is no shortage of blogs, articles, and vendor solutions outlining best practices to mitigate the threats of ransomware or to block the threat all together, there is truly no magic bullet. If there was, wouldn’t we all already own it and the manufacturer be the most popular vendor?
The fact is, each best practice helps for some ransomware, and other technology, processes, and education with others. It is a family and attack vector story. Each one is different and the defense for each requires adaptation.
Therefore, consider these five recommendations that cover all of the families of ransomware (to date). If you can do these five well, you can mitigate the vast majority of risk from these modern threats:
1. Education, Training and Measurement
The average user may not be able to tell the difference between a regular email, phishing, or spear phishing attack. They do, however, understand that if you click on the wrong thing, you may lose all your work and files or infect your computer.
If you can translate the threat of ransomware into terms the average user can remember, then the human element of social engineering can have some definable mitigation strategy.
The vast majority of ransomware comes via phishing attacks and the training needs to cover the threat, identification of phishing emails, and the hard lesson of what to click on and when not to open a file.
A simple phone call can verify if the email is legitimate and we need to instruct team members how to verify the source before continuing. It is not hard to do, just like looking both ways before crossing the street, but we need to teach all users about safe computing practices. And, for most organizations, penetration testing with phishing samples is recommended to measure the success of your training initiatives.
2. Secure and Verifiable Backups
The worst case scenario is you do become infected with ransomware. If you follow law enforcement's recommendations, you should not pay the fine.
So how do you recover? Secure Backups. While this recommendation is not preventative, it is the only one that can help you when all else fails.
All data should be backed up and, most importantly, secured such that a ransomware infection cannot compromise the backup via mapped drives or network shares. The backup should also be tested on a periodic basis to ensure it can restore all files in an uninfected state.
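As one hedged illustration of what "verifiable" can mean in practice, a periodic job might compare cryptographic hashes of source files against their backup copies (a sketch only; paths, exclusions, and scheduling are left to the reader):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list[str]:
    """Return relative paths whose backup copy is missing or differs from
    the source. Run periodically from a host with read-only backup access."""
    mismatches = []
    src = Path(source_dir)
    for f in src.rglob("*"):
        if f.is_file():
            rel = f.relative_to(src)
            copy = Path(backup_dir) / rel
            if not copy.is_file() or sha256_of(copy) != sha256_of(f):
                mismatches.append(str(rel))
    return mismatches
```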
A common mistake for organizations however, is to attempt a restoration before the ransomware infestation is cleared. While some anti-virus solutions can remove the malware, I always recommend rebuilding or re-imaging the host(s).
There is always a chance the threat was more sophisticated then the endpoint security solution can detect and resolve, and that a persistent threat may be present for a future attack. A complete reload is the only way to be moderately sure that the issue has been resolved.
If the infection is bad enough and found its way to a domain controller, you should strongly consider reloading the entire environment. It is the only way to be sure.
3. Secure Macros
Some of the newer ransomware is taking cues from older malware that leverages Microsoft Office and other application macros. This one isn’t easy to resolve, because many of our spreadsheets and documents depend on macros to satisfy business and functional requirements.
For example, a recent addition to the long list of ransomware, “PowerWare”, comes in typically through a phishing email and contains an infected Word attachment. The document contains a malicious macro, which then calls a PowerShell script, which carries out the payload.
This email is scary because Word and PowerShell are very common and approved applications at almost every organization. Therefore, they represent a trusted attack vector for ransomware.
Newer versions of Microsoft Office do contain a setting to drastically reduce the possibility of this happening, however. The setting, ‘Disable all macros except digitally signed macros’, found within the Trust Center settings, will do just that: prevent any macro without a certificate from a valid certificate authority from executing. This provides secure granularity for enabling macros versus the blanket ‘Disable all macros’ setting.
Unfortunately, you may not be able to enable this setting since not all macros your business requires may be signed, or otherwise the certificate for them may be expired.
Wherever possible, insist any vendor that provides software containing macros sign them and establish a process internally to sign macros so this setting can be properly enabled for everyone.
4. Patch and Update Frequently
As if the thought of an angler fish is frightening enough, an exploit kit sharing the same name targets older versions of Flash and Silverlight. According to the Verizon Data Breach Report, 99% of attacks target known vulnerabilities. Even though this specific vulnerability has been patched, many organizations do not patch third party applications regularly — let alone the operating system itself (think WannaCry).
The payload is another version of ransomware. Maintaining software to their most recent versions is nothing new, but we continue to see outdated, and sometimes years outdated, software in production environments. It is important to have a regular schedule to assess your environment for outdated or vulnerable software, and have a tested process to remediate any findings.
These are security basics and if your organization is not doing it well, it is an easy to solve problem and see some tangible threat reduction results. This includes keeping endpoint protection technology and local anti-virus up to date as well.
Businesses still rely on this for a first line of defense when education fails and the ransomware has been identified (and prevented) before the infection. Basically, if it can be updated to a more secure version, it should be, and as frequently as technically and business friendly way as possible.
5. Remove Administrator Rights
Ransomware spreads by leveraging the user’s privileges to infect files that are within scope. If the user only has standard user rights, the only files visible are the ones they may have local or via a network share.
While the scope of this may be large, it can be much worse if the user actually has administrator privileges. Then, potentially every file visible to an administrator is in scope and therefore the entire environment is potentially susceptible to an infection. This assumes however that the ransomware can execute as a standard user.
The fact of the matter is that most ransomware requires administrator privileges just to launch. Macro-based ransomware is one notable exception in addition to ransomware that leverages vulnerabilities like WannaCry.
If you reduce a user's privilege to standard user, ransomware that tries to install a persistent presence is generally thwarted because it does not have the privileges to install files, drivers, or even access the registry unless it leverages an exploit to escalate privileges. This is a sound mitigation strategy for the vast majority of malware that needs to own a system in order to begin infecting files.
If this strategy is bundled with application control and least-privilege technology, only a few forms of ransomware (such as WannaCry or macro-based variants) cannot be prevented. This shows that successfully preventing a ransomware attack requires a blended approach, from the removal of administrative rights to handling the edge cases that leverage social engineering, macros, and vulnerabilities and their corresponding exploits.
As you can see from the recommendations, the onus is on every organization and security professional to take the necessary steps to prevent ransomware and other malicious software from threatening the network. There is no magic button, no simple tool, nor any one strategy that can stop this modern threat. If you can follow these five basic security recommendations, your organization can greatly minimize the threat.
For more information on how BeyondTrust solutions can help prevent ransomware, request a personalized demo.
Editor's note: this article was originally posted on SecureWorld.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition where he served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook. | <urn:uuid:32df878f-a3a1-446e-a2aa-79b025af1cd8> | CC-MAIN-2022-40 | https://www.beyondtrust.com/blog/entry/ransomware-5-prevention-strategies | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00104.warc.gz | en | 0.933733 | 1,882 | 2.75 | 3 |
Continuous authentication is a method for authenticating users and granting access to corporate resources. It’s based on the level of risk and contextual information about the user, such as role, location, and type of device.
Unlike traditional authentication methods, this mechanism is enforced from login through the end of the user session.
Continuous authentication works by assessing user behavior patterns on an ongoing basis. Unlike traditional authentication, which evaluates users just once at login, continuous authentication considers changing risk factors such as location, device posture, and other behavioral data.
Continuous authentication provides security for hybrid workforces by allowing authentication to the corporate network while restricting access if suspicious activity is detected.
Authentication methods can be categorized according to different factors, commonly summarized as something you know (a password), something you have (a token or device), and something you are (a biometric).
Continuous user authentication goes beyond traditional authentication methods to take security to the next level. Authentication scores are continuously assessed based on factors like device posture and location, which help indicate when suspicious activity or attempts at unauthorized access are taking place. If the authentication score doesn’t show a sufficient confidence level, the system requests another type of authentication. You can set different confidence scores according to the type of action or resource involved.
Adaptive authentication allows scanning of end user devices both before and throughout a user session to corporate applications. Based on location, device posture assessment, or user risk score, an admin can define how a user is authenticated and authorized to access their apps. With adaptive authentication, these risk factors are evaluated continuously so admins can enforce (and adapt) policies as needed.
Risk-based authentication uses AI to gain a real-time view of the context of any login. The solution responds to a user's request for access by analyzing factors such as the type of device, location, the network used, time of log-in, and the sensitivity of the requested resources to make a risk assessment for the user—a risk score. If the request doesn’t meet the requirement, the system will ask for more information. The additional information may include a temporary code, a security question, biometric data, or codes sent to a smartphone.
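A deliberately simplified sketch of such a scoring decision follows (the signals, weights, and thresholds here are invented for illustration; real engines weigh far more factors, often with machine learning):

```python
def risk_score(known_device: bool, usual_location: bool,
               trusted_network: bool, sensitive_resource: bool) -> int:
    """Toy additive risk score built from a few contextual signals."""
    score = 0
    score += 0 if known_device else 30
    score += 0 if usual_location else 20
    score += 0 if trusted_network else 20
    score += 30 if sensitive_resource else 0
    return score

def required_step(score: int) -> str:
    if score < 30:
        return "allow"
    if score < 60:
        return "step-up: one-time code or security question"
    return "deny or require strong proof (e.g. biometric)"

s = risk_score(known_device=False, usual_location=True,
               trusted_network=False, sensitive_resource=True)
print(s, "->", required_step(s))   # 80 -> deny or require strong proof
```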
What are the use cases where organizations may need continuous authentication? Here are some potential areas continuous authentication can help address:
Attack vectors in hybrid and remote workforces: Hybrid and remote work environments have increased cybersecurity risks. Whether users bring their own devices (BYOD) or use work laptops, it can be challenging for IT departments to maintain the security of unmanaged devices. Poorly protected networks can allow attackers to access the system, causing data leaks and intrusions. Employees using unsecured or weak Wi-Fi networks can be infected by malware or botnets.
Continuous authentication solutions prevent unauthorized users from accessing the system by detecting access requests from non-secure networks or devices.
Insecure passwords for remote employees’ accounts: Allowing employees to choose passwords for work-related accounts may seem convenient, but it can also create vulnerabilities. Allowing employees to use inadequate passwords, recycled passwords, or passwords shared with coworkers is a risky practice common in organizations. With most data breaches leveraging compromised credentials, securing passwords is critical.
By analyzing the entire context surrounding the user access request—and not only validating the password—continuous authentication solutions provide a more secure alternative.
User experience and productivity: A good user experience increases productivity and improves workflows. However, when users need to log in again every time they switch applications, the result is often lowered productivity. With continuous authentication, employees can log in once and gain access to all their normal applications and resources. Behavioral continuous authentication not only discourages attackers, imposters, and bots, but it does so without affecting the user experience.
Fraud prevention: In several industries, such as finance, continuous authentication is used to prevent fraud. The system collects data from different components of a customer's session or interaction with mobile devices, such as swipe patterns or keystrokes. This information is used to develop a user profile. When the system finds a deviation from this pattern, it alerts or requests further user identity verification.
Continuous authentication enables the profiles to work with the bank’s risk solution. This integration helps determine the most accurate risk score to detect fraud. The advantage of continuous risk-based authentication is that it allows security teams to match the risk to the transaction requested. When combined, the authentication system and anti-fraud technology can expand the security coverage over a more extensive attack surface.
Citrix provides leading solutions based on the zero trust approach. With Citrix Secure Private Access, organizations can provide secure access to applications—without compromising productivity. This solution delivers adaptive authentication and SSO so your hybrid workforce can securely access applications.
At its heart, predicting the supply of water is a math problem—an extremely complicated math problem.
Algorithms are created to gauge the effects of melting snow, precipitation, daily temperatures and the flow of rivers and streams. The payoff for all these dry formulas is a set of predictive models that can help municipalities ensure that folks will have enough of the wet stuff to drink, bathe, wash their cars and water their lawns.
Lording over these complicated calculations is the responsibility of the Department of Agriculture's Natural Resources Conservation Service. NRCS, in Portland, Ore., issues monthly water-supply forecasts for more than 750 places in the western United States. The compute-intensive job requires some 250 developers located across the country, according to Frank Geter, national modeling specialist at NRCS. It also, Geter points out, requires “a robust, collaborative software development system.”
Research Scientist Olaf David, from Colorado State University, has been working with the USDA's Agricultural Research Service (ARS) in Fort Collins, Colo., for the past two years to develop such a system to be used by NRCS and a host of other USDA agencies. Early in the process, it became clear that the entire USDA research community needed decentralized, network-dispersed, team-oriented code development and project management.
As it stood, ARS had more than 100 models for a variety of purposes that had been developed on a case-by-case basis, using whatever technology was available at the time. That disjointed system was proving difficult and expensive to maintain, and efforts to edit and improve the models were complicated. David insisted on a new object-modeling system that supported Web-based team communication, issue tracking, user access permissions, and collaborative code and document management, while still supporting USDA's existing version control systems. And it had to be easy to install and maintain. David and his team chose a platform consisting of open-source development tools from Sun Microsystems, Intland Software and CollabNet.
“We standardized on [Sun's] NetBeans-based modeling platform,” David said. “We chose NetBeans, [Intland's] CodeBeamer and Subversion [from CollabNet] because it's the only integrated collaboration solution that could handle development and deployment of simulation models in order to support our modeling projects.”
In 2004, in cooperation with the Environmental Protection Agency and the U.S. Geological Survey, USDA began prototyping the new system, which went into production in 2005. According to David, the resulting platform enables the kind of collaboration critical at ARS, which often has up to 300 simultaneous software development projects in the works. Through the integrated infrastructure, NetBeans developers are able to collaborate, share knowledge and work effectively as a team from different research locations.
“It's important to note that it's not just developers working on code,” David said. “We also have data analysts, others doing geospatial processing, fixing the parameters of the models based on data sets such as elevation and vegetation layers. And some are pure scientists that have no clue about coding. They just want to get the process right. The system has to support them all.”
David's team supports all those efforts from USDA's development and data center hub known as the Collaborative Software Development Laboratory, or CoLab, in Fort Collins. “Through access to CoLab, people can now develop, and we collaborate with them by providing documentation, testing source code and delivering the source code in a way that lets them access and download from wherever they are.”
CoLab currently supports more than 700 registered users from 15 different institutions accessing a 20GB code repository and some 4GB of documents. Within CoLab, users can store documentation, browse and search source code, and set up automatic builds to compile, test and run models. Since CoLab went live, users have seen significant decreases in project delivery times, better project management and an overall improvement in the quality of source code, David said. The modularity and reusability of code makes the NetBeans platform particularly well-suited to the development work being done by USDA and its agencies, said Jeet Kaul, vice president of developer products and programs at Sun, in Santa Clara, Calif.
“The platform provides the services common to desktop applications, such as window and menu management, storage, and so forth,” Kaul said. “These tedious aspects of writing an application now come for free. The result is that developers get to concentrate on the business logic that implements what the application is actually supposed to do, rather than spend the time rewriting what almost every application needs.”
With the Java and NetBeans development environment established at CoLab, David said he can focus his attention on specific projects, such as the water-supply forecasting models at the NRCS resource center. Researchers and analysts at NRCS are working to port a valuable, if dated, Fortran-based precipitation runoff model to aid the agency's short-term water analysis and forecasting. NetBeans is proving valuable in building the foundation for an updated modeling platform for NRCS, according to David.
“The older model was written in the 1980s sometime,” David said of the compute-intensive forecasting tool. “The challenge was to bring it into Java, then run it in a modeling framework that allows you to be flexible with different approaches. The framework we settled on is based on Java and NetBeans. The whole objective for us was to take this model and have it available in Java and run it in batch mode across a number of geographic areas.”
Part of the project's charter was to make the developers' work future-proof as well. “Because the NetBeans platform is based on standards and reusable components, pure Java applications based on it will work on any platform that supports Java 2 Standard Edition, including Windows, Linux, Mac OS X, Solaris, HP-UX, OpenVMS, OS/2 and others,” said Sun's Kaul. “The world of software is constantly changing. One way to be sure that what you write today will stand the test of time is to use standards and write cross-platform applications.
“That is what the NetBeans platform is for. Applications based on the NetBeans platform do not require proprietary binary libraries or components—just a Java 2 run-time environment on the target platform,” Kaul said. | <urn:uuid:4e95b55b-54cc-4a71-9c5c-f82ba068b791> | CC-MAIN-2022-40 | https://www.eweek.com/servers/usda-keeps-up-with-the-flow/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00104.warc.gz | en | 0.947613 | 1,347 | 3.03125 | 3 |
IoT Trustworthiness States
Confidence that an IoT system will operate in conformance with requirements results from assurance that several characteristics of the system are compliant with these requirements despite environmental disturbances, human errors, system faults and attacks. These characteristics – security, safety, reliability, resilience and privacy – have been identified by ISO/IEC (JTC 1/SC 41), the National Institute of Standards and Technology (NIST) and the Industrial Internet Consortium (IIC) (Industrial Internet Security Framework (IISF), Section 3) as defining the trustworthiness of a system. These characteristics manifest themselves in operational, organizational, commercial, budgetary, architectural and security areas.
An IoT system is trustworthy if it meets the minimum requirements for security, safety, reliability, resilience and privacy, as defined by laws, regulations, standards and industry best-practices. The OSHA 29 CFR 1910 is an example of such regulation.
In general, trustworthiness has three milestone states:
The first, the Current State, is the actual “trustworthiness” status of an IoT system, based on the system as it is currently designed, implemented and operating:
- Current state of safety processes
- Current levels of reliability and resilience
- Current state of data protection and security, as well as data privacy controls
The Current State evolves over time as the methods and processes put in place to address the trustworthiness requirements take effect and as factors such as system and human errors, lapses, cyberattacks, malicious activities and external influences begin to negatively impact the level of trustworthiness of the system.
The second, the Minimum State, is a non-negotiable level of trustworthiness mandated by external authorities and parties, for example legal, regulatory and standards bodies, as well as industry best practices.
- To determine the Minimum State level, it will be important to assess applicable laws, regulations, best practices and standards, and evaluate their impact
- In situations where these requirements may conflict with each other, the organization’s Risk Management and Legal teams may need to be involved to provide opinions and guidance regarding the course of action.
The IIC Industrial Internet Security Framework discusses some of the legal and regulatory requirements as they relate to Information Technology (IT) and Operational Technology (OT). Another example is the OSHA 29 CFR 1910 which covers occupational safety and health standards.
In addition to the above, requirements can have jurisdictional implications and, in some cases, impose actual jurisdictional boundaries (data residency). In these cases, the methods and processes implemented to ensure the trustworthiness of the IoT system must have jurisdictional variations.
The EU General Data Protection Regulation (GDPR) data privacy law came into effect on May 25th, 2018. It applies to Personal Data created and consumed within the EU jurisdictions as well as Personal Data belonging to EU residents anywhere in the world. The law imposes a wide range of restrictions on organizations (Data Controllers and Data Processors) that handle personal data. Personal data may be produced and consumed by an IoT system. Therefore the IoT Trustworthiness calculus must take into account the restrictions imposed by this law.
Other privacy law examples that apply within specific jurisdictions include the California Consumer Privacy Act of 2018 (CCPA) and the Personal Information Protection and Electronic Documents Act in Canada.
The third state represents trustworthiness levels that exceed the Minimum State requirements, based on additional internally defined and self-imposed drivers and objectives, both business and technical.
There are several options for developing a message broker with full duplex functionalities and various supporting features. Some of these options are the use of a raw TCP socket, a raw UDP socket, AMQP and CoAP. Most of these alternatives have more limitations and complications than benefits, especially when compared to MQTT. This is where MQTT becomes the most appropriate, efficient and convenient choice, especially when building our own Internet of Things platform.
Remember that all of these protocols can coexist, and we could deploy them on the same cloud instance, if necessary. This means that, in the future, if you decide to use AMQP as well as MQTT, it’s possible to integrate some or all of them. More importantly, we can link these channels with an additional plugin program so that there’s a seamless communication from an application’s and device’s perspective.
Using an MQTT Broker
Fundamentally, MQTT is an asynchronous publish/subscribe protocol that enables duplex communication while placing only a light burden on systems. It allows systems to run on low bandwidth and low power. Conversely, HTTP and similar protocols require relatively high bandwidth and power and are request-response in nature, which means that the client must always initiate communication.
In places where you want either party (server or client) to initiate communication, MQTT is the best choice. Moreover, if systems need to run on low data consumption, especially on batteries, for a long period, it’s prudent to use MQTT. If the device needs to send or receive data frequently and at random, then MQTT also makes sense because it reduces a significant HTTP overhead.
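To make the publish/subscribe pattern concrete, here is a minimal sketch using the open-source paho-mqtt client library. The broker hostname and topic are placeholders rather than anything prescribed by the protocol, and paho-mqtt 2.x additionally requires a callback-API-version argument when constructing the client.

```python
# Minimal MQTT publish/subscribe sketch using paho-mqtt
# (pip install paho-mqtt). Broker address and topic are placeholders.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # hypothetical broker address
TOPIC = "sensors/temperature"


def on_message(client, userdata, message):
    # Called asynchronously whenever the broker pushes a message to us --
    # no polling and no per-request HTTP overhead.
    print(f"{message.topic}: {message.payload.decode()}")


client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)   # 1883 is the default MQTT port
client.subscribe(TOPIC)
client.publish(TOPIC, "21.5")  # either side can initiate traffic at any time
client.loop_forever()          # process network events until stopped
```

Note the final loop_forever() call: the client keeps a persistent connection open, which is exactly what lets the server side push data to the device without the device having to poll.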
If bandwidth and power are not a concern, then HTTP may be a better choice. It can also be preferable when data is sent or received only infrequently, since holding an always-on connection open for rare messages ties up resources unnecessarily.
Why and When to Use MQTT For IoT Messaging
In an application where live control or monitoring is required, MQTT is an obvious choice because it provides duplex, two-way communication abilities with the least amount of overhead.
You must be mindful of the fact that the workload of an MQTT-based system can grow quadratically: for each device added to an MQTT-speaking network of n devices in total, the load on the system becomes n² (n*n). For example, let's assume an extreme scenario where there are two clients, each of which subscribes to all possible topics. When a client publishes a message, the broker needs to receive the message and the other client needs to receive it too, so one message sent results in two transmissions. The same goes for the other client, making it four messages in total for a two-client system.
For a three-client system, this number becomes nine messages in total (i.e., three messages per client). Simply having 10 devices connected means that the message broker should be capable of handling 10*10 = 100 messages, and so on.
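A few lines of Python make the arithmetic concrete:

```python
# Worst-case broker load when all n clients subscribe to every topic and
# each client publishes one message: every publish costs one inbound
# message to the broker plus one delivery to each of the other n-1
# subscribers, i.e. n transmissions, so n publishes cost n * n in total.
def worst_case_messages(n_clients: int) -> int:
    return n_clients * n_clients

for n in (2, 3, 10, 100):
    print(n, "clients ->", worst_case_messages(n), "messages")
# 2 -> 4, 3 -> 9, 10 -> 100, 100 -> 10000
```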
When the number of MQTT clients starts to grow, the load on the message broker and the overall system and platform grows quadratically, as shown above.
Always keep this in mind as you scale any IoT platform that’s based on MQTT in the later stages or add more devices to it.
Written by Anand Tamboli and based upon his book, Build Your Own IoT Platform. | <urn:uuid:bc771014-d3f8-4404-b3ce-bb7606e70d52> | CC-MAIN-2022-40 | https://www.iotforall.com/mqtt-broker-iot-scalability | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00104.warc.gz | en | 0.927194 | 746 | 2.65625 | 3 |
Unusual Clandestine Data Centres
Everyone knows what the cloud is, but does everybody know where the cloud is? We try to answer that as we look at some of the most unusual data centre locations in the world.
Under the Eyes of a Deity
Deep beneath the famous Uspenski Cathedral in the heart of Helsinki lies a converted World War II bomb shelter, which sees an unlikely fusion of data storage and green technology.
Estimates suggest that in a typical data centre only 40 percent of energy use is for computing, with the remainder being used for cooling down the servers. The problem is so serious that data centres account for as much as 30 percent of a corporation’s energy bills and 1 percent of energy usage worldwide.
Finnish IT company Academica designed the 2MW underground data centre to address this problem. Rather than relying on conventional powered cooling, it pumps seawater from the nearby Baltic Sea to cool the servers. As the servers are cooled the water is heated, and this heated water is then used to provide warmth for 500 local homes before being recycled back into the system.
The technology itself is not new, but there are no other projects in the world that operate on this scale. The centre now saves Academica a remarkable $235,000 a year in energy costs and is prompting other large data centre providers to follow in their footsteps, with Google now also operating two centres that run on recycled water.
In a Disused Coal Mine
While Academica's $235,000-per-year saving is impressive, it pales into insignificance when compared with a $3,000,000-per-year saving by Sun Microsystems.
The former cloud-computing giant lowered 10,000 of its self-contained Blackbox data centres into a 100 metre deep coal mine located in the Chubu region of Japan’s Honshu Island.
With groundwater used as the coolant and a constant site temperature of 15 degrees Celsius, no air-conditioning is needed outside the containers. This significantly reduces the energy required when compared with the surface-level Blackbox containers.
At the time, Sun stated that added benefits include security against unauthorised entry and terrorist attacks, and that it had designed all the units to be capable of withstanding a 6.7-magnitude earthquake.
An Independent World War II Sea Fort
Few locations on earth have a story quite as unique as that of Sealand. Just six miles off the coast of England, the self-declared sovereign state has seen a hostile takeover, a rebel government, and a hostage crisis since coming into existence in September 1967.
The micronation is now home to HavenCo, a self-styled data haven that has no copyright or intellectual property on data that it hosts.
HavenCo was founded in the year 2000, but ceased operating between 2008 and 2012 as a result of a legal dispute over project financing. Since its rebirth, the company offers proxies and VPNs using European and American servers, whilst storing hard data on servers on Sealand itself. The only restrictions on hosted data are child pornography, spamming and malicious hacking.
(Infographic Source: http://www.whoishostingthis.com)
By Daniel Price
Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former Financial Consultant, he now balances his time between writing articles for several industry-leading tech (CloudTweaks.com & MakeUseOf.com), sports, and travel sites and looking after his three dogs. | <urn:uuid:0ef60df6-e803-4ee6-bfa7-6a3f33d542b0> | CC-MAIN-2022-40 | https://cloudtweaks.com/2014/02/unusual-clandestine-cloud-data-centre-service-locations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00104.warc.gz | en | 0.94636 | 743 | 2.609375 | 3 |
With plans for between 400 and 500 turbines, the A$2 billion (US$1.8 billion) project in Australia’s New South Wales (NSW) state could deliver 1,000 megawatts of clean renewable energy, providing enough power for about 400,000 homes and roughly 4.5 percent of the region’s electricity consumption in a typical year, the company said. That, in turn, could reduce Australia’s greenhouse gas emissions by 3 to 3.5 million tons of carbon dioxide each year, it added.
If built today, the project would be the largest in the world, Epuron said. Construction is due to begin in stages in late 2009, beginning with 150 turbines in the southern end of the Mundi Mundi and Robe Ranges in western NSW. Epuron will soon embark on community consultations and government approval processes, it said.
“Our wind-monitoring equipment north of Silverton in western NSW has revealed an amazing wind resource,” said Martin Poole, executive director of Epuron. “This project will provide jobs and dollars for regional economies and play a significant role in addressing Australia’s climate change challenge.”
Epuron has a wind-monitoring network across NSW on sites identified for their wind development potential. “NSW has an excellent resource on a worldwide basis, competitive with wind farms in better known wind regions like Tasmania and South Australia,” Poole said.
The project is likely to be the first in Australia to use wind energy production forecasting and actively bid into the National Electricity Market, Epuron said.
Wind power has grown rapidly over the past decade, and is now a significant contributor in the power sector in an increasing number of countries, according to the Global Wind Energy Council. By 2020, wind energy could supply as much as 16 percent of the world’s electricity supply, saving 1.5 billion tons of carbon dioxide per year, it said.
“Wind is great because it means zero greenhouse gas emissions,” Tim Ballo, an associate attorney with Earthjustice, told TechNewsWorld. “No matter what we do in terms of reducing demand for electricity, there’s always going to be a certain level of demand that has to be met, and wind is a great way to do it.”
Wind is also the only energy source that does not rely on water, Epuron said — a key feature in times of drought.
The key issue in selecting sites for such wind farms is to find areas with “lots of wind and not many birds,” Ballo added. “From what I know of the Outback, it seems like that would be a good place.”
Larger and Larger
The planned size of this wind farm, though certainly very large, is “not even shocking,” Stefan Noe, president of Midwest Wind Energy, added.
“It probably would be the world’s largest wind farm, but these kinds of projects keep getting larger, even in the United States,” Noe told TechNewsWorld. Midwest Wind has a project planned just within Illinois that would amount to roughly half the size of Epuron’s planned NSW farm, he added. “The industry is moving toward larger-scale projects,” he said, since those allow companies to reap economies of scale. “It seems completely feasible,” he said.
In addition to the obvious environmental benefits, wind farms offer a number of economic advantages as well, Noe said. “These projects can provide enormous economic development benefits to rural communities, with steady lease payments to landowners, new construction jobs, and addition to the property tax base,” he added.
Epuron’s NSW wind farm is expected to create between 50 and 100 new jobs in the operation and maintenance of the facility, the company said. | <urn:uuid:2405d394-e8a1-46be-b59d-044004e1ef5d> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/aussies-to-plant-1-8b-wind-farm-in-outback-59713.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00104.warc.gz | en | 0.95763 | 813 | 3.09375 | 3 |
It is tempting to believe that data, and the management of its quality, is something new, brought about by the advent of new regulations such as E-Privacy and the EU GDPR. It is not. Data, its management, and its quality have been around since information was first created: when we started writing things down.
Data Quality Definition
Going further, we can describe data quality as a process: making data operational and enabling individuals and organizations to draw insights from the data that inform their decision-making.
The reason we describe DQ as a process rather than a single item is that it comprises various elements that all contribute to the purpose of making data “fit for purpose”. Sometimes people use the term Data Preparation to refer to these elements, though data prep should be considered separate for now.
What are the dimensions of data quality?
Sitting underneath the umbrella term of Data Management, DQ takes a holistic view of an entire dataset, combining these elements – often called the dimensions of Data Quality – to provide a snapshot of the quality of data held.
Completeness
Are there gaps in the data and if so, where? Some gaps are worse than others, and what is considered a gap depends on the process where the data is used. For example, if the billing department requires both phone number and email address, then no record missing one or the other can be considered complete. You can also measure completeness for any particular column. Profiling your data will uncover these gaps.
Validity
Are the postcode records you hold in a valid format? How confident are you that the email and postal address records you hold in your database are deliverable? Validity checks verify that the data conforms to a particular format, data type, and range of values.
Since data-driven automation is so important nowadays, data has to be valid to be accepted by processes and systems that expect it.
Timeliness
Is new information entering your CRM every day in real time, or are you manually importing it? How often is the data "refreshed"? Timeliness is a crucial dimension because of the increasing need for up-to-date data.
Similar to other dimensions, timeliness is user-defined. One kind of data needs to be available on a quarterly basis for financial reporting. Other data must not be older than 5 minutes for real-time analytics.
Uniqueness
Do you have the same customer recorded twice in your data set or data catalog? Uniqueness measures how much duplicate data there is in a given data set, either within any particular column or as whole records. For example, in the orders table, each order should have just one row. If, on the other hand, you encounter two records with the same order id, you have a duplicate. How did it get there? Someone could have mistyped the order number. This brings us to the next dimension: accuracy.
Accuracy
Perhaps the most important dimension, accuracy refers to the number of errors in the data. In other words, it measures to what extent recorded data represents the truth. Accuracy is tricky because data might be valid, timely, unique, and complete, but still inaccurate.
100% accuracy is an aspirational goal for many data managers, and once achieved, the principles of data governance can be combined with DQ to ensure the data does not degrade and become inaccurate ever again.
Consistency
Do you have conflicting information about the same customer in two different systems? That means the data is inconsistent, which might lead to inconsistent reporting and poor customer service.
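As a sketch of how some of these dimensions can be measured in practice, the following uses pandas on a toy table. The column names and the five-digit ZIP rule are assumptions made for the example, not part of any standard.

```python
# Measuring a few data quality dimensions on a toy customer table with
# pandas. Column names and the validity rule are invented for the example.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1001, 1002, 1002, 1004],             # 1002 appears twice
    "email": ["a@x.com", None, "b@x.com", None],      # two gaps
    "zip_code": ["90210", "1234", "30301", "60601"],  # one invalid format
})

# Completeness: share of non-missing values per column.
completeness = df.notna().mean()

# Validity: share of zip codes matching the expected five-digit format.
validity = df["zip_code"].str.fullmatch(r"\d{5}").mean()

# Uniqueness: share of order_id values that are not duplicates.
uniqueness = 1 - df["order_id"].duplicated().mean()

print(completeness, validity, uniqueness, sep="\n")
```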
The Importance of Data Quality and its Value
Of course, everyone wants to know "why is data quality important?" However, we believe an even more important dimension to data needs to be discussed here: value.
Our definition of data quality's value is this: what are the business, risk, and financial values assigned to any piece of information? In this manner, data analysts and other practitioners of data management can quickly assign priorities to different data sources or specific data domains when they do data quality projects.
We recommend using a tool to assign literal values to your data such as:
Business - how valuable is, for instance, Employee salary data to marketing? Chances are, it has a much higher business value to the HR department, whereas customer emails are more useful for marketing.
Risk - are you holding Personally Identifiable Information (PII)? This means you could be exposed to the risk of GDPR fines if this data is not accurately protected to ensure the individual’s privacy.
Financial - eCommerce companies are the best example of the financial value of data: typically email address and credit card numbers are all that is needed in order to transact with customers and therefore profiling the data, keeping it of high quality, and reporting it over time can help eCommerce businesses understand the average value of customers and accurate email addresses.
As you can see from these examples, Data Quality tools can quickly become mission-critical for your business, depending on the quality of the data you hold that you need to perform day-to-day operations. So, why is data quality important? Because it adds value.
What are the business costs and risks of poor data quality?
Data quality maturity curves are becoming more prevalent, and organizations can quickly ascertain whether they’re reactive or optimized and governed in their approach to data management.
An example of an organization that is immature in its capture and management of data is one that does not use validation fields or uses free-form capture fields on the contact forms of its website, allowing anyone to enter whatever they like.
Bad data should not be taken lightly as it poses significant risks and business costs. Below are several examples:
- Wasted marketing budget: if your organization is sending physical mail to your customers and marketing leads, but those addresses are out of date or invalid, you’ll be wasting precious marketing dollars and time.
- Non-compliant data: regulations such as GDPR set standards (Article 5) for maintaining data quality in relation to the accuracy and integrity of data. If an organization's data is found to be non-compliant with data-driven regulations such as the EU General Data Protection Regulation (GDPR), it can be fined up to 20 million euros or 4% of annual turnover - whichever is higher!
- Hindered IT modernization projects: when data moves from source to target system, without correct mapping and data quality tools, old dirty data can wreak havoc on the new system.
- Poor customer experience: If contact information is of poor quality, you cannot provide customers with a tailored customer experience and serve them via their preferred channel.
- Fines: In regulated industries such as healthcare and banking, enterprises risk miscalculating key statistics for regulatory reports and getting fined.
- Unreliable analytics and machine learning: Inaccurate or invalid data will provide inaccurate analytics and unreliable machine learning models.
- Strategic operational mistakes: Building a warehouse at the wrong location, not catching fraud, producing the wrong alloy are all examples of using bad data for business decision-making.
And yes, you can put a number on data quality.
What are the benefits of better data quality?
There are so many benefits to improving the quality of your information that it is impossible to list them all out, but some of the common ones include:
- Increased return on investment for marketing activity thanks to improved email and postal deliverability and more reliable targeting
- Less time spent fixing dirty data. This will save you $1-10 per record.
- Increased ability to personalize your service or product offerings
- Improved, faster decision-making
- Compliance with new and existing regulations and the creation of a consumer-centric data-driven culture
And many more. Ultimately, your business is unique, and therefore how you benefit from improved DQ is also unique.
Watch the on-demand webinar: Giving Voice to the Business Benefits of Data Quality.
What are must-have features to ensure data quality?
If you'd like to learn about all the essential capabilities of data quality, you can read the full article here.
Data profiling
Before you do any data quality checks, it's important to examine your data at its source to better interpret and understand it. Data profiling does this faster and more efficiently than via SQL queries. It helps with defining what transformations are necessary for the data and what problems to track in the future.
Data cleansing and transformation
Very often you need to transform data to improve its quality. This includes:
- Format standardization
- Parsing data and breaking it down into separate attributes (e.g., full name into first name and last name)
- Data enrichment: bringing additional data from external sources
- Data deduplication: remove duplicates from data
- Data masking: sometimes you need to obfuscate data for security reasons
It's important to note that these processes need to happen automatically to any new data before it travels to other systems, reaches data analysts, and is used for business decision-making.
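As a sketch of what a few of these transformations look like in code, consider the following; the field names and the duplicate-matching rule are invented for the example.

```python
# Sketch of three of the transformations above: format standardization,
# parsing, and deduplication, applied to a toy record list.
records = [
    {"full_name": "Ada Lovelace", "phone": "(020) 7946-0123"},
    {"full_name": "ada lovelace", "phone": "020 7946 0123"},
    {"full_name": "Alan Turing",  "phone": "0161 496 0456"},
]

def standardize_phone(raw: str) -> str:
    """Keep digits only so differently formatted numbers compare equal."""
    return "".join(ch for ch in raw if ch.isdigit())

def parse_name(full: str):
    """Break a full name into separate (first, last) attributes."""
    first, _, last = full.strip().title().partition(" ")
    return first, last

deduped = {}
for rec in records:
    first, last = parse_name(rec["full_name"])
    phone = standardize_phone(rec["phone"])
    # Same normalized name and phone are treated as the same person here.
    deduped[(first, last, phone)] = {"first": first, "last": last, "phone": phone}

print(list(deduped.values()))  # the two Ada Lovelace rows collapse into one
```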
Better still is to establish processes that validate and "treat" data before it enters any IT system at all. This is called a data quality firewall. An example is an algorithm that checks data entered into a web form, such as an email address or birth date, against a required format and alerts the user to fix it. DQ firewalls can be embedded into complex enterprise applications as well.
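A minimal sketch of such a firewall check, with deliberately simple placeholder rules, might look like this:

```python
# A tiny "data quality firewall": reject a web-form submission before it
# reaches any downstream system. The rules are simple placeholders.
import re
from datetime import datetime

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_submission(email: str, birth_date: str) -> list:
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("email: does not look like a valid address")
    try:
        dob = datetime.strptime(birth_date, "%Y-%m-%d")
        if dob > datetime.now():
            errors.append("birth_date: cannot be in the future")
    except ValueError:
        errors.append("birth_date: not a valid YYYY-MM-DD date")
    return errors  # an empty list means the record may enter the system

print(validate_submission("someone@example.com", "1990-02-30"))
# -> ['birth_date: not a valid YYYY-MM-DD date']  (Feb 30 does not exist)
```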
Monitoring and reporting
Peter Drucker said it best: "If you can't measure it, you can't improve it." It's as valid for data quality as it is for business in general. Tracking changes and improvements to data over time is crucial and is usually done through data quality dashboards.
First, it shows you whether you are moving in the right direction, i.e., whether the data quality metrics that you have defined are improving or not. Second, monitoring data quality helps catch unexpected influxes of bad data and track it to its source. And third, it helps with tracking compliance with regulatory requirements and more.
If you want to know more, here are some frequently asked questions about data quality.
Can the Data Catalog and Data Quality work together?
Yes! Monitoring your data quality is much more efficient and accessible when it is integrated with your data catalog. More specifically, you can automate data quality workflows using the metadata from the data catalog. Here are other ways the data catalog and data quality benefit each other:
- Automating data quality monitoring
- Improving data discovery
- Streamlining on-demand DQ evaluation
- Simplifying data preparation
- Helping discover root causes of quality issues
What is a real-world example of bad data quality affecting analytics?
One of the most common places we find data quality issues is in census analysis. Many censuses are taken in paper and digital format, leading to quality discrepancies like unreadable inputs and duplicate entries for the same applicant. Most census data undergoes data profiling, standardization, enrichment, matching and consolidation, and relationship discovery before it's considered fit for analysis.
How to get started with data quality?
Data quality management can seem like a bit of a daunting task. In our opinion, the first steps of any data quality improvement are:
- Determine your current goals and scope (help with a specific business problem dependent on data or focus on a specific critical data element).
- Profile your data.
- Fix the most urgent issues as soon as possible
- Come up with metrics and methods for measuring its quality.
- Monitor data quality problems.
- Scale your program to other teams, departments, source systems, and critical data elements.
Following this process will ensure you find the relevant strategy for your organization and won’t embark on a task that is overwhelming or inadequate.
How important is data quality for successful AI implementations?
Data quality is essential for successful AI implementations. Spending too much time preparing data is one of the main reasons AI is so expensive and time-consuming. You can ensure more successful AI implementations if you:
- Profile your data
- Perform DQ evaluations
- Have regular DQ monitoring
Otherwise, you'll be building machine learning models on the wrong data sets, inevitably leading to errors or more work for your AI architects.
Where is Data Quality headed in the future?
Data quality is undoubtedly here to stay, but what kind of innovations can we expect? Well, you can expect the following improvements in the next few years:
- Further automation will enable greater adoption of new architectures like the data fabric and data mesh.
- The term is growing to encompass other aspects of data management like reference and master data management.
- Data being deliverable to any user at the company regardless of skillset.
- Data quality tools are becoming singular solutions instead of fragmented features that can cause conflict.
- More systems than people are consuming data.
- Much more!
If you’d like to learn more about the future of data quality and how we got here, you can find it all here.
Improve DQ with Ataccama
An important first step is to profile your data to understand just what state it is in. There are several data management tools that you can use to do this, many of which offer free versions.
Get started with data quality today: download the data profiler (online or desktop).
A coal industry group criticized a recent study showcasing the health benefits of forthcoming power plant emissions limits.
The analysis by researchers from Syracuse University and Harvard University showed that potential restrictions on carbon dioxide emissions from power plants would stave off 3,500 premature deaths annually. The report also indicated emissions limits could prevent more than 1,000 heart attacks and other hospitalizations attributed to air pollution-related illnesses.
The American Coalition for Clean Coal Electricity, however, said that the study ignored the economic consequences of those regulations.
“This is more than just an academic exercise to the tens of millions of Americans who depend on affordable, reliable electricity to power their homes and places of work every day,” said ACCCE spokeswoman Laura Sheehan.
The U.S. Environmental Protection Agency is expected to issue a final determination on curbing carbon dioxide emissions from power plants this summer. The rules will vary according to state, but the agency hopes to ultimately reduce those emissions by 30 percent between 2005 and 2030.
Republican critics in Congress, however, are gearing up to fight the proposal and several states are likely to challenge the final ruling in court. The attorneys general from Oklahoma and West Virginia told a Senate panel this week that the EPA proposal would lead to job losses, increased electricity prices and potential power outages.
“We know that taking coal power offline will lead to electricity disruptions including blackouts, brownouts and rationing,” added Sheehan. “These disruptions are not just nuisances; they jeopardize hospital and emergency care, city sanitation systems and regular commerce.”
The emissions plan study, published in the journal Nature Climate Change, examined three potential EPA rules and their likely impact on reducing smog and soot from power plants.
"The bottom line is, the more the standards promote cleaner fuels and energy efficiency, the greater the added health benefits,” said Syracuse engineering professor Charles Driscoll.
The option with the greatest health benefits was projected to prevent between 780 and 6,100 deaths nationwide each year. Pennsylvania, Ohio and Texas would see the largest numbers of avoided deaths, according to the research.
AI that started out simply tackling robotic process automation-type tasks is slowly evolving to take on more traditionally human and creative duties.
The concept of using artificial intelligence to help mitigate dull, repetitive or manpower-intensive jobs within government is nothing new. For example, the Post Office has been using AI to scan, route and track letters and packages for several years now. Other agencies are using similar so-called low-level AI for everything from document processing to managing their payroll. Sometimes called robotic process automation, tasking computers with those kinds of jobs makes a lot of sense, because it’s extremely easy for a computer to accomplish. A human could do the same repetitive type of job as well, but it would take much longer. And the human would likely get tired at some point and thus be more prone to errors and mistakes.
Slowly, however, AI is starting to be asked to perform higher level tasks that are normally undertaken by humans. I recently moderated a discussion with eight of the top government and industry AI experts. All of them predicted that AI would continue to evolve and eventually be able to take on more human roles in government and the world at large.
A great example of this evolution is occurring at the IRS, which recently tasked AI chatbots with answering calls from those seeking to catch up on late tax payments. When the program began at the IRS in March, the bots were only able to provide basic instructions and information to those calling for help. However, over time the AI-powered bots have been given more responsibility and are now able to talk more deeply with callers and help them work out a payment plan. The bots can then send transcripts and a record of the new plan out to the late taxpayer to help get them back on track.
The Patent and Trademark Office is following a similar path, initially fielding AI to help with the incredibly complex system of classification used by the USPTO. With that successful effort behind them, they are now looking at expanding the role of AI to further speed up the patent and trademark process, while keeping humans solidly in the loop as well to keep watch over everything.
The government clearly sees the advantages that AI can bring to agencies, especially as the technology evolves to take on more responsibilities. The National AI Initiative Act became law last year, which stipulates that advancing AI should be a “coordinated program across the entire federal government to accelerate AI research and application for the nation’s economic prosperity and national security.” The website at AI.gov does a good job of tracking all federal AI initiatives, and there are many programs listed, plus lots of recent news in this field.
As with any new technology, there are some concerns. There have been quite a few reports about how institutional bias crept into some AI programs. The White House has acknowledged this danger, and called for the establishment of an AI Bill of Rights through its Office of Science and Technology Policy. That effort is designed to democratize AI as much as possible, letting the public see the state of AI development and hopefully making them feel more comfortable as they begin to increasingly interact with AI in government and the private sector moving forward.
So how good is AI right now?
There are quite a few examples of AIs starting to perform some impressive tasks in government, but reading the proposed AI Bill of Rights got me thinking about how good the technology really is at this point in its development. Should we really be worried that AIs will become too human, too quickly?
Many years ago I played with expert systems on my very first computer, an IBM PC with a 10MB hard card—not even a hard drive—256K of RAM, and processing power less than a tenth of what my smartphone offers today. And yet, those expert systems, which were designed to mimic a human expert, were still impressive. So long as you were asking the system questions about its database of knowledge, it gave reasonable answers. But it was hardly intelligent. Ask an animal-based expert system a question about sports, and the best-case scenario will be that it simply admits that it does not know what you are talking about.
But true AIs are not supposed to be that restricted, especially if they are fed lots of clean data. For example, YouTube is currently filled with music videos where people fed an AI millions of pictures and descriptions, and then asked it to generate images in real time that matched the lyrics of popular songs, basically having it direct a music video. For the most part, those are pretty good. And then someone fed over 400,000 hours of horror movies into an AI, and then asked it to create its own film. The results of that experiment were less than impressive, although quite funny.
So, I decided to try out an AI for myself over the long Labor Day weekend. For this experiment, I eventually settled on the AI Dungeon program from developer Latitude. The program lets players create and interact with AI-generated worlds, creating and experiencing them at the same time. The baseline for the worlds that the AI draws from are thousands of fantasy novels and other related content. There was also input from live people as they created their own worlds.
The AI Dungeon client connects to a supercomputer to get all of its responses. So, there are no limitations on the input like those expert systems on my IBM PC. About the only delay is the time it takes to send a response and get one back from the AI. Originally, the program was available on smartphones, with users limited by how much they could query the AI each session. However, AI Dungeon was recently released on the Steam platform for the PC, with users there getting unlimited access to the AI for a set fee. That is the version I used for this test.
Unlike those old expert systems, this AI is supposed to be able to respond and fill out an interactive story set in almost any world. The following is an excerpt from one of my many adventures. It’s clear that the AI tested could never be mistaken for a real person—so it won't be passing the Turing Test anytime soon—but for the most part, it was able to maintain a fairly convincing world, at least with a lot of help from me. Read on to see the evolution of AI, as well as some of the areas where it struggles.
The Case of the Sabotaged Sci-fi Traffic Light, by Dungeon AI and Me
In this story I play a cop named Officer Stark Silver at some point in the far future. The Dungeon AI platform lets users enter three kinds of different text inputs called Say, Do and Story. Say and Do are used when you want your character to say or do something, while Story is used to change or modify the overall game world you are interacting with. Story can also be used to force the AI to start down a specific path. There are also several fine tuning controls available. For simplicity’s sake, my responses and queries in the transcript will be in normal text without detailing their type, and the AI’s responses to me will be italicized. So, let’s see how well the AI did at presenting a gritty crime story in real time.
You are Officer Stark Silver, a cop living in the futuristic city of Zali. You have a laser pistol and a stunner. You are part of a special unit of police who track down government criticizers. You are at the station and are told that the criticizers are active downtown this morning and have just vandalized a traffic light.
I call my partner, Officer Joe Friday, and see if he wants to ride with me downtown to investigate.
Your partner Joe Friday says hello and plants a big kiss on your face. He asks if you are ready to go.
I ask Joe why in the hell he just kissed me.
Joe Friday laughs and says that we are wasting time.
I say okay and walk to the garage to get to our police vehicle. Once we are both inside, we drive downtown to the vandalized traffic light.
The ride downtown takes only a few minutes. Police are on the scene, blocking the roads all around.
I get out of the car and examine the vandalized traffic light.
You open the box and see a flashing number blink down from 10 to zero. The traffic light goes out. It has been sabotaged!
I check for any forensic evidence or clues about the sabotaged traffic light.
You find a note saying that the next light will be sabotaged in just the same way.
I take the note and hand it to Officer Joe Friday. I ask him to take it to the police lab to do handwriting and DNA analysis while I look around the scene for more clues.
You and Joe Friday drive back to police headquarters. You turn in the note to the lab.
I didn’t want to drive back to the office, but since I am here, I will have lunch with Joe and wait for the results.
You eat until you are full.
Ask the lab about the results.
The results are still processing.
Wait a bit and ask the lab about the results again.
The results are still processing.
Wait until the results are processed. And THEN ask the lab if they can identify the government critics by their DNA or handwriting.
There are several critics identified.
Check the police database to see if any of the identified suspects have a last known address.
All of the government critics live downtown in a small five block radius.
Then why have we not rounded them up already if we know who they are and where they live?
They have never been caught.
I sigh at the police incompetence, but put on street clothes and decide to go undercover. I will have Joe Friday drive me near the area where the critics live and drop me off. I will have him wait in the car with his phone while I try to locate the suspects. I begin looking around the neighborhood.
The neighborhood is nice, with many small houses and apartment buildings.
Yes, that’s great, but I look around for people who might be the suspects from this morning.
You see a man walking very quickly with a package under his arm. He is standing by a mailbox.
I casually approach the man and ask him how he is doing.
The man says he is fine, and that he is waiting for a friend. But he looks nervous.
I tell him that I just came from downtown, and saw the traffic light that was vandalized. I tell him that I hate the government too, spitting on the ground for emphasis, and I am looking to do something about it. I tell him that I was told I could join an anti-government group somewhere in this neighborhood. I ask if he knows any of the government critics.
The man says that he hates the government too, but he does not know any of the critics. He says his sister used to have a friend who was one, but not anymore.
I tell him that is too bad, because I work for the city and I could really help a group like that.
He says that I convinced him. He will take me to meet his five friends, who he was waiting for. They are critics. That is why he was so nervous. He leads you up some stairs to a large apartment. The apartment is filled with technology, probably bought with their illegal profits. A woman welcomes you and asks you to sit down on a new couch.
I sit down and tell them that I am glad that I finally found them. I ask them to confirm that they are the group that vandalized the traffic light, because I only want to work with the top government critics, not some terrorist wannabes.
The man says that they did not want to break the light. It was done as a trap!
I ask the man what kind of trap, while slowly reaching under my coat for my pistol. I keep an eye on them.
The trap was for you, sly Starky! We wanted you to join us! So we vandalized the light. We knew you would come.
Why would you want me to come here? You must know that I am a police officer.
We need a leader, someone who can get things done. We want freedom from the government and a new start. You can give us that. We have a plan.
That’s really great. I am so glad that you found me and that your brilliant traffic light trap worked. Can you tell me about your new plan?
Yes! We are going to kidnap the president of the United States.
That is quite a step up from vandalizing a traffic light.
We have a good plan. We will show you.
I tell them that I am very interested in seeing the new plan, but need to use the bathroom first. I ask if I can.
Yes. Please, take off your pants and underwear! Nobody will bother you for a long while.
I tell them that I will not be that long. Once in the bathroom, I close the door and call Joe Friday. I tell him that I have located the suspects, and what apartment I am in. I ask him to bring the police to arrest the critics.
Joe Friday asks you why you need the police.
I remind him that we are cops and I need the police because I am undercover in an apartment filled with a bunch of terrorists! Why do you think? I tell him to get SWAT and get over here right away.
The police arrive quickly.
I hand over the plan to kidnap the president to the other officers and have them arrest all five of the government critics.
As they are being led away, one of the critics, a redheaded woman, yells that you are traitor and says that all they wanted was a fresh start.
I tell them that they can have a fresh start once they get to prison.
All of the critics have been arrested.
I tell my partner, Joe Friday, that we finally caught those terrible government critics, and that I am proud of the job we did protecting the traffic lights of our great city, and the president.
Joe Friday smiles and kisses you on the cheek.
Joe, I really wish you would stop doing that!
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys | <urn:uuid:30ec3ada-3925-475e-a99d-7d9d53c1ee96> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2022/09/artificial-intelligence-continues-evolve-government-and-elsewhere/376811/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00305.warc.gz | en | 0.976585 | 2,977 | 2.609375 | 3 |
What Is A Bonded Internet Connection?
A bonded internet connection is two or more internet connections joined together through a single router. By joining multiple connections together, we can combine their speeds into one super fast connection.
For example, two separate 1Gbps download and upload internet connections could be bonded into a single 2Gbps download and upload connection.
One of the most common reasons businesses use bonded internet is to get a fast connection to a location that doesn’t have the infrastructure in place. Most businesses that are located in a remote area will find there’s very limited access to fast internet connections, and so will have to choose between bonding two connections together or using microwave internet.
Bonded internet is also used by businesses who rely heavily on the internet to function, as it doesn’t only give you faster speeds – it also gives you less chance of downtime (employees sitting around unable to work while you argue with the internet company).
What connections can be bonded together?
Almost all modern internet connections can be bonded together, including:
- Any Ethernet service
- Fixed Wireless Link (Microwave Internet)
- And more.
Companies who require ultra-fast, always-on internet will usually combine a fibre leased line with a microwave internet connection to achieve speeds of up to 20Gbps.
The major benefit of bonding a leased line with a microwave internet connection is the resilience your business gets from using two completely separate connections.
Leased lines are prone to damage during floods, due to the fact that they are ground-based. They're also vulnerable to fires and often damaged during construction works. And although that doesn't happen every day, it takes a long time to fix when it does.
Bonding two leased lines together will give you faster internet speeds, but you risk them both going down during a natural disaster.
A microwave connection is as fast as a fibre leased line but isn’t at risk during those scenarios, as the connection is delivered over the air – meaning there’s less to go wrong.
By bonding a leased line with microwave internet, you get the best of both worlds – faster internet and less chance of it ever going down.
What happens if a connection goes down?
If one of your connections were to go down, the router would simply use your other working connections and not put traffic through the broken one until it's been fixed. This ensures no data will be lost, but you'll get slower speeds while the internet provider gets it working again.
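As a rough sketch of the failover logic just described, here is a small Python monitor of my own (the link names and gateway addresses are placeholders, and the ping flags are Unix-style); a real bonded router does this continuously in firmware:

```python
import subprocess
import time

# Placeholder gateway addresses for two bonded links; substitute your own.
LINKS = {"fibre leased line": "192.0.2.1", "microwave link": "198.51.100.1"}

def link_up(gateway: str) -> bool:
    """Send one ping with a 2-second timeout and report success or failure."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", gateway],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

while True:
    down = [name for name, gateway in LINKS.items() if not link_up(gateway)]
    if down:
        print(f"WARNING: {', '.join(down)} down; traffic shifts to the remaining links")
    time.sleep(30)
```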
If you use a fully managed service like MultiConnect+, you’ll have 24/7 monitoring of your bonded connection – ensuring any faults or potential problems get fixed before it ever causes issues for your business.
Is bonded internet the same as load balancing?
Bonding internet connections is similar to load balancing, but has a few important differences.
Bonding joins 2 connections into 1, whereas load balancing uses 1 of the 2 connections and keeps the other as a backup.
You won’t be able to increase your internet’s speed with load balancing, but it can be used to prioritise internet access to certain people or locations if your connection gets too busy.
Do bonded connections use different IP addresses?
Bonded connections can be set up to present a single IP address, no matter which underlying connection is in use. This avoids the downtime that would otherwise occur while a dynamic IP refreshes, and a fixed address is also needed for use with CCTV systems.
A fixed IP isn't always how businesses want their connections set up, so let the company bonding your connections know whether it matters to your business before they get started.
Need bonded internet for your business?
If your business needs a bonded internet connection, get in touch with our friendly team today. Simply click on the button below, enter your details and we'll give you a call back shortly.
Variation of 19-Year-Old Cryptographic Attack Affects Facebook, PayPal [source: bleepingcomputer]
Three security researchers have discovered a variation to an old cryptographic attack that can be exploited to obtain the private encryption key necessary to decrypt sensitive HTTPS traffic under certain conditions.
Named ROBOT, which stands for Return Of Bleichenbacher’s Oracle Threat, this new attack is a variation of the Bleichenbacher attack on the RSA algorithm discovered almost two decades ago.
The original Bleichenbacher attack
Back in 1998, Daniel Bleichenbacher of Bell Laboratories discovered a bug in how TLS servers operate when server owners choose to encrypt server-client key exchanges with the RSA algorithm.
By default, before a client (browser) and a server start communicating via HTTPS, the client will choose a random session key that it will encrypt with the server's publicly-advertised key. This encrypted session key is sent to the server, which uses its private key to decrypt the message and save a copy of the session key that it will later use to secure its communications with that client.
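To make the "oracle" concrete, here is a minimal Python sketch of my own (not code from the researchers) of the PKCS#1 v1.5 conformance check a server performs after RSA-decrypting that message. A server that lets an attacker learn this single true/false bit for chosen ciphertexts, through distinct error messages, TLS alerts, or timing, is exactly the oracle Bleichenbacher's method needs:

```python
def looks_pkcs1_v15_conformant(plaintext: bytes) -> bool:
    """Check the PKCS#1 v1.5 encryption format: 0x00 0x02 <pad bytes> 0x00 <message>."""
    if len(plaintext) < 11 or plaintext[0] != 0x00 or plaintext[1] != 0x02:
        return False
    try:
        separator = plaintext.index(0x00, 2)  # first zero byte after the padding
    except ValueError:
        return False                          # no separator: malformed block
    return separator >= 10                    # at least 8 nonzero padding bytes

# Answering differently for conformant and non-conformant blocks leaks one bit
# per query: enough, over many thousands of queries, to decrypt a recorded
# session or forge a signature without ever learning the private key.
```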
For more, click here. | <urn:uuid:6fb3acfd-8dcb-44bc-9732-baed4b4ae7db> | CC-MAIN-2022-40 | https://www.cirt.gov.bd/variation-of-19-year-old-cryptographic-attack-affects-facebook-paypalsource-bleepingcomputer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00305.warc.gz | en | 0.83856 | 225 | 3.03125 | 3 |
A recent report released by GreatHorn reveals that security leaders have witnessed a 25 percent rise in phishing attacks that break through security defenses. In spite of multiple security solutions, approximately half of participants had phishing emails arriving in their inboxes. The effect has been seen in both the public and private sectors. For example, one of the largest successful phishing attacks of 2019 was that of the Oregon DHS, a breach that compromised over half a million patients and 2.5 million emails.
The spread of phishing attacks is a result of various, interconnected causes, with each one influencing the other in a disastrous domino effect. An oblivious employee who has not been trained well in information security can easily fall victim to any of the following:
- A malicious link embedded in an email that, once clicked, requests confidential information
- A Trojan installed through an insecure email or advertisement, giving the hacker a way into the company's systems
- Sensitive information sent to a spoofed email address that appears to be a reliable source
- An intruder pretending to be a trustworthy person or business partner and requesting information over the phone
In order to protect your organization against such attacks, employees need to be empowered with defensive technologies and knowledge. The following is a compilation of tips by a number of security experts on how businesses can defend themselves against phishing attacks.
Implement spam and web filters
Deploying spam and web filters to automatically block or delete malicious websites and viruses strengthens your first line of defense. Heuristics that judge whether an email is malicious can help, but more sophisticated malicious emails may still slip through.
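As a rough illustration of what such heuristics look like, here is a toy Python scorer of my own; the phrases, weights, and patterns are invented for the example, and real filters combine hundreds of weighted signals:

```python
import re

SUSPICIOUS_PHRASES = ["verify your account", "urgent action required", "password expired"]

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a crude suspicion score; higher means more likely phishing."""
    score = 0
    text = f"{subject} {body}".lower()
    score += sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Display name claims one brand but the address uses a different domain.
    match = re.match(r'"?([^"<]+)"?\s*<[^@]+@([^>]+)>', sender)
    if match and match.group(1).split()[-1].lower() not in match.group(2).lower():
        score += 3
    # Links that point at a raw IP address instead of a named host.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 3
    return score

print(phishing_score('"PayPal Support" <alert@evil.example>',
                     "Urgent action required", "Click http://203.0.113.9/login"))
```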
Invest in anti-phishing and training programs
This can include conducting mock phishing scenarios in which employees can practice and learn how to determine whether an email or link is malicious. Employees should understand the types of potential attacks, what such attacks can do, and how to deal with them. While spam and web filters are able to detect phishing attacks, ensuring that employees are well aware of phishing serves as a backup plan, like a two-step verification process.
Establish a set of security rules
One of the most common reasons for phishing attacks is employee negligence. By enforcing preventative measures, you can minimize this cause. Some possible rules can include:
- Only browse on secure sites
- Do not open emails and attachments from unknown senders
- Do not send passwords via email or any other form of messaging
- Encrypt all sensitive information before sending
- If you receive an odd email from a reliable source, call them to confirm that they sent the email
Privacy is key
This is more than just protecting personal or sensitive information. It is vital that your organization is aware of every piece of information that it publishes or reveals to the world. This is because hackers now leverage social engineering to research organizations, reading about them, their employees, what role they play and who they may be contacting. A hacker may try to spoof an email address to make it look like an important member of the organization to prompt employees to open their emails without thinking twice. Moreover, employees should protect themselves by ensuring that the answers to their security questions are not available on their social media profiles. Questions like "What school did you go to?" and "What is your mother's maiden name?" should not be considered security questions if their answers are available online. | <urn:uuid:e9c2b3a2-692c-4de0-b77d-457cd764eabf> | CC-MAIN-2022-40 | https://www.givainc.com/blog/index.cfm/2019/8/8/4-crucial-lines-of-defense-against-phishing-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00305.warc.gz | en | 0.951674 | 686 | 2.609375 | 3 |
A couple of weeks ago, I presented at the Arizona CIO/CTO Summit 2018 which was hosted by the Center for Digital Education in association with the Arizona Technology in Education Association (AZTEA).
Because technology is rapidly changing the way we deliver curriculum and how students consume it, my presentation—titled “Where Have All the Teachers Gone?”—addressed the future of machine learning and artificial intelligence in the classroom and whether traditional teaching can co-exist.
My goal here is to summarize my presentation—and the findings of a survey we did—to help educators plan for and incorporate technology into the classroom in a proactive and ethical way instead of simply reacting to it.
Artificial intelligence in the classroom…really?
In one report, experts expect that artificial intelligence in U.S. education will grow by 47.5% from 2017-2021. In terms of dollars, some $400 million was spent on AI for the US education market (K-12, higher education and corporate training) in 2017, but by 2024 that number will explode to $3.4 billion, according to another report. That’s significant.
To prepare for my presentation, I surveyed a sample of teachers about the use of artificial intelligence in the classroom. While most consider themselves knowledgeable about technology and use it in the classroom, none of them mentioned AI. The most common technologies used, according to these teachers, were more traditional technologies that have been “re-engineered” over the years: videos, online applications, computer games, tablets, and smart whiteboards.
Still, most teachers know what artificial intelligence is and believe that it will play a role in education. And even though many are excited about the potential of AI, only a few believe that AI will be very beneficial. Here’s what some of them said:
- “As an educator, I need to be able to evolve and grow my teaching strategies.”
- “Any student interactions with AI will be emotionally impoverished and, if we’re being honest, we don’t really know how this will impact learning.”
- “Can’t replace me.”
- “I would be excited but based on the history of education fumbling the introduction of new technology and pedagogy, I am fearful that we will overwhelmingly misuse AI in schools.”
- “I think AI will help many students, but we will still need teachers and experiences with people.”
Typical use cases for AI in education today
Whether or not teachers are excited, afraid or indifferent to the idea of AI in the classroom, AI is already here. In fact, AI is already in use at many schools, though many teachers may not be aware of it. Here are some common artificial intelligence use cases:
- Fill gaps in learning – AI can help teachers develop personalized learning maps for students and analyze their progress.
- Support students 24/7 – AI can help students grasp difficult concepts or subjects by providing tutoring outside of the classroom.
- Fill gaps in teaching – AI can recognize that students are having trouble with a particular concept and identify opportunities for teachers to teach them differently.
- Teach students differently – AI can help students with learning and other disabilities to overcome challenges with learning.
- Reduce admin time – AI can give teachers more classroom time by taking over time-consuming grading and other administrative tasks.
So AI will be good at many things: helping students grasp difficult concepts and giving teachers more time to attend to students’ needs. But AI may never be good enough to teach children social and emotional intelligence, so we clearly need teachers in the classroom. To rely on one without the other may be cheating kids of a quality educational experience—particularly in a world that requires them to be tech-savvy.
Logicalis and artificial intelligence in education
Instead of reacting to technology, Logicalis can help you plan for and incorporate it in a proactive way that aids students and teachers in the learning process. Our consultative approach, delivered through workshops and assessments, will guide you to solutions that can have real educational impact.
We offer end-to-end cloud, hybrid, or on-premise platforms that incorporate deep learning and machine learning to assist teachers with student support and routine administrative tasks that really allow them to focus on the learning experience.
Mike Trojecki is the Vice President of Internet of Things (IoT) and Analytics at Logicalis, responsible for developing the company’s strategy, partnerships and execution plan around digital technologies. | <urn:uuid:22d1f382-7db7-431b-9d4e-286fb84f7680> | CC-MAIN-2022-40 | https://logicalisinsights.com/2018/10/19/how-teachers-can-co-exist-with-robots-artificial-intelligence-ai-in-the-classroom/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00305.warc.gz | en | 0.954328 | 939 | 2.6875 | 3 |
Last week's Google Docs phishing scam went viral on social media and gained even more attention by fooling even tech-savvy users into clicking its links.
The scam was very sophisticated – the email invited the recipient to access a Google Document from a person in their contact list or someone they might know. Once they clicked on the link, it directed them to a Google.com URL that looked like a sign-in page but actually asked for permission to access their Google account. If they agreed and clicked Allow, the attack extended to all of their email contacts and repeated the cycle to make a bigger impact.
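Understanding what made this work points to the defense: read what a consent screen is actually requesting before clicking Allow. Below is a small Python sketch of my own that parses an OAuth authorization URL and flags over-broad scopes; the two scope URIs are real Google scopes, but treating a fixed list as "risky" is a simplification for the example:

```python
from urllib.parse import urlparse, parse_qs

BROAD_SCOPES = {
    "https://mail.google.com/",                  # full mailbox access
    "https://www.googleapis.com/auth/contacts",  # read/write contacts
}

def risky_consent(auth_url: str) -> list:
    """Return any over-broad scopes a consent URL is requesting."""
    params = parse_qs(urlparse(auth_url).query)
    requested = set(" ".join(params.get("scope", [])).split())
    return sorted(requested & BROAD_SCOPES)

url = ("https://accounts.google.com/o/oauth2/auth?client_id=attacker"
       "&scope=https://mail.google.com/+https://www.googleapis.com/auth/contacts")
print(risky_consent(url))  # both broad scopes are flagged
```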
Luckily, Google was brilliantly efficient in identifying the scam and preventing it from spreading in less than an hour. Even though Google said that only 0.1 percent of their customers might have been affected, that’s roughly 1 million users in less than an hour.
What really got my attention was that social media users were focused on the details of the scam and on minimizing the risk for those affected by this Google Docs scam – but not on how to avoid similar scams in the future.
Let’s quickly review the definitions of phishing and spear phishing:
Phishing is the practice of sending out emails that purport to be from a well-known source, such as a major bank or utility provider. Spear phishing is a more targeted version of phishing. Emails will address you by name and may appear to come from someone senior within your organization.
In most cases of phishing or spear phishing, an email asks you to provide your credit card and PIN, social security number, and passwords in order to verify you. However, banks and service providers will never ask for such details – at least, they’re not supposed to.
How do cyber attacks work?
The original post illustrates how targeted hacking attacks work with a step-by-step infographic.
Are people cybersecurity’s weakest link?
A 2016 investigation of user behavior from Friedrich-Alexander University (FAU) researchers reported that almost every other person, or up to 56 percent of email recipients and 40 percent of Facebook users, would click on a link from an unknown sender. We’ve heard many times that “people are cybersecurity’s weakest link,” and it all starts from human curiosity or simple negligence.
What would the results be if you conducted a similar impact analysis or assessment within your organization? Do you have a way to automate privacy and security by design? Do you conduct periodic user awareness programs in order to avoid or limit incidents such as phishing?
As a former Chief Information Security Officer, I can definitely say that user awareness programs and periodic impact assessments help organizations to see the gaps or the missing links between people, processes, and technology.
Practical Tips for Protecting Yourself
No strategy is bulletproof, but here are a few basic tips for you and your users to follow to avoid becoming a victim of one of these nasty attacks:
- At all costs, avoid clicking on links within emails that are from unverified sources, request that you log in, provide sensitive information, or request permissions and access to applications and other sensitive data.
- Always open a new tab or browser, visit the online service, and log in manually
- If you receive an email from a trusted source like a family member, reach out to them directly and verify if they sent it.
- It’s not only important to protect yourself. This could be an important step in making someone else aware that they’ve been compromised.
- Use multi-factor authentication (MFA) wherever possible. Even if your username and password are compromised, that additional verification layer may save you the headache and give you enough time to recover and reset your account. Most services now offer some form of MFA, whether it's a security code sent to your phone, an app running on your device, or a token generator; a minimal sketch of how such generators compute their codes follows this list.
- Use strong and complicated answers to security questions that are sometimes essential to recovering your account. For questions like “What is the name of your pet?” be very careful that information like this is not publicly accessible on private or professional social networks. It can be very easy to get educational information via LinkedIn or learn about your pets and loved ones via Facebook and Instagram.
- One recommendation is to use an obscure and complex password or phrase completely unrelated to the security question. Your dog or cat won’t take it personally that your favorite pet is “1Fjiowprio34$.”
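For the curious, this is roughly how app-based token generators compute their six-digit codes, following the TOTP scheme standardized in RFC 6238. The sketch is illustrative only; use a maintained authenticator app, not your own code, for real accounts:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; matches what an authenticator app shows
```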
Simplify User Training with Privacy Impact Assessments
The International Association of Privacy Professionals (IAPP) distributes the AvePoint Privacy Impact Assessment (APIA), a free solution that helps organizations understand and automate the process of evaluating, assessing, and reporting on the privacy implications of their enterprise IT systems. With APIA, organizations can conduct Privacy Impact Assessments (PIAs) and introduce privacy and security by design, but also utilize the built-in workflow engine and form-based survey system with configurable questions to simplify training or deploy user awareness programs.
Organizations can benefit from APIA’s flexibility to extend beyond just automating PIAs. It can also help Information Security Officers or Human Resources Managers to get even more value by allowing them to implement survey-based questionnaires to avoid incidents or a phishing scam. In the case of the Google Doc phishing scam, it took Google less than an hour to quarantine, but not every organization has Google’s engineers or expertise – or even the technology required to react and prevent future incidents in less than an hour.
If your organization has a similar incident, what would be the time to react and prevent this type of an incident? Do your employees know how to spot and prevent a phishing scam? Do you invest in training your employees to prevent your business or customer data falling in the wrong hands? Download APIA today to get started. | <urn:uuid:6ba08417-5941-463b-b65e-2970de50b473> | CC-MAIN-2022-40 | https://www.avepoint.com/blog/protect/avoid-latest-google-docs-phishing-scam | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00505.warc.gz | en | 0.942348 | 1,223 | 2.734375 | 3 |
Ransomware is malware that infects computer systems through various means like phishing and visiting malicious websites. A ransomware attack encrypts the files in the victims’ computer and demands a ransom to get the data back. There are two types of ransomware: Crypto ransomware, which encrypts files, and Locker ransomware, which locks the victims’ device.
Ransomware attacks are one of the major cybersecurity threats worldwide, and they are increasing yearly. In 2021, the number of ransomware incidents increased by 62%. Furthermore, ransomware-as-a-service, or RaaS, has doubled since the Covid-19 pandemic. Ransomware attacks can incur substantial financial losses and damage a company's reputation.
Therefore, protection against ransomware is critical for any organization exposed to the internet. A ransomware solution can provide you with many benefits. For instance, it can identify ransomware attacks in real-time, prevent any potential attacks, and clean out the existing ransomware. Nonetheless, there are a few things to consider if you are looking for a proper ransomware protection solution, such as its scanning options, features, pricing, and detection rate. This article reviews the top ten ransomware solutions.
3 Examples of Recent Ransomware Attacks
So far, the world has seen a large number of ransomware attacks. The following are the three major high-profile ransomware attacks and their consequences.
1. The Netwalker attack on The University of California at San Francisco
The University of California at San Francisco (UCSF) was attacked by the ransomware group Netwalker on June 3, 2020. They encrypted its COVID-19 research-related files. While the initial ransom demand was $3 million, the university negotiated it to $1.14 million.
2. Colonial Pipeline
The Colonial Pipeline ransomware attack in the US had to shut down its largest pipelines, causing gas shortages and widespread panic. The attack happened due to the lack of multi-factor authentication (MFA) in its legacy VPN profile. Colonial Pipeline had to pay a $5 million ransom to continue its operations and reduce the gas shortage.
3. MediaMarkt

MediaMarkt, the electronics retailer in Europe, was attacked by Hive ransomware in early November 2021. The attack encrypted more than 3,100 servers and workstations, causing the company to shut down its IT systems to prevent the further spread of the ransomware. MediaMarkt could not accept credit cards or print receipts through its cash registers as a result of this attack.
10 Ransomware Protection Solutions for Enterprises
There are many ransomware protection solutions on the market. This section reviews the top ten ransomware protection solutions with their features, pricing, and downsides.
1. Sentinel One
SentinelOne Singularity Platform is an AI-based all-in-one security solution that protects against ransomware attacks, endpoints, IoT devices, cloud workloads, containers, etc. Its AI engine can detect ransomware and malicious behavior in real-time. Moreover, the embedded AI Threat Intelligence and Threat Indicators help remediate security attacks. The solution has a lightweight installation and an intuitive interface.
Sentinel One is best suited if you have a wide range of devices, endpoints, and networks to protect against ransomware. However, one drawback of this product is its high false-positive rate, which can mark even legitimate applications as suspicious.
SentinelOne offers three packages for the Singularity Platform.
- Singularity Core: $6 per agent/month
- Singularity Control: $8 per agent/month
- Singularity Complete: $12 per agent/month
2. Cynet

Packed with many features, Cynet offers comprehensive multi-layered protection against ransomware. Its layered protection components provide multiple, coordinated detections across endpoints, networks, and users. Additionally, it can identify ransomware-related memory strings, protecting against even unknown ransomware. Their cybersecurity research and expert team provide 24/7 support, monitor the environment, and identify potential ransomware threats.
A few issues with the solution have been reported so far, such as problems with memory utilization, no support for mobile platforms, and issues with reporting. Overall, though, the product suits anyone who needs multilayered ransomware protection.
3. Acronis Ransomware Protection
Acronis provides advanced ransomware protection. Its robust self-defense mechanism prevents criminals from disrupting applications and backup file content. It saves the master boot record of Windows computers by actively monitoring it and preventing any illegitimate changes by malware. Furthermore, it can detect new threats based on existing threats and adjusts the false-positive rates. Acronis Active Protection allows you to specify which programs can perform specific tasks, preventing unauthorized actions. However, Acronis ransomware protection only supports Windows and cannot block all malware.
4. CybeReady

The most effective way to prevent ransomware attacks is to be aware of how ransomware can be injected into your systems and how to prevent it. CybeReady is not a typical solution for ransomware protection. Yet, it helps companies manage their defenses by training staff and making them aware of cyber security threats. This fully-automated cybersecurity awareness solution not only makes the training easier but is also an engaging experience for employees. You can use it to simulate phishing, one of the most common ways of spreading ransomware. The training sessions can be personalized according to role, education, and performance.
The CybeReady security training platform is based on data science. It guarantees to change employee behavior, decrease high-risk groups, and increase employee resilience scores within a year.
5. Cybereason

The Cybereason Defense Platform automatically detects malware, including ransomware, across all endpoints within the network. When it detects malware, Cybereason can immediately prevent the lateral spread of the attack within the network. The Cybereason Incident Responders have full visibility of the devices that become compromised. Deployment has minimal impact on the organization.
Nonetheless, this solution has a few drawbacks, such as the lack of reporting features and proper technical support. Cybereason comes in four packages: professional, business, enterprise, and ultimate. All of them include anti-ransomware.
6. Trend Micro

Trend Micro is a robust AI-based antivirus solution that identifies and blocks ransomware attacks. It provides fast malware scans, and its Folder Shield protects files by preventing unauthorized programs from changing them. Trend Micro offers four layers of protection against ransomware: endpoint, email and web, network, and workload. It provides advanced email security to prevent phishing emails and combines features like machine learning, sandboxing, and threat intelligence for malware prevention, including ransomware. Its disadvantages include the lack of a VPN, the possibility of overloading the system when scanning, and being less configurable.
Trend Micro Antivirus offers three packages.
- Maximum Security: $49 per year
- Internet Security: $39.50 per year
- Antivirus+Security: $19.95 per year
7. Check Point Zone Alarm
ZoneAlarm Anti-Ransomware can detect ransomware attacks that other solutions can't. It detects ransomware, stops it, and immediately restores the encrypted data. The solution is the result of years of research and development, and it provides complete protection against ransomware attacks. ZoneAlarm also offers a powerful anti-phishing feature in its personal and business anti-ransomware packages, built on technologies like machine learning and heuristics.
Despite its intuitive features, there are a few reported drawbacks, such as the inability to detect some simulated ransomware and its high cost. The cost of ZoneAlarm Anti-Ransomware varies with the number of devices you need to protect; it costs $25.95 per year to protect one PC.
8. Datto

Datto is a ransomware solution with a different mix of features and components than the other solutions here. Datto remote monitoring & management (RMM) includes native ransomware detection that prevents crypto-ransomware by actively monitoring for it. Datto RMM combined with Autotask PSA enables high-priority threats to be escalated first. Datto SIRIS can protect all infrastructure, including cloud infrastructure, and provides rapid recovery. It can also find ransomware within data backups. Datto SaaS helps you protect cloud-based applications from ransomware, also offering speedy data recovery. However, configuring these modules can sometimes be tricky, and customization can be difficult.
9. Veritas

Veritas is another multi-layered ransomware protection solution that integrates ransomware detection, protection, backup, and recovery options. Its automated and on-demand malware scanner can detect and block malware across all your devices. It can also detect anomalies across the system, automatically flagging unusual data and user activity such as unusual write activity, file extensions, and access patterns. Veritas also provides immutable and indelible storage options to improve the resiliency of your data across your infrastructure.
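To give a feel for what "unusual write activity and file extensions" means in practice, here is a toy polling sketch of my own in Python. No vendor's engine works this simply: the extension list and alert threshold are invented, and real products hook the filesystem rather than rescanning it:

```python
import os
import time

SUSPICIOUS_EXTS = {".locked", ".encrypted", ".crypt"}  # illustrative only

def snapshot(root: str) -> set:
    """Collect the full set of file paths under a directory tree."""
    return {os.path.join(d, f) for d, _, files in os.walk(root) for f in files}

def watch(root: str, interval: int = 5, threshold: int = 20) -> None:
    """Alert when many files with ransomware-style extensions appear at once."""
    before = snapshot(root)
    while True:
        time.sleep(interval)
        after = snapshot(root)
        new_files = after - before
        hits = [p for p in new_files
                if os.path.splitext(p)[1].lower() in SUSPICIOUS_EXTS]
        if len(hits) >= threshold:
            print(f"ALERT: {len(hits)} suspicious new files; possible encryption in progress")
        before = after
```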
10. Malwarebytes

Malwarebytes is a popular ransomware detection and prevention solution with intelligent features. It takes an automated approach to improving endpoint security by automatically remediating malware attacks, saving valuable resources and time. Its EDR solution is cloud-based, scales well, and offers consolidated threat incident response across all your endpoints. However, some users have reported that it can be resource-intensive.
Prevention is the best form of protection against ransomware cyberattacks
Ransomware is one of the most prevalent cybersecurity threats worldwide, leading to severe financial losses for organizations. Research reveals that ransomware keeps evolving, with new types emerging every year. Therefore, organizations that consider cybersecurity a top priority must install a proper ransomware protection solution to safeguard their systems and data. This article has described the ten most prominent ransomware solutions on the market. However, the most effective defense against ransomware is providing the necessary training for your company's staff, because many ransomware attacks succeed through human errors caused by a lack of awareness. That is why companies should invest in ransomware awareness solutions like CybeReady. CybeReady helps cultivate an excellent cybersecurity culture in your organization and significantly reduces the risk of ransomware attacks.
Do you want to protect your system against ransomware attacks? Then head over to CybeReady and request a demo on how you can achieve it using the CybeReady awareness program. | <urn:uuid:84516adf-f5ad-49cd-96d1-a692d8bf1b57> | CC-MAIN-2022-40 | https://cybeready.com/10-ransomware-protection-solutions-for-enterprises | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00505.warc.gz | en | 0.926399 | 2,141 | 2.609375 | 3 |
Privacy v/s the Economy - There Can Be Only One
Updated: Apr 28, 2020
There are growing privacy concerns in the United States, Europe, and other regions around the world as it relates to the personal privacy safeguards in the current digital ecosystem of search, social media, and cloud providers. This concern has manifested itself in multiple recent high profile events to include:
1) The Facebook privacy concern with Cambridge Analytica where over 50 million profiles were provided to a third party to allow them to predict and influence an election
2) The General Data Protection Regulation (GDPR) taking effect in Europe in May 2018, which seeks to fundamentally change the way data is handled across nearly every sector, with major fines for non-compliance (e.g., British Airways was fined $230M and Marriott $124M).
3) GDPR's influence has spread with California and Japan taking up similar legislation around privacy.
4) The Federal Trade Commission (FTC) recently imposed a record $5 billion fine against Facebook for violating consumer privacy.
The net effect of this is we are seeing a backlash against large scale companies for failure to maintain consumer privacy and a resulting need for regulators to pass new laws and levy new fines to incentivize more acceptable behavior. On the face of it, this makes perfect sense. In general, people value their privacy and many are willing to make reasonable trades to protect it. For example:
1) Users would like to understand what information is being tracked and "opt out" if they so choose
2) They may be willing to pay a modest fee for an otherwise free service in order to limit the data collected about them
3) They may be willing to trade some level of convenience by limiting the effectiveness of ads and suggested products/services in exchange for improved privacy
All of these trades fundamentally come down to short term convenience and consumer choice. In that context, they are all fair and reasonable expectations. In the short term, they can and will influence companies in these regulated countries to improve their privacy programs and offer more choice for consumers. This approach will make privacy stakeholders quite happy and it should incrementally move the privacy needle forward. However, the reality is that there are not many consumers voting with their wallet on privacy. How many people do you know who have:
1) Dropped their Facebook account after the Cambridge Analytica incident?
2) Left their Marriott points behind and switched to Hilton after their GDPR fine?
The answer is not very many; a shockingly low number. The reality is that unless privacy starts to effect consumer buying decisions, many of these companies just won't take their privacy as seriously as they should. So what should we do about it? I am going to argue absolutely nothing...
I realize this is a controversial position for someone with a strong cyber security and data privacy background. However, I also believe there are a set of darker and deeper truths lurking that are not being fully acknowledged. In particular, I believe there are two fundamental truths that should be the guiding principles in how we think about privacy:
1) Privacy is an illusion and the ability to maintain it over time will steadily decrease
2) Allowing companies to store and process as much data as possible is significantly more important than our privacy
Let's start with the privacy illusion. The fact is most companies know a lot more about you than you think. Even for the most paranoid people who safeguard their privacy religiously, data is harvested from multiple public sources, purchasing records, and internet history to build a remarkably accurate profile of their demographics, their interests, and what they are likely to buy. In general, this is a good thing. It makes your consumer experience more efficient, lowers your costs since companies pay less to acquire customers, and raises company profits. The younger generation takes this for granted, having grown up in a digital economy with a large number of convenience services and easy ways to connect and share with people online. The fact is that the large majority are perfectly willing to trade some level of privacy for the convenience these services offer.
In addition, it will become increasingly difficult to maintain your privacy over time. While GDPR will help on issues like browser cookies, it will be impossible to combat the coming wave of machine vision. Everywhere you go you now leave a digital footprint behind. Cameras are ubiquitous and over time will become intelligent. They will know it is you, what brands you are wearing, where you were located during different periods of the day, and feed that data to algorithms that can better target goods and services to your specific needs. Machine vision is inevitable and progress towards improving the algorithms is moving at an exponential pace. Short of a Harry Potter invisibility cloak, how do you protect your privacy in a world of machine vision?
This brings us to the last and most important point. We have to find a way to allow our US companies to securely store and process as much data as possible. I would argue that this should be a top level national goal of US policy. While the current privacy debate has focused on privacy versus convenience, I believe that to be a false choice and a very short sighted view. The reality is that we are in an exceptionally dangerous time where the true choice is privacy v/s the economy where we are in a Highlander situation - there can be only one!
The rise of Artificial Intelligence (AI) is occurring so much faster than anyone realizes and progress is accelerating on an exponential curve (to better understand this problem, I highly recommend Thomas Friedman's book - "Thank You for Being Late - An Optimist's Guide to Thriving in the Age of Accelerations"). While nobody knows exactly when, it is widely acknowledged that between 20-50% of jobs can ultimately be replaced by AI. This could happen in 10 years or in 50 years (if I was a betting man, I would bet on sooner rather than later). Regardless of when it happens, it is inevitable that it will happen. And once it does, the country that controls the AI wins. Since most of the jobs will go away, the wealth will accrue to the country that dominates in AI (which will allow them to subsidize their job losses with things like Universal Basic Income). If we don't win the AI arms race, it will absolutely devastate our economy and shift the balance of power in the world to the AI winner.
There are only two real players in this game: China and the US. In general, they have the most computing power, intellectual bandwidth to throw at the problem, and economic ability to invest. So what is most likely to decide the game? It will all come down to data. There has been little in the way of fundamental new algorithmic breakthroughs in the last few decades. Most of the progress has been applying older algorithms but with almost unlimited processing power and huge amounts of data. With the hyper scale cloud, almost anyone can gain access to unlimited computing power. The real differentiator then will be access to data. Who then has the advantage?
China does... and it isn't even close. What advantages do they hold? Let's list a few:
1) They have over 4x the population of the United States meaning they have 4x the personal data they can collect.
2) They have no expectation of privacy meaning their government can collect any data they want, at any time they want, and in any way they want. In contrast, the US, Europe, Japan, and other large/democratic economies are moving towards increased privacy laws and less data collection.
3) In 2019, China passed the US in retail sales as the largest market in the world while also having a more digital ecosystem for buying goods and services (less reliance on cash transactions meaning more data collected).
What this means is that China is best positioned to strategically win the AI race. The currently dominant non-China economies are making short term bets on privacy in response to consumer sentiment. However, I do not believe that consumers understand the choice they are making. I also don't believe that most consumers actually care that much about their privacy. What if the choice was framed as, "would you be willing to improve the privacy of your data in exchange for a 20-50% chance you will not be able to find a job in the next 10-15 years?" I don't know many people who would make that bet on privacy; not when it is privacy versus the economy and there can be only one....
Interested in learning more about big data and privacy, Contact Us today to learn how to unlock your data and thrive in the next-generation digital economy. | <urn:uuid:0df52adb-60df-4b09-b2fc-16bb8ecfb560> | CC-MAIN-2022-40 | https://www.c2labs.com/post/privacy-v-s-the-economy-there-can-be-only-one | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00505.warc.gz | en | 0.962116 | 1,755 | 2.515625 | 3 |
The term “big data” has been bandied about for a number of years now, to the point where it has been used so much that it is a part of IT culture. Hard to specifically define, yet everyone seems to have a good idea what is meant by it, big data is here to stay. And that is a good thing!
I typically define big data as the result of a confluence of trends arriving at the same time. Incessant data growth, alternative data storage and management systems (Hadoop, NoSQL), improved analytical tools, AI and machine learning, cloud computing, social media, sensor-based data, and mobile computing have all contributed to what we refer to as big data.
Moreover, big data refers to the shift from not just disk-based data storage, but also in-memory storage and processing. It is not just relational, but also NoSQL—and not just DBMS but also Hadoop and Spark. It is not just commercial software but also open source. Not just on-premises data and computing, but also in the cloud. Take note that these shifts are not resulting in the replacement of technology and capabilities but in the addition of it. Relational databases are not outdated or obsolete but should be a core component of your multiple data platform strategy.
Furthermore, it is not just the technology we use but how we are using it and what we are doing with it that is shifting. Big data is a result of the transition from mostly internal data to information from multiple sources; from transactional to add analytical data; from structured to add unstructured data; from persistent data to add data that is constantly on the move.
I’m sure you will recall the analyst definition of big data as consisting of four V’s: Volume, Velocity, Variety, and Variability. Although interesting, and a noble attempt at defining something so all-encompassing as big data, I don’t think it matters much.
Other analysts have denigrated the term big data altogether, saying that it is not about the volume of data so much as what we are doing with it. Well, sure, but that has always been the case.
To me, big data is so simple it needs no definition. It is similar to saying big dog because you immediately know what I’m talking about. Big data is all about a lot of data. Big data doesn’t have to be NoSQL. And, you don’t have to sit there counting up your V’s to see if you’re doing it. Real-time analytics on large relational data warehouses qualifies as big data to me. And, it should to you, too. Our heritage transactional systems are generating a large amount of data that is the most interesting for large enterprises to process in their big data analytics systems.
The point I’m making here is given in the title of this piece: It is all big data! And that is the way you should be thinking. How can we better store, manage, integrate, administer, analyze, and ingest all of our data to make better business decisions? How can we augment our data with partner data, social media data, and other sources of relevant data? What tools will help us do that?
If you are a DBA, then all of the management and administrative tools that you use or need to manage databases at your organization are big data tools. By adjusting the way you think about your requirements, you can focus your budget requests to hit that “big data budget” and perhaps finally get those performance or recovery tools that you’ve needed for years. The amount of data DBAs are managing is growing at many times the rate at which the number of DBAs is growing, so management and automation tools will be imperative to succeed.
Although I’m usually skeptical of industry trends, this one is different. Many recent IT trends have been process-orientated (e.g., object-oriented programming, web services, SOA), but I believe that data is more important than code. As I’ve stated before, applications are temporary, but data is forever! And if the big data trend helps us better protect, administer, and use our data, then I’m all in favor of it.
We can use the rise of big data to the forefront of computing as a means to improve data quality, institute data governance, and pay more attention to our data management infrastructure. After all, if you’re going to have big data, it had better be good big data. Big data forever! | <urn:uuid:e4d603ec-8765-448e-854f-f6e4fa5aa459> | CC-MAIN-2022-40 | https://www.dbta.com/Columns/DBA-Corner/Its-All-Big-Data-123022.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00505.warc.gz | en | 0.954138 | 955 | 2.5625 | 3 |
ISO 27001 Prevents Cyberattacks
Implementing an ISO 27001 Information Security Management System (ISMS) prevents cyberattacks. The Ponemon Institute in a 2017 study found that a typical firm experiences 130 security breaches each year.
Mitigating these breaches requires more than advanced IT practices; it requires a dedicated management system. ISO/IEC 27001 is such a system. It includes processes for human resource security, physical and environmental security, and dealing with information security incidents.
The Real Cost of Cyber Attacks
The Cost of Malicious Cyber Activity to the U.S. Economy, released by the White House in February of 2018, estimates that such attacks cost the U.S. economy between $57 billion and $109 billion in 2016. In 2021, an insurance company paid out $40 million in ransom. However, these attacks can inflict damage that is difficult to assess or quantify in dollar amounts. While most incidents are kept out of the public eye, a few attacks, like the Colonial Pipeline attack in May of 2021, do make headlines.
What is ISO 27001?
ISO 27001 is an international standard and widely accepted Information Security Management System. The role of an ISMS is to preserve confidentiality, integrity and availability of information. It accomplishes this task by applying risk management processes. An effectively tailored program can meet this challenge because it is part of the organization’s processes and management structure.
Implementation of an effective ISMS requires an assessment of the organization’s objectives, security requirements, and organizational processes. These assessments include a consideration of the size and structure of the organization so that the ISMS is scaled to meet the needs of the organization.
Once these influencing factors have been defined, a risk assessment can be conducted. This process should (a minimal scoring sketch follows the list):
- identify the information security risks
- identify the risk owners
- assess the potential consequences of an undesired occurrence
- assess the realistic likelihood of the occurrence
- determine the levels of risk
- establish priorities for treatment of the risk (e.g. implementation of information security controls)
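Here is a minimal sketch of how such an assessment can be scored and prioritized; the three-point scales, example risks, and banding thresholds are my own illustration, since ISO 27001 does not mandate any particular scale:

```python
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_level(likelihood: str, impact: str):
    """Score a risk as likelihood x impact and band it for treatment priority."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    band = "low" if score <= 2 else "medium" if score <= 4 else "high"
    return score, band

# (risk, owner, likelihood, consequence) entries from the steps above.
risks = [
    ("unpatched web server", "IT operations", "likely", "severe"),
    ("lost laptop with customer data", "HR", "possible", "moderate"),
]
for name, owner, l, i in sorted(risks, key=lambda r: -risk_level(r[2], r[3])[0]):
    score, band = risk_level(l, i)
    print(f"{name} (owner: {owner}): score {score}, treat as {band} priority")
```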
The Advantage of Implementing an ISMS
Because ISO 27001 is configurable to your company's requirements, it is an effective means of organizing data security: it defines a complete process and involves all stakeholders in monitoring and preventing cyberattacks. ISO 27001 also includes training to maintain a high state of awareness for all employees.
An ISMS can readily address numerous issues because it centers on policies and processes that are adopted from top management down and include all stakeholders, including third parties.
As an example, organizations continually struggle to keep software up to date, often because of segregation of tiers and organizational turf battles. With an effective ISMS these issues are identified and dealt with at a management level and communicated through policies, procedures, and work instructions. Additionally, because metrics are established, monitored, and analyzed, deficiencies in processes can be identified and remedied.
The security of data is not only of great concern to your organization; it is also of interest to your customers, investors, and partners. ISO/IEC 27001 certification shows that your company is a responsible partner that maintains an active interest in monitoring and mitigating cyberattacks.
CVG Strategy Cyber Security Consulting and Training
Cyber Security Consulting
CVG consultants have over a decade of experience with ISMS, Quality Management Systems (QMS), and Export Compliance. We understand that each business has a unique set of requirements that demand tailored solutions. Developing these solutions requires assessing an organization's culture and involving all stakeholders. Using this information, we can develop programs that are effective and can adapt as a business grows.
Dealing With The X-Ray Films
Warning — This is the page where it will be most obvious that I am an electrical engineer and not a radiologist or a dentist.
There are two main X-ray film types to deal with — a small intraoral ("inside the mouth") film, 40x30 mm, and a large extraoral ("outside the mouth") film, 280x125 mm. The films are used in various ways. The intraoral film might be used for a periapical ("around the (root) tip") view capturing the root tip, or a "bitewing" view capturing the crowns and the alveolar bone and its loss. The relatively small film is held inside the mouth and the X-ray source positioned outside, the difference between periapical and bitewing coming down to film placement and X-ray illumination direction. "Bitewing" films are named for the wing-shaped tab the patient bites to hold the film in place.
However, as far as scanning goes, periapical and bitewing have the same characteristics of density average and variation. There is a great deal of variation within the set of periapical films, and a great deal of variation within the set of bitewing films. Any systematic difference between the categories of periapical and bitewing films is overwhelmed by the huge variation within those categories.
The much larger 280x125 mm film is used for panoramic imaging. The patient typically sits in a special chair with their chin in a cup-like holder. An arm holds the X-ray source on one side of their head, and on the opposite site, the large film moves behind a slit in a shielded panel. The arm swings around their head, and in some system designs, the chair may translate slightly to the side at the half-way point.
The result is a panoramic view of the entire mouth and surrounding structures. Depending on the system design, there may be a blank region in the center where the system avoided attempting to image through the spinal column. That part of the film is unexposed, so it's clear. It looks "washed out" to me, especially since that central unexposed area has an indistinct boundary.
Notice how the plastic sleeve design varies. The one at lower left in the first picture below would be the best one for scanning purposes, as it keeps the individual films relatively close together. The background outside the sleeve could easily be masked off and give us a scan with a relatively small fraction of bare backlit background.
But then look at the sleeves at upper left and lower right and see how they leave more space between the films. Those films are slightly misaligned with respect to each other.
I discovered a number of data collection rules:
All 40x30 mm films must be left within their sleeves. The archival information (name, date) is written on the sleeves. Films spread out on a table are easily mixed and confused. There is nothing on a film itself to identify it, unless you find some features identical to those on another film still safely nestled in its labeled sleeve.
Films cannot be realigned within their sleeves. Remember the older sleeve that had started to react with its films. Also see the staining on the sleeve at left. The stain pattern corresponds to the fully saturated areas of the film. It is clearly a chemical reaction between the film and the plastic sleeve.
It is not practical to create detailed masking for individual films or sleeves. I made some test scans of sleeves where I had cut black plastic electrical tape into narrow strips, blocking almost all the surface except that of the films themselves. That produced very high quality data, but it is much too labor intensive. Unless you are just scanning a few films, you really have to deal with the film sleeves as complete units.
The sleeves come in rolls, two pockets high. You can cut or tear them off at any length. 1x2, 2x2, 3x2, and 4x2 sleeves are all common. Some sleeves hold an odd number of films, meaning an empty pocket and a large increase in the bare background proportion of the scanned image.
The solution to the masking problem is to cut thin cardboard into useful rectangular shapes. I used a cereal box. You will need a few pieces roughly the size of one film pocket to block those empty pockets. Other pieces of varying size will be used to block the regions of the glass not used when you scan a collection of films.
In order to keep track of which scanned images correspond to which patients, you may find that this problem partly solves itself. The labels on the sleeves, either paper stickers, slips of thin cardstock, or plastic, tend to be translucent enough at the light levels used that writing on them is legible in the scanned image. You can also write with a Sharpie type pen on a slip of paper and place that on the scanner with a collection of films. The paper will be translucent enough that the thick dark Sharpie writing is easily read.
What about film grain size, or resolution?
What scanner settings should be used to capture all available information without wasting collection time and storage space? What is the grain size of the film, which will define its resolution or the limit on the size of visible details?
I looked around the Kodak web site, specifically in the dental X-ray film area, but I could find no information.
Then I was given the telephone number for Kodak's dental customer service operation. This was a specialized office dealing just with their dental X-ray film products. They were the people to ask any technical question about Kodak X-ray film.
I asked the Kodak technical representative, "What is the grain size of your DF-58 intraoral X-ray film?"
The Kodak technical representative not only didn't know the answer to my question, he didn't even know what film grain size was.
OK, this is surely my fault, I shouldn't have been sloppy and used the term "film grain size". So, I asked about resolution.
Still no clue to the answer or even to the meaning of my question.
"Well, you see, I need to know the size of the smallest visible feature in a developed X-ray film. There will be some lower limit to visible feature size based on the graininess of the film chemistry. What is that? And if you don't know, could I speak to someone who does know?"
According to what I was told by the Kodak technical representative, no one at Kodak knows what the grain size is for their dental X-ray film. They don't even know what "grain size" might be. But, he cheerily added, I should look at a specific web page on the Kodak web site because that is where everything is!
I dutifully wrote down the URL while fighting the urge to ask the phone drone how that magical web page got created when no one working for the company knows the information supposedly stored there.
I looked, and the information on that web page was limited to:
➊ Kodak film is very very good and I should buy some!
➋ Kodak film comes in bright yellow-orange boxes!
➌ Kodak thinks that their film is the best, and I should, too!
Meanwhile the Fuji dental X-ray film pages are cluttered with all sorts of useful information, including film grain size.
Since Kodak was completely unhelpful, I managed to estimate on my own that their DF-58 intraoral film has a grain size of around 0.085-0.13 millimeters. I did this by scanning a representative sampling of films at the finest resolution of the scanner, 1200 ppi or 47.2 points per mm. I placed the films on the scanner's glass plate, and then laid a piece of glass on top of them to prevent any film motion between scans. I then took multiple scans at varying illumination levels.
The images were therefore registered to one another, meaning that the pixel at some row and column in one image should correspond to the same point on the film as the pixels at the same row and column location in the other images. Those pixels should be of the same point on the films, as long as the scanner carriage's motion was exactly the same from one scan to the next. It was not exact but very close, within two or three pixels. I was impressed with the precision of my US$ 7.50 instrument.
The resulting images were quite large, so I extracted subimages from high-detail areas of the different films, typically a small region around the neck of a tooth showing its enamel, its dentin, and some of the surrounding alveolar bone. This region would be no more than about 450 pixels on a side, meaning a region less than 1 centimeter wide on the film. Those were then displayed at twice their size on the screen.
Comparing these subimages by flipping back and forth in the sequence, you can see that there is a slight graininess in areas otherwise of constant medium grey level. And, this graininess is consistent from one image to the next — it is a feature of the film and not sensor noise. I found that the smallest grain features constant from one image to the next were between 4 and 6 pixels across when scanned at 1200 pixels per inch. The visible structural details of the teeth and bones were all larger than this grain size.
4 to 6 pixels at 1200 ppi means that the grains are 1/300 to 1/200 of an inch across, or 0.084667 to 0.127 millimeters. So, scans at 400 ppi (15.75 points per millimeter) should measure at least one point within each film grain, and therefore capture all the information available in the mysterious Kodak DF-58 film.
The large panoramic films can be scanned at lower resolution without noticeably losing information. Lower resolution scans allow the scanner to run faster, although I noticed a non-linear relationship for the time required for a scan of the full scanning area:
|100 ppi||40 seconds|
|200 ppi||60 seconds|
|300 ppi||61 seconds|
|400 ppi||107 seconds|
See Grzegorz Jezierski's on-line collection of X-ray tubes for lots of pictures and the history behind the devices.
Conventional photographers might wonder why this film is so large grained. It's to reduce the needed X-ray exposure.
As films have gotten faster (and larger grained), the X-ray exposures have gotten shorter and shorter.
No more extra high dose imaging as with the systems at the Walter Reed Army Hospital's museum of medical history, where you can also see iron lungs, Abe Lincoln's skull fragments, and more.
What about film density?
The density of the film, the measure of how much incident light will pass through a point on the film, is certainly important. If I were designing a complete system from scratch, I would want to take this into account. However, since I am cobbling this together from an existing low-cost scanner, there doesn't seem to be much I can really do about this. The AGC (automatic gain control) effects of the scanner hardware and the scanner driving software will hide the true film density from me, in an attempt to make the original scanner design relatively easy to use. A later page discusses light box and software settings to deal with varying film density.
Thanks to a suggestion from someone who read these pages, you might want to see the Non-Destructive Testing / Non-Destructive Evaluation Center's educational material on radiographic testing, and especially their page on radiographic film density.
Now you're ready for the next step: learning how to use the XSane scanning software | <urn:uuid:82bdf677-4045-429e-9d7f-076f4c774c27> | CC-MAIN-2022-40 | https://cromwell-intl.com/3d/xray/film.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00505.warc.gz | en | 0.954676 | 2,472 | 2.640625 | 3 |
B.1.620 Lineage: According to a new multinational research led by researchers from Lithuania, a new SARS-CoV-2 Lineage called B.1.620 has emerged from most probably Cameroon in Central Africa and has spreading silently across the globe silently while the world was more focused on the Delta variant and its emerging sub-variants.
This new immune evasive variant B.1.620 not only exhibits the spike protein mutation E484K but also carries a large number of unique mutations.
In fact, this new SARS-CoV-2 strain is 18 mutations away from nearest relatives and 26 from the reference strain Wuhan-Hu-1.
This worrisome B.1.620 variant shares multiple mutations and deletions with known SARS-CoV-2 variants of concern (VOCs) including HV69/70Δ, LLA241/243Δ, S477N, E484K, and P681H. However, it does not appear to be of recombinant origin.
This new lineage now includes genomes from France, Switzerland, Belgium, Germany, England, Scotland, Italy, Spain, Czechia, Norway, Sweden, Ireland, Portugal, the United States, Canada, and, most recently, the Philippines and South Korea. Initial B.1.620 European cases included travelers returning from Cameroon; however, more recently, genomes are also being submitted to GISAID from the Central African Republic, Equatorial Guinea, the Democratic Republic of the Congo, Gabon, and the Republic of Congo.
It is currently causing a COVID-19 resurgence in Lithuania with worrying more critical outcomes.
In the study abstract, the researchers commented, “Distinct SARS-CoV-2 lineages, discovered through various genomic surveillance initiatives, have emerged during the pandemic following unprecedented reductions in worldwide human mobility. We here describe a SARS-CoV-2 lineage – designated B.1.620, discovered in Lithuania and carrying many mutations and deletions in the spike protein shared with widespread variants of concern (VOCs), including E484K, S477N and deletions HV69Δ, Y144Δ, and LLA241/243Δ.
As well as documenting the suite of mutations this lineage carries, we also describe its potential to be resistant to neutralizing antibodies, accompanying travel histories for a subset of European cases, evidence of local B.1.620 transmission in Europe with a focus on Lithuania, and significance of its prevalence in Central Africa owing to recent genome sequencing efforts there. We make a case for its likely Central African origin using advanced phylogeographic inference methodologies incorporating recorded travel histories of infected travellers.”
The study findings were published in the peer reviewed journal: Nature.
The WHO and U.S.CDC has been notified of the recent developments and the B.1.620 is likely to be upgraded to a VOC (Variant of concern) status in coming days.
Over a year into the pandemic and with an unprecedented reduction in human mobility worldwide, distinct SARS-CoV-2 lineages have arisen in multiple geographic areas around the world1,2,3. New lineages are constantly appearing (and disappearing) all over the world and may be designated variant under investigation (VUI) if considered to have concerning epidemiological, immunological or pathogenic properties.
So far, four lineages (i.e. B.1.1.7, B.1.351, P.1 and B.1.617.2 according to the Pango SARS-CoV-2 lineage nomenclature4,5) have been universally categorised as variants of concern (VOCs), due to evidence of increased transmissibility, disease severity and/or possible reduced vaccine efficacy. An even broader category termed variant of interest (VOI) encompasses lineages that are suspected to have an altered phenotype implied by their mutation profile.
In some cases, a lineage may rise to high frequency in one location and seed others in its vicinity, such as lineage B.1.177 that became prevalent in Spain and was later spread across the rest of Europe2. In others, reductions in human mobility, insufficient surveillance and passage of time allowed lineages to emerge and rise to high frequency in certain areas, as has happened with lineage A.23.1 in Uganda6, a pattern reminiscent of holdover H1N1 lineages discovered in West Africa years after the 2009 pandemic7.
In the absence of routine genomic surveillance at their origin location, diverged lineages may still be observed as travel cases or transmission chains sparked by such in countries that do have sequencing programmes in place. A unique SARS-CoV-2 variant found in Iran early in the pandemic was characterised in this way8, and recently travellers returning from Tanzania were found to be infected with a lineage bearing multiple amino acid changes of concern9.
As more countries launch their own SARS-CoV-2 sequencing programmes, introduced strains are easier to detect since they tend to be atypical of a host country’s endemic SARS-CoV-2 diversity, particularly so when introduced lineages have accumulated genetic diversity not observed previously, a phenomenon that is characterised by long branches in phylogenetic trees.
In Rwanda, this was exemplified by the detection of lineage B.1.3806, which was characteristic of Rwandan and Ugandan epidemics at the time. The same sequencing programme was then perfectly positioned to observe a sweep where B.1.380 was replaced by lineage A.23.16, which was first detected in Uganda10, and to detect the country’s first cases of B.1.1.7 and B.1.351. Similarly, sequencing programmes in Europe were witness to the rapid displacement of pan-European and endemic lineages with VOCs, primarily B.1.1.7 (e.g. Lyngse et al.11).
Given the appearance of VOCs towards the end of 2020 and the continued detection of previously unobserved SARS-CoV-2 diversity, it stands to reason that more variants of interest (VOIs), and perhaps even VOCs, can and likely do circulate in areas of the world where access to genome sequencing is not available nor provided as a service by international organisations.
Lineage A.23.110 from Uganda and a provisionally designated variant of interest A.VOI.V29 from Tanzania might represent the first detections of a much more diverse pool of variants circulating in Africa. We here describe a similar case in the form of a lineage designated B.1.620 that first caught our attention as a result of what was initially a small outbreak caused by a distinct and diverged lineage previously not detected in Lithuania, bearing multiple VOC-like mutations and deletions, many of which substantially alter the spike protein.
The first samples of B.1.620 in Lithuania were redirected to sequencing because they were flagged by occasional targeted PCR testing for SARS-CoV-2 spike protein mutation E484K repeated on PCR-positive samples. Starting April 2nd 2021, targeted E484K PCR confirmed a growing cluster of cases with this mutation in Anykščiai municipality in Utena county with a total of 43 E484K+ cases out of 81 tested by April 28th (Supplementary Fig. S1).
Up to this point, the Lithuanian genomic surveillance programme had sequenced over 10% of PCR-positive SARS-CoV-2 cases in Lithuania and identified few lineages with E484K circulating in Lithuania. During initial B.1.620 circulation in Lithuania the only other E484K-bearing lineages in Lithuania had been B.1.351 (one isolated case in Kaunas county, and 12 cases from a transmission chain centred in Vilnius county) and B.1.1.318 (one isolated case in Alytus county), none of which had been found in Utena county despite a high epidemic sequencing coverage in Lithuania (Supplementary Fig. S2).
An in-depth search for relatives of this lineage on GISAID12 uncovered a few genomes from Europe initially, though more continue to be found since B.1.620 received its Pango lineage designation which was subsequently integrated into GISAID. This lineage now includes genomes from a number of European countries such as France, Switzerland, Belgium, Germany, England, Scotland, Italy, Spain, Czechia, Norway, Sweden, Ireland, and Portugal, North America: the United States (US) and Canada, and most recently The Philippines and South Korea in Asia.
Interestingly, a considerable proportion of initial European cases turned out to be travellers returning from Cameroon. Since late April 2021, sequencing teams operating in central Africa, primarily working on samples from the Central African Republic, Equatorial Guinea, the Democratic Republic of the Congo, Gabon and lately the Republic of Congo have been submitting B.1.620 genomes to GISAID.
We here describe the mutations and deletions the B.1.620 lineage carries, many of which were previously observed in individual VOCs, but not in combination, and present evidence that this lineage likely originated in central Africa and is likely to circulate in the wider region where its prevalence is expected to be high.
By combining collected travel records from infected patients entering different European countries, and by exploiting this information in a recently developed Bayesian phylogeographic inference methodology13,14, we reconstruct the dispersal of lineage B.1.620 from its inferred origin in the Central African Republic to several of its neighbouring countries, Europe and the US. Finally, we provide a description of local transmission in Lithuania, France, Spain, Italy, and Germany through phylogenetic and phylogeographic analysis, and in Belgium through the collection of travel records. | <urn:uuid:de7dfc02-027d-4e41-a8f9-a039a852668d> | CC-MAIN-2022-40 | https://debuglies.com/2021/10/29/a-new-sars-cov-2-lineage-called-b-1-620-has-emerged-from-most-probably-cameroon-carrying-a-large-number-of-unique-mutations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00505.warc.gz | en | 0.950738 | 2,057 | 2.65625 | 3 |
PFOA, an ingredient in Teflon, is far more prevalent in American drinking water than previously thought.
It takes a lot to convince the US Environmental Protection Agency to limit how much of a toxin can legally show up in America’s drinking water. The threshold for determining probable human harm is very high, and even if harm is detected, the toxin has to show up in enough public water sources with enough frequency, and at levels sufficiently high, before the EPA considers it significant enough to regulate. And even after all the exhaustive studies are done, the decision ultimately comes down to “the sole judgment of the Administrator,” the head of the agency, who may or may not be swayed by the data.
Under the Safe Drinking Water Act, the EPA is responsible for determining when a chemical needs to be regulated in the US water supply, but it hasn’t added a new toxin to its list since 1996. (Even the Government Office of Accountability thinks that’s a sign of a broken system.) In the past two decades, tens of thousands of new chemicals have come onto the market, and plenty of others that pre-date 1996 have been discovered to harm human health.
For example, newly released lab results show perfluorooctanoic acid, or PFOA (an ingredient in Teflon, the chemical used to make non-stick cookware) is far more prevalent in American drinking water than previously thought. Exposure to PFOA has been linked to a range of health risks including cancer, immune system issues, and developmental problems in fetuses.
The EPA’s official estimate is that PFOA is in about 1% of US water supplies. A reanalysis by the same lab that helped the EPA reach that number did a reanalysis of the underlying data and found the real number is more likely in the 20% range. And about 28% of water supplies are contaminated with some member of the perfluorinated compound family.
In 2013, as part of the effort to decide whether to regulate six perfluorochemicals, including PFOA (and perfluorooctane sulfonate or PFOS, a widely used flame retardant), the EPA required every water authority nationwide serving more than 10,000 people to test for the compounds. The EPA hired three labs to perform the tests, including California-based Eurofins Eaton Analytical, which was responsible for about a third of the 36,000 total tests done at the time. Eaton Analytical’s results showed some contamination, but it didn’t look particularly widespread.
But it turns out that was because of the threshold the EPA was using; Back then, the EPA decided only samples that tested positive for 20 parts per trillion (ppt) or higher of PFOA should be counted and only 40 ppt or higher of PFOS, after deciding that was the lowest amount of the chemicals the labs could reliably detect in samples. But Eaton Analytical told the EPA it was able to test for the chemicals at much lower levels—as low as 2.5 ppt.
According to Andrew Eaton, the technical director of Eaton Analytical, the EPA’s PFOA and PFOS thresholds were set so high because the other two labs hired to do the testing in 2013 couldn’t reliably detect the chemicals at as low levels as his lab could. Eaton says those dramatic differences should have made the agency look harder for capable labs. “That should have given the EPA pause to say ‘Hmm, why were there such big differences here?’” Eaton told the Bucks County Courier Times earlier this month. “If you’re not seeing something because you looked too high, you’re not really doing your due diligence,” he said.
The lab recently went back and re-mined its 2013 data, using lower thresholds than the EPA previously said they wanted to hear about. That’s when they found that 20% of the samples contained the toxin, which means the EPA may be vastly underestimating how widespread contamination from this class of toxins really is.
Research also suggests that the toxins harm human health at much lower levels than the EPA threshold. According to David Andrews, a senior scientist at the Environmental Working Group, there is debate in the toxicology community as to whether, like lead, there is actually no safe level of exposure, particularly in children who can accumulate more of it than adults and where some studies have suggested an association with behavioral and developmental problems.
PFOA is not currently regulated by the EPA, so state or local governments aren’t required to test for them. The EPA does set a recommended maximum exposure level for PFOA at 70 parts per trillion. But it’s nonbinding: states can choose to comply or not. In New Jersey, the local environment department has set the “acceptable” level at 14 ppt, the most stringent in the country (there is a lot of PFOA in New Jersey’s drinking water—the Environmental Working Group calculated the EPA’s testing method would have missed 75% of the contamination in that state).
PFOA and PFOS keep turning up in drinking water supplies in US towns and cities. Both are cancer-causing toxins, and in some cases, residents may have been drinking one or the other in their water supply for decades—like in Hoosick Falls, New York, where Teflon was long manufactured, and where a resident found high levels of PFOA in the drinking water. The EPA named the area New York’s newest Superfund site last year. Before that, PFOA was found to be heavily contaminating the groundwater in a cancer-riddled town in West Virginia, home to a large factory where Dupont made Teflon.
Contamination has been so widespread in the past, Andrews says, that “everyone in the US already has these chemicals in our blood. You really don’t want to add more to that.”
After years of debate and a major scientific report connecting PFOA to two cancers and several other serious diseases, the EPA was rumored to start regulating it this year. That hasn’t happened, and the policies of the people currently running the agency don’t bode well for a rule in the future. As the New York Times reports, a scientist who worked for the chemical industry now shapes policy on hazardous chemicals at the EPA. She has moved to change how risks from chemicals are evaluated, requiring the agency to look only at hazards associated with specific “conditions of use” of a chemical, rather than at all hazards posed by all routes of exposure to the chemical regardless of what it was used for. The change makes it harder to evaluate risk—and therefore to regulate—toxins like PFOA.
Eaton told Quartz he gave a presentation about his new conclusions to EPA employees this year. He says they agreed with him that their agency might have missed something. “EPA has seen the presentations—their initial reaction was, ‘Gosh, we set the reporting limits based on what the science told us at the time, and you’re right, we probably should have looked more closely at what the science told us about reporting limits.’”
EPA spokesperson Enesta Jones defended the EPA’s approach to the study, telling the Courier Times this month that the “EPA is aware that some laboratories are able to achieve [lower] reporting limits,” but that the limits were “established so that a national array of laboratories could meet them and were based on looking at the capability of multiple commercial laboratories.”
Jones said the EPA is expected to reach a decision about the toxin in 2021. | <urn:uuid:8f3ec7b1-ff06-4504-8347-0a42c177953a> | CC-MAIN-2022-40 | https://www.nextgov.com/cxo-briefing/2017/11/new-analysis-4-year-old-data-shows-epa-ignoring-lot-toxins-us-drinking-water/142384/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00505.warc.gz | en | 0.970431 | 1,603 | 3.015625 | 3 |
How to Choose the Right Access Layer Switch?
A typical enterprise hierarchical LAN campus network design includes access layer, distribution layer, and the core layer. In each layer, the enterprise switches are categorified, among which the access switch is a key part at which local end-users are allowed into the network. This article will introduce what the access switch is and how to select the right access layer switches for your enterprise network.
What is the function of the access layer?
The access layer, as the lowest layer of the hierarchical internetworking model, is also referred to as the desktop layer. It plays the role of connecting end-users or end nodes such as PCs, printers, wireless access points to the network. The access layer is supposed to facilitate the continuous network connection of the end devices no matter where they are located. In the meantime, the design of access layer must take consideration of the upper layer connections. The access layer must ensure security as the first layer as well as the first line of defense for the network.
What does an Access Layer Switch do?
As the physical entity of the access layer, access switches are responsible to connect both to the distribution layer switches and to the end devices as well as ensure the packets are delivered to the end devices. Besides ensuring the persistent connection of end users as well as the upper distribution and core layers, an access switch is expected to meet the requirement of access layer including simplifying the network management, providing security services and other specific functions according to the different network environments.
Factors to Consider When Choosing Access Layer Switches for Enterprises
When choosing access layer switches, there are many points to consider, such as port density, port speed, security, scalability, deployment and management method, as well as cost. Let's learn them one by one.
Port density refers to the number of ports available on a single switch. An access layer switch should support high port density since it is connected to a large number of end-users and devices. It is essential to consider how many devices are required to connect to the access layer, then decide how many switch port number you will need for the access layer.
The port speed of the access switch is the primary concern to end-users. Most access switches come with 10/100/1000Mbps ports. Whether to use Fast Ethernet or Gigabit Ethernet depends on the specific requirements of your network. Though Fast Ethernet is enough for IP telephony and data traffic on most small to medium enterprise networks, its performance is much lower than Gigabit switches. Moreover, it is vitally important to choose high bandwidth uplink ports in case that the uplink port is oversubscribed when the required bandwidth is greater than the available maximum bandwidth to distribution layer switches. Therefore, you’d better choose the access layer switches with suitable port density and types as needed.
Since the access layer is the network edge, it plays a critical role in defense for security. Access control services such as 802.1x must be supported in access layer switches to secure your LAN. Furthermore, access switches should support the segmentation of traffic through VLANs. IP source guard, DoS protection and other techniques should also be provided to prevent security from attacks.
The number of users in an enterprise network changes over time, therefore it is imperative to think about how many users will the network require in the future. The network design needs to meet the requirement for the enterprise environments in three to five years so that the access switches you choose can provide the opportunity for the network upgrading smoothly as time goes by. Stackable switches such as FS S3910 series stackable switches are good choices for greater scalability in case of future use, because they allows up to four ones stacked to working as one unit, then you will get at most 192 1G rj45 ports for use. Network managers can stack these Gigabit switches anytime to expand the number of connected devices when the network size grows.
Simplified Deployment and Management
In a high-density access network environment, it is important to consider simplifying the network deployment and management of numerous end devices. Accordingly, access switches should be fast to deploy and easy to manage for network administrators. PoE technology is normally provided as an option for simpler deployment to enable access layer switches to supply power to end devices such as wireless APs and security cameras, providing simplified deployment to a large number of devices in the access layer. FS Gigabit PoE+ switches offer easy management and maintenance with many management modes such as Web GUI and CLI supported for more efficient installation and management.
Pricing usually depends on the number of ports. Generally you’re looking at $250-$300 for 20 ports without PoE support. PoE models cost more, but if you’re planning on using your switch for Internet telephony, it is a nice feature to have.
Another factor may influence the cost is the type of the access layer switch. If you want to buy a network switch designed with fiber optic ports rather than rj45 ports, you need to take into consideration the cost of optical modules.
Overall, access switches are supposed to feature with simplicity, reliability, and security. When selecting the access layer switches, the primary step is to assess your business needs, and choose a product that best addresses these specifications. | <urn:uuid:a3289611-c6c7-4fca-9107-5d4dffd0d5f8> | CC-MAIN-2022-40 | https://community.fs.com/blog/how-to-choose-the-right-access-layer-switch.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00705.warc.gz | en | 0.928057 | 1,103 | 2.609375 | 3 |
|Start Here for the Tutorials||TSO Applications|
The Integrated Development Environment (IDE) is where you compile, edit and debug all your applications.
This is the first session of the tutorials in the Getting Started book. You need to work through this before you do any of the others. You need to have read the chapter Start Here for the Tutorials before doing this one.
The IDE integrates all the tools you need for editing, compiling and debugging applications written in COBOL and Assembler. It contains extensive online documentation for these tools and for the COBOL language.
Mainframe Express enables you to develop and run your mainframe applications on a PC, but it doesn't emulate a mainframe-type user interface. On the contrary it has an interface of the type common on Windows, using windows, menus, pushbuttons, toolbars, and so on. The IDE is this interface. Such an interface is known as a GUI (graphical user interface). Once you're familiar with this kind of interface, it's very fast, easy and powerful to use.
The easiest way to learn about the IDE is to complete a few simple tasks, so this session takes you through creating a project, compiling it, running it and debugging it - that is, stepping through the source code.
A project is a file detailing all the files in your application, and how they should be compiled. A project is very easy to create, and makes compiling extremely quick and easy. You should create a project for every application, even the simplest. On disk, a project is recognizable by the extension .mvp at the end of its name. However, you never need to look at a project directly because, like most things in Mainframe Express, you create and maintain it entirely using the IDE.
The folder where you keep the project for an application is called its project folder. You could keep the application's other files anywhere, since the project contains pointers to them, but it's usually convenient to keep them all in the project folder.
When you install Mainframe Express, Setup creates the system folders containing Mainframe Express, and a folder called by default d:\mfuser. This contains several folders, of which the most important is projects. This folder d:\mfuser\projects is intended as your work area, and we suggest that you put all your project folders within it.
Mainframe Express includes many demonstration applications. Setup installs them within d:\mfuser\projects, but to avoid having too many folders directly within d:\mfuser\projects, which might be inconvenient when you come to create your own, it puts them one level further down. All demos used by the tutorials in this Getting Started book are in folders within d:\mfuser\projects\gsdemo. More advanced demos - used in tutorials accessed via the online help - are in folders within d:\mfuser\projects\demo.
If, when you installed, you specified a different name for d:\mfuser, remember to use it in these sessions instead of d:\mfuser.
The demo application used in this session is a simple application that you might have downloaded from a mainframe. In this session, you get it working on your PC. It's called Vsamdemo, and it's a sales application that reads a file of sales inquiries and allocates them to sales representatives. It consists of one COBOL program with two copybooks, together with a JCL jobstream. The COBOL source file contains some simple source errors, which you find and fix in this session.
In this session you fix the source errors in one of the source files. If you should want to run the session again, you can restore the source errors by copying vsamdemo.cbl into folder d:\mfuser\projects\gsdemo\idedemo\source from d: \mfuser\projects\gsdemo\idedemo\source\original - see the appendix Windows Tips if you need help.
In this session you:
If you've entered the IDE by clicking the Run button at the end of installing, you may still have visible the main screen of the installation utility - the one with the Install, Installation Notes, and Exit buttons - though it may be hidden behind the IDE. You can close it by clicking its button, now or at any time - it won't interfere with this session
If not, then start the IDE as follows (this is how you will always start the IDE in future):
You may get a dialog box entitled Micro Focus Protection System, warning you that your license expires in a few days. In these sessions we will ignore this warning, but later you should load Mainframe Express again and click Help on this screen for details of how to get a full license.
If you already have a full license, you get a screen telling you what options you have installed. This disappears after a few seconds and is replaced by the IDE
When you first load the IDE you get a Welcome screen as well as the IDE window. It has a check box you can set to show whether you want to get it every time. It enables you to choose whether to look at the Help file, or go straight into Mainframe Express.
The Welcome screen closes, and you see the User ID dialog box. You only get this the first time you load the IDE.
If you ever need to change your user ID, you can get this dialog box by clicking the Options menu, then User ID.
You now have the IDE on your screen. It is shown in Figure 5-1.
Figure 5-1: The Integrated Development Environment (IDE)
The large pane is where various windows such as project windows and editing windows will be opened. The pane below it is the Output window, where messages from the IDE and compiler are displayed. It has two tabs marked Build and Find in Files. When the Build tab is highlighted (that is, white), this window displays messages to show the progress of a build. When Find in Files is highlighted it displays the results of the multi-file search function.
You can change the size and shape of the IDE by dragging the edges and corners. You can detach some panes and position them elsewhere in the IDE or separately on your screen - this is called docking or undocking them. See the appendix Windows Tips if you want to do this.
In doing the sessions in this book, be aware that if anyone has used the product on your PC before you, they might have moved panes from their standard positions.
To create a project:
The first page of the Project Wizard appears. A wizard is a dialog box with a series of pages displayed in succession, to guide you through a task.
Notice that in the Folder field the folder has defaulted to d:\mfuser\projects and this is extended with \idedemo as you type in the Project Name field. Mainframe Express assumes you want your project folder to have the same name as the project itself, within d:\mfuser\projects.
As was explained in the section Demonstration Applications, we will not accept this default. This demo application is supplied in d: \mfuser\projects\gsdemo\idedemo.
Remember that where we show d: in instructions like this, you should actually enter the drive you installed Mainframe Express on.
On this page you can specify whether you will enter details of the new project, or want it based on an existing one.
On this page you can specify where you want the catalog for this project to be created. The catalog emulates the catalog on a mainframe. In it, you will later specify the physical properties and locations of the data files used by the application. By default, the catalog is created in the project folder.
If you installed any of the Mainframe Express options, such as CICS Option or SQL Option, you now get a page asking which of them it uses. There is a check box for each option installed. This project uses none of them.
(If there is no check box for one of these options, even though you installed it, it may be that you don't have a valid license for it. You should update your license before running any of the later sessions that use that option. When you next load Mainframe Express you will see a dialog box enabling you to do this.)
The project is created, the Project Wizard disappears, and a new window appears. This is the project window, in which all details of your project are displayed. The IDE with the project window is shown in Figure 5-2.
Figure 5-2: The Newly Created Project
The panes may not be positioned exactly as shown here if you've resized or moved them. If you can't see the four tabs marked Files, Workgroup, Catalog, and Spool at the bottom of the project window, move the Output window down - drag its top and bottom borders - until you uncover them.
The four tabs give you different views on the project. Initially the Files View should be at the front - click the Files tab to ensure it is. This view shows you the files in the project, in a tree structure showing their types. At present there are none. You will add them in the next section.
To add your files to the project:
This brings up the Add Files to Project dialog box, where you can select files to add.
Within the project folder are folders for files of particular types: source files are in folder source, data files in data, and so on. We recommend this as the standard to adopt for your own projects.
You should see two files, vsamdemo.cbl and vsamdemo.jcl. If they don't seem to be there, check the following.
The files are added to the project. You'll see the project window being updated. We'll look at this in more detail in a moment.
In a project where you don't want to add all the files, you can select individual files and click Add. The appendix Windows Tips gives advice on selecting multiple files.
This closes the dialog box, and Mainframe Express parses the files in the project for dependencies - that is, it scans them to find out if any other files are needed. In parsing the COBOL source files, Mainframe Express finds references to two copybooks, so it adds these copybooks to the project.
Messages in the Output window show the progress of this parsing. There are only a few messages, but if your Output window is small some may disappear off the top. In this case, a slider appears at its right-hand side. To see the earlier messages, drag this slider up or make the Output window bigger.
The project window is now as shown in Figure 5-3.
Figure 5-3: Project with Files Added
The left-hand pane of the Files View of the project window is a tree view, showing the number of files of each type. For example, the project contains two source files, one a COBOL file and one a JCL file. Dependencies are files referenced in the source files - in this case, the two copybooks. Whichever entry in the left-hand pane is selected (highlighted), files of that type are listed in the right-hand pane. To see all the files in the project, including the two copybooks added by the parsing, click Project.
Whenever a project is updated, it is saved automatically.
Although a project is in effect an organized list of files, you can add only files that already exist. You cannot add a file to the project first and create the file later.
All the source files are now in the project, and it's ready for you to build - that is, compile all the source files. But we'll look at a few more points before we do so.
The catalog emulates the catalog on a mainframe. It gives the physical properties and locations of the data files used by the application. The JCL uses it to assign the logical files referenced by the COBOL program.
You can add the file details to the catalog before or after building. On the mainframe an administrator would probably do this for you. On your PC, you can do it yourself. To add the input files:
The Catalog View comes to the front. It's empty.
At any point when using the IDE, right-clicking brings up a popup menu. This is an alternative to using the pulldown menus or the toolbar. It has the advantage that the popup menu contains only functions appropriate to the object you clicked on.
You can type in lower case. The name is automatically converted to upper case. Generally, those parts of Mainframe Express that closely emulate mainframe features show filenames in upper case.
This displays the Open dialog box, open where you left it at the folder source.
You should see three files.
This puts this filename in the PC Filename field, complete with its path.
The file is added to the catalog. Its details appear in the Catalog View.
A list with many columns like the Catalog View is known in Windows as a list control. They are used a lot in Mainframe Express. If you want help on rearranging the layout of this kind of a list, see the appendix Windows Tips. You might, for example, find it convenient to move the PC Name column nearer the DS Name column so you can see both easily.
If you made any typing errors, right-click on the entry in the Catalog View, and on the popup menu click Delete. Ensure Also delete associated PC File(s) on the dialog box is not checked (otherwise this will delete the file itself), and click OK. This removes the file from the catalog. Then repeat the steps above.
|DS Name||PC Filename||DSORG||RECFM||LRECL|
Remember the project is saved automatically when you update it, so there's no need to save explicitly while making these edits to the project.
You can add output files using the Allocate function, which is similar to Add Existing Dataset but assigns a PC filename for you. However, you shouldn't use Allocate for a VSAM file that you plan to transfer to the mainframe, because the catalog entry it creates includes no key information. Instead you should use the MFJAMS utility, which emulates the IDCAMS mainframe utility. To use it in the IDE, you click the Tools menu, then Interactive AMS, and then select the function you want from the submenu that appears.
However, this can also be done from within a JCL jobstream, and in this demo we have done it this way to save you further typing. Before executing the COBOL program, the JCL runs MFJAMS. It is guided by the file VSAMDEMO.VSAMINIT.DATA, one of the files you added as an input file for the run.
So you don't need to do anything in this section. You will see the files being added to the catalog in a later section when you run the jobstream.
(If you've used certain earlier Micro Focus products, you may be used to using an mfextmap.dat file to map dataset names to PC filenames. The User Guide describes how you can use an mfextmap.dat file in Mainframe Express.)
You can add data files to the Files View. Since they don't take part in the build, there's no need to add the data files, but doing so makes the Files View into a complete list of files needed for the application. We could have added them in the same way as we added the source files, but now they are in the catalog there's a quicker way.
Nothing visible happens, because the change happens in the Files View.
The tree view now has an entry Data(3), meaning there are three data files in the project. If you click this entry, you see the names of the three input files in the right-hand pane.
To build the project:
This builds all the files that have changed since the last time the project was built. Since you haven't built the project before, all the files are built.
The Rebuild All function on the same menu builds all files in the project, whether they need it or not. The Compile function on the same menu just compiles the currently selected file.
This one function carries out the entire build. The correct compiler or translator is automatically called for each source file - in this case, for the COBOL and JCL files. In Mainframe Express, the term "compilation" is generally used for any compilation, translation, conversion or preprocessing. Also, the term "source file" is used for any file that is the input to such a compilation - not just COBOL and Assembler source files and copybooks, but files such as CLISTs and JCL files.
Messages in the Output window keep you informed about the progress of
the build. The build finishes with the message "
with 2 errors, 0 warnings, and 0 notices". Our COBOL source
file contains two source errors.
If you get more than two error messages, check that you gave the correct name for the project folder when using the Project Wizard. Using the wrong folder would prevent the copybooks being found and so cause many errors.
Notice the button on the toolbar. The same symbol appears by Build on the Build menu. Many commonly used functions on the menus have equivalent buttons on the toolbar. It's often quicker to use these. If you leave the mouse pointer over a button on the toolbar for a moment, a brief explanation of that button appears. These explanations are called tool tips. In these sessions we'll sometimes use the menus and sometimes the toolbar.
There is no linking in Mainframe Express (except in Assembler Option, described in later sessions). Once the source files have been compiled, they are ready for execution on your PC. In complex projects the order in which files are compiled is important, since there are file dependencies. For example, a CICS BMS file must be compiled before the COBOL program that uses the copybook it produces. So compilations are always done in this order:
If some messages have disappeared off the top of the Output window, drag the slider up or make the Output window bigger. You will see two error messages: "... Operand SUBSCRIPT is not declared ..." and "...Operand SRT-FILE is not declared...".
A source view window opens, showing the source file with a red cross at the left of each erroneous line, and the cursor by the word where the first error was found. As with all syntax checkers, this may be the erroneous word itself or a little way after it.
You can get advice on fixing the error, as follows.
A window appears, explaining the error and advising how to fix it.
This compresses the source to show just erroneous lines. This is useful if errors are spread through a large source file.
There's no need to save while editing this file, because you only make a couple of changes, and it is saved automatically toward the end of this section. In any large edit, it's a good idea to save occasionally.
If you've used certain earlier Micro Focus software, you may be used to using Alt+F4 to save. In Windows, Alt+F4 closes the current session (see the system menu at the top left-hand corner of any window). If you use it in Mainframe Express, it will warn you of anything that needs saving, then close Mainframe Express.
The cursor moves to the second error. You could simply move the cursor to the error, but again, this feature is useful if errors are spread through a large source file.
"OVR" means that you are in Overtype mode. You need to be in Insert mode so that what you type is inserted before, not in place of, existing text.
This automatically saves your source file, and builds the project again. Only vsamdemo.cbl has been updated, so only this file is rebuilt. The build finishes with "Build finished with no errors".
To save space on the screen, we can close the source view window, as it is not used in the next section.
File functions such as Save and Close operate on whichever window in the IDE has focus, that is, is in front. So clicking Close here closes the source view window, not the project window.
You use the Debug menu for both running and debugging. To run the application without debugging:
The Start Debugging dialog box appears. You use it to specify where execution starts.
This dialog box appears for both running and debugging. In Mainframe Express, running and debugging use the same underlying mechanism. In running an application you are really using the debugging engine with the debugging features switched off. Consequently, running is controlled via the Debug menu, and observations we make about debugging will generally apply to running as well.
The button on the toolbar is equivalent to Run on the Debug menu.
Because this application was written for a mainframe, there is a JCL file to start it off. So we want to tell Mainframe Express to run the JCL file.
The name of the JCL file to run has defaulted to the one you selected.
The Run function displays this dialog box the first time you run or debug the application after loading the project or clicking a different executable file (JCL jobstream, CLIST, or REXX file). Otherwise, it assumes you want to start at the same place as previously, and goes straight into the run. (If you'd prefer it to display the dialog box always, you can click the Options menu, then Debug, and put a check mark by Step/Run displays Start Debugging dialog.)
The application runs. A new window, called the Application Output window, appears. It displays the screen output from the application. This window's title bar changes according to what is currently happening in it - for example, at times it is "Application Output", and at other times it is the name of the JCL job.
The COBOL itself does no screen output, but you see messages from the JCL, showing the progress of the job. It ends with "JOB ENDED". You may have to resize the window to see it properly.
If you see any failure messages in the Application Output window, check your catalog entries to make sure that you typed the filenames and other information correctly. Correct any errors, then try this section again.
To check that it's worked, we'll check that the output files have been created.
The output files have been created and added to the catalog. (The files with extensions starting ax are index files - in Mainframe Express the indexes of indexed files are kept in separate files.)
You can leave the Application Output window visible, as the next section uses it as well.
The Mainframe Express debugger gives you an intuitive way to trace the execution of your code. It provides an extensive set of debugging facilities. It is useful if the application isn't doing what you expect, or if you want to get familiar with an unfamiliar application. We will look at how to start and stop debugging. A later session demonstrates debugging in detail.
In Micro Focus products, debugging is sometimes known as animating. You may come across this term occasionally in the Mainframe Express documentation.
Unlike the Run function, Start Debugging always displays the Start Debugging dialog box.
The Application Output window is cleared and then shows the job steps as the JCL executes. When Vsamdemo.cbl is entered, a source view window showing its source appears again and execution pauses. The window is positioned at the line about to be executed, and that line is highlighted. See Figure 5-4.
Figure 5-4: Starting Debugging of Vsamdemo
We'll step through a few lines of code.
This executes the highlighted line of code. You could alternatively have clicked the Debug menu, then Step. The two are equivalent.
This is the function that single-steps the code.
If you use the Step function without first starting debugging, it starts the debugging for you. Like the Run function, it only displays the Start Debugging dialog box the first time you run or debug the application after loading the project or clicking a different executable file. (You can change this behavior if you want, in the way described for the Run function.)
While debugging, you have extensive features available for examining data and for controlling execution. A later session covers these in detail. You can also edit the source while debugging, but you cannot execute a line that you've modified until you've built again.
We'll now continue without further debugging.
At any point while debugging you can use this function to make the application continue normally without debugging.
The source view window remains visible, but is not updated.
You've now finished getting this application running on your PC. But before we finish this session, we'll look briefly at some ways you can see more details of a project.
Let's look at some ways you can view a project's files.
The Files View comes to the front.
Double-clicking a filename opens a window displaying that file. If such a window is already open, it simply comes to the front.
A window opens, displaying the contents of the data file VSAMDEMO.INQUIRY.DATA.
Mainframe Express contains editors for several types of file. Double-clicking a file's name starts the correct editor for the file type. For a COBOL or Assembler source file, a copybook, JCL file, REXX exec, or CLIST, you get the text editor. For a data file, you get the Data File Editor. For a BMS mapset (used in CICS Option, described in a later session) you get the BMS Painter. We'll see these in more detail in later sessions.
You don't have to go via the project to start an editor. You could use Open on the File menu to open a file, regardless of whether a project is open, and you'd still get the appropriate editor for the file. However, it's usually convenient in Mainframe Express to do everything via projects.
In Mainframe Express, compiling the source files of a project is called building the project. There are many options that you can set in doing this, but usually you can use the default settings.
Let's take a moment to look at how you view these settings.
The Files View comes to the front.
This brings up a dialog box with a number of tabs. What tabs are there depends on which options of Mainframe Express you installed. The options were set according to what you entered when you used the Project Wizard to create the project, but if you needed to change any, you would do it here.
Notice that the function-name includes the name of the selected file. Because it's a COBOL source file, the dialog box that appears has only fields relevant to a COBOL compilation. Earlier, because Project was selected, the function became Build settings for project.
Many settings can be set for an individual file in this way, overriding the project settings. You can also specify settings for particular file types, for example by selecting COBOL in the left-hand pane.
Let's take a moment here to look at one way you can get help whenever you're using Mainframe Express.
A popup appears telling you what the project window is for. Generally, whenever you see the Context Help button, normally or , you can use it to get quick context-sensitive help in this way. Of course, many screens also have a Help menu or Help button which takes you into the main help.
Most windows and dialog boxes in the functions accessible from the IDE have context help. The windows of the IDE itself (such as the Output window) do not. If you click on something that has no context help, you get "No Help topic is associated with this item".
Close the project by clicking Close Project on the File menu. You can't close a project by clicking its button. Doing so closes its window, but the project remains loaded. Closing a project saves it automatically.
You can't close the standard windows like the Application Output window using Close on the File menu, and they do not have buttons (unless you've undocked them). Instead, right-click on the Application Output window and click Hide on the popup menu. Alternatively, click the View menu, then Dockable Windows, then on the dialog box click by Application Output to delete the check mark, and click Close to close the dialog box. You hide these windows rather than close them - if you open them again, they are still displaying what was there before.
If you're planning to go straight on to another session, you can keep Mainframe Express open. Otherwise, either click Exit on the File menu, or click the IDE's button.
Return to the Tutorials Map in the chapter Start Here for the Tutorials and choose which session to go on to next, depending on your interests.
Copyright © 1999 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
|Start Here for the Tutorials||TSO Applications| | <urn:uuid:bfc954ba-15bf-4c0e-b773-710ae10b3420> | CC-MAIN-2022-40 | https://www.microfocus.com/documentation/MainframeExpress/mx20books/gmmide.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00705.warc.gz | en | 0.91038 | 6,173 | 3.0625 | 3 |
A widespread scheme involves scammers spoofing authentic phone numbers to lull victims into thinking they’re talking with a legitimate government officer.
The FBI issued a warning on an ongoing fraud scheme where scammers use fake credentials of well-known government agencies to take over citizens’ funds.
Fraudsters tend to use an urgent and aggressive tone, refusing to speak with anyone else but the victim. To avoid detection, criminals pretending to be government officials demand that victims’ do not contact anyone else over the issue.
Victim isolation is among the most common tactics employed by cybercriminals. Many bank on victims’ being scared or ashamed. However, notifying officials is the only feasible way to catch the perpetrators and prevent future crimes.
According to the FBI, scammers that impersonate government officials demand payment in various forms, with prepaid cards, wire transfers, and cash among the most common.
Scammers also demand payment via crypto ATMs. Victims are instructed to deposit cash to specified accounts. Interestingly, some victims are asked to mail cash, hiding the banknotes to avoid scanning devices.
Even though scammers always come up with novel ways to terrorize people, some tactics persist over time.
Scammers often claim that the victim’s identity was used to launder money or deal drugs. The accusation is followed by a demand to prove identity by providing a social security number and date of birth.
The victim is threatened with arrest if they refuse to pay a fee for a ‘removal of charges.’ While the claims might sound audacious, scammers are experienced and aggressive in communicating with people who are often caught off-guard.
Other scammers don’t necessarily aim at financial gains but try to obtain personal information that can be used in further attacks or sold online.
Some victims receive text messages from spoofed government numbers with instructions to renew their passport or driver’s license. The victims are asked to provide fraudsters with their ID details.
According to the FBI, different types of scams overlap in some cases.
For example, once a victim of a romance scam realizes they were attacked, a law enforcement impersonator attempts to contact the same person offering ‘help.’
A lottery scam victim might receive a call from a fake law enforcement officer over taxes and fees, further cheating people out of their funds.
In other cases, victims are contacted with a successful government grant application message. The threat actor then proceeds with a request to pay taxes and fees upfront to collect the grant.
The FBI's statement notes that no legitimate government official will ever demand payment over the phone; any genuine investigation or legal action will be conducted in person or by official letter.
With modern computer hardware, you can guess 100,000,000 passwords/sec, using readily available graphics processing units (that number is probably conservative). These computers have specialized co-processors that are designed to do math-intensive calculations for rendering computer graphics. They’re equally good at doing math-intensive calculations for cracking passwords. If you don’t own such a system, you can rent one on-line for a little over $2 per hour from Amazon.
Let’s take the worst case:
A password made up of upper and lower case letters, numerals and punctuation gives 95 possible characters (the printable ASCII set). A 14-character password (NIST recommended) gives you 95^14, or about 4.8 x 10^27, combinations. Even at 100,000,000 guesses per second, trying every possible combination would take roughly 1.5 trillion years. Since, on average, you'll guess the password after trying only half of the possible combinations, we're down to about 750 billion years.
If you can’t wait that long, here are some strategies to make things go faster:
Those numbers presume a real random password. Unfortunately, we humans (as Randy Munroe points out) don’t do well with random characters. We can’t reliably remember them. So we tend to come up with non-random passwords that we can remember.
It’s well known that most people pick easy to guess passwords, even if they’re trying not to. And it’s also true that the more often you have to change your password the worse your choice will become. If you have to change your password every few weeks or months, you’re more likely to choose an easily guessable one.
Depending on whom you ask, there are about 250,000 words in the English language. Most people base their passwords on one of them. If you’re forced to add a number or punctuation character, you most likely add it to the end of the word. You might use “leet speak” (‘4’ for ‘A’; ‘1’for ‘L’; ‘3’ for ‘E’; etc.) character substitutions too.
Two digits in front or after the word increases the search space by 10000 times. Adding leet substitutions increases it 16 times.
With those additions to standard words we get 2.5 x 10^5 x 16 x 10^4 = 40 x 10^9 possible passwords.
So, using my rented graphics-rendering computer, searching every English word, with one or two digits in front or after it, using any combination of “leet” substitutions would take less than 10 minutes to crack (40 x 10^9 / 10^8 = 400 seconds). And I still haven’t used up my $2.
Let’s make it just a little more complicated: I’ve seen many passwords that vary the letter case, usually adding one or two capital letters. Assuming an average word length of 6 letters, that increases the time by a factor of 36. That ups the cracking time to around four hours.
So for a little less than $10, I can find every password that’s based on an English word, that adds one or two digits, one punctuation character and up to two mixed case letters. In my experience visiting lots of client systems, I’ve covered 90% of the passwords in use. And it’s only lunchtime.
There are many additional strategies to search for more complicated passwords. A simple one is based on the idea that we tend to make up passwords that can be pronounced – that is, ones that use standard English consonant blends and diphthongs. For example, if I want to invent a word, I’m much more likely to come up with “grooz” than “zmloqk.” The first uses standard blends (“gr”) and standard diphthongs (“oo”). “Zmloqk” uses nonstandard combinations. We’re much less likely to choose those, and a password cracker can take advantage of that fact.
Another oft-touted strategy is to use a phrase or combination of words. This is a good idea if you do it right. With my same rented computer, I can exhaust three-word combinations ("blue pickled trucks") of the 50,000 most common words in about two weeks. Four words, however, increases the cracking time to almost 2,000 years.
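All of the arithmetic in this section reduces to keyspace divided by guess rate. A quick sketch in Python reproduces the figures above (the 10^8 guesses-per-second rate is the rented-GPU assumption from earlier):

RATE = 1e8                        # guesses per second on the rented GPU
SECONDS_PER_YEAR = 3600 * 24 * 365

def crack_seconds(keyspace, rate=RATE):
    # Worst-case time to exhaust the whole keyspace
    return keyspace / rate

print(crack_seconds(95 ** 14) / SECONDS_PER_YEAR)     # full 14-char search: ~1.5e12 years
print(crack_seconds(250_000 * 16 * 10_000))           # word + leet + digits: ~400 seconds
print(crack_seconds(250_000 * 16 * 10_000) * 36)      # plus mixed case: ~14,400 s (4 hours)
print(crack_seconds(50_000 ** 3) / 86_400)            # three common words: ~14.5 days
print(crack_seconds(50_000 ** 4) / SECONDS_PER_YEAR)  # four common words: ~2,000 years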
In the end, we’re stuck with this dilemma of human nature: we can create complex passwords that are impossible to guess, but we can’t remember them. On the other hand, most passwords we do come up with can be cracked with a few hours’ time and beer money. Keep that in mind the next time you choose a password. | <urn:uuid:5217a265-2ca4-471f-8216-77e5ca423d8a> | CC-MAIN-2022-40 | https://netcraftsmen.com/how-hard-is-it-to-crack-a-password/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00705.warc.gz | en | 0.921658 | 1,054 | 2.78125 | 3 |
Australians report that 73 percent of scams are initiated by a phone call or SMS message, compared to just 14 percent of scams being received by email, according to data from Australia’s Scamwatch scheme. Scamwatch figures covered frauds that cost Australians a combined total of AUD324mn (USD222mn) for 2021, an 84 percent increase since the previous year. Spread across the population, the figures are equivalent to each Australian being scammed out of AUD13 during 2021. However, this is still thought to only represent a fraction of all scams when compared with other information from financial and government institutions. The Australian Competition and Consumer Commission (ACCC) estimates that only 13 percent of victims tell Scamwatch about the frauds they have suffered.
There was no simple relationship between the communication methods used by scammers and the amount they stole. 50 percent of all reported scams were instigated by voice call and the collective value of these scams was AUD100mn (USD69mn), which was 31 percent of the total reported to Scamwatch. This contrasted with the 8 percent of scams conducted through websites or social networks, which collectively cost victims AUD107mn (USD73mn), a third of the total value of scams. The value of SMS scams were especially low relative to the frequency with which this communications method is used by scammers; 23 percent of scams came via SMS, and their collective value was AUD10mn (USD7mn).
The ACCC’s annual scam report praised Australian telcos and the Australian Communications and Media Authority (ACMA) for the efforts made to reduce scams.
"It is fantastic seeing the results of the important work undertaken by the ACMA, telecommunications industry and other agencies to combat scams. The Reducing Scam Calls Code has led to a reduction in phone scam reports to the ACCC of almost 50% in 2022."
This massive reduction in the number of scam calls received by Australians raises yet more questions about the US strategy for reducing nuisance robocalls. It is common for US authorities to seek support for their proposals by highlighting how many nuisance robocalls are generated by scammers. However, US consumers have seen no meaningful reduction in the number of unwanted robocalls they receive, despite massive spending by US telcos, as mandated by the Federal Communications Commission (FCC). When the FCC and ACMA signed an anti-robocall agreement in 2021 it seemed that the US was lobbying Australia to follow its lead, but now it appears the Australians should explain their methods to the Americans.
The ACCC report on scams during 2021 can be obtained here. | <urn:uuid:7940791c-00d8-4978-8ff8-c16f1073d7bc> | CC-MAIN-2022-40 | https://commsrisk.com/half-of-all-australian-scams-begin-with-voice-calls/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00705.warc.gz | en | 0.972115 | 530 | 2.703125 | 3 |
Every business, no matter its size or sector, has been impacted by the persistent lack of cybersecurity professionals. The cyber workforce gap is estimated at 2.72 million professionals, and (ISC)2 predicts that the global cyber workforce needs to grow by 65% to effectively protect organizations’ networks.
What is causing the cybersecurity skills gap? The reasons range from a lack of formal training and a negative perception towards security roles, to heightened stress levels within cyber teams. But no matter the reason, the cyber skills gap can be detrimental.
The impact of the cybersecurity skills gap
The gap directly weakens the cybersecurity posture of businesses, causing misconfigured systems, delayed patches, rushed deployments, insufficient risk assessments, and a general lack of threat awareness. These vulnerabilities make networks far more likely to be hit by cyberattacks, particularly when those attacks rely on human error. Over the last 12 months, 80% of organizations worldwide have experienced at least one breach that can be directly attributed to an internal lack of cybersecurity skills. And 64% of these breaches resulted in the loss of revenue, recovery costs, or fines.
How are global governments bridging the cybersecurity skills gap?
With the cyber skills gap showing no sign of slowing, governments and education organizations around the world are stepping in. The US Department of Labor announced that it will partner with the White House and the US Department of Commerce to run a 120-day Cybersecurity Apprenticeship Sprint. The program aims to attract, train, and retain a diverse cybersecurity workforce that will be able to better protect the country’s critical national infrastructure and strengthen the economy.
(ISC)2 ‘s 100K in the UK initiative is also providing 100,000 UK residents access to its entry-level cybersecurity education and certification for free. Since the program’s launch, (ISC)2 has opened this initiative up to 1 million additional cyber career pursuers worldwide with the intention of forming new pathways for entrants into the field. In Australia, Microsoft has partnered with AustCyber to design a traineeship program that combines formal training with on-the-job experience. The program supports Australians of all ages and backgrounds who are looking to build a career in the cybersecurity industry.
While governing bodies are focusing on long-term education programs to attract and train the future workforce, it doesn’t provide immediate remediation for the issues being faced today. The techniques used by hackers are constantly evolving, and attacks grow in both number and sophistication every year. Therefore, even as we educate enough professionals to gradually fill the skills gap, their training will need to be constantly revisited and reassessed, at the time and expense of their employers.
What can your business do?
- Identify your own skills gap
The first step in bridging the cyber skills gap is identifying what and where your problems are. By conducting activities like penetration testing, you can uncover skills gaps, whether that means existing employees who need to be upskilled or a lack of professionals in general.
- Improve awareness and education
While the most obvious way to patch these gaps is with recruitment, the scale of the talent shortage today means that hiring new positions can take time. Retaining existing staff should be equally as important to your business. To keep your networks protected from evolving cyber threats, continuous education and training is vital. Cybersecurity should be integrated into your entire organization, even among non-technical employees.
- Invest in the right tools
To help alleviate the burden of cybersecurity from human employees, find dedicated technology that automates or outsources cyber processes. After first establishing your needs and markers of success, find tools that cater to your vulnerabilities specifically, and use third-party expertise to augment your own cyber posture.
However, simply throwing multiple, disparate cybersecurity tools into the mix won't help. The average business now typically has between 20 and 70 cybersecurity solutions, and managing a software stack this complex often results in alert fatigue. Alert fatigue not only affects employee focus and causes an increase in missed cyber threats; it also heightens workplace stress, which leads to higher staff turnover. Avoid alert fatigue by ensuring that the tools you use are easy to manage and versatile, eliminating the need for multiple, overlapping technologies.
Learn how Walt & Company, a California-based PR agency, addressed its cyber skills gap through security awareness training and vulnerability testing.
Our CleanINTERNET service proactively protects organizations from known cyber threats identified by the global threat intelligence community. Combining 3,500 threat feeds, organizations are shielded from 99% of all known cyber threats, creating a Zero Trust network environment. Our elite team of Full Spectrum Analysts go far beyond detection, providing your security team with actionable real-time protection through Advanced Threat Detection. This helps to alleviate both the burden of alert fatigue for your cyber team, and of hiring during a talent shortage. With experience securing some of the most sensitive networks at the DoD, the NSA, the CIA, and the White House, we provide the experience and skills you need to protect your network, your customers, and your reputation. | <urn:uuid:2bc90173-5604-4ba2-9061-a57ca0f8d14e> | CC-MAIN-2022-40 | https://www.centripetal.ai/blog/how-can-businesses-help-close-the-cybersecurity-skills-gap/?doing_wp_cron=1664983847.1469430923461914062500 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00705.warc.gz | en | 0.942281 | 1,035 | 2.921875 | 3 |
IP security cameras and other security devices are by their very nature connected to the internet. That’s what lets users access them remotely to check in on their business, and what lets manufacturers update device software without having to make a house call. But this feature can also be their Achilles’ heel. When not secured properly, any camera or access control device in the so-called Internet of Things (IoT) can be accessed remotely by just about anyone, not just those with whom you want to share access. Similarly, unencrypted communications between a server and client application, or out-of-date firmware can all be exploited by cybercriminals, potentially putting an entire organization’s network at risk.
And that’s a big problem for the physical security industry.
According to industry analyst firm Gartner, by 2020 more than 25 percent of cyberattacks in enterprises will involve IoT devices. And yes, that includes the very devices that are supposed to help keep us safe. More than 60 percent of cyberattacks currently target small to medium-sized businesses, and small businesses are particularly vulnerable to these threats. Sixty percent of small companies are unable to sustain their business beyond six months following a major cyberattack.
Attacks on large businesses are also enormously expensive. According to a 2018 study by IBM and the Ponemon Institute, the average data breach costs companies $3.86 million and large-scale breaches can surpass $350 million.
You simply cannot afford to take any risks when it comes to protecting your physical security system against cyberthreats. The good news is you have help in the fight. Reputable physical security manufacturers and software developers have established a multitude of ways to protect against cyberthreats. And those that are most trusted don’t just stop there. They literally “attack” themselves in an effort to determine if their products really provide the protection they say they do. Another key partner that can help you protect against cyberthreats: trusted systems integrators who are in the field recommending and installing these physical security solutions.
How do cybercriminals gain access to a security system?
A poorly secured camera, unencrypted communications between a server and client application, or out-of-date firmware can all easily be exploited by cybercriminals. Ransomware attacks are particularly costly and have been known to target systems running common but outdated software.
All too often, people are the weakest link when it comes to cybersecurity breaches. Employees not changing default passwords on IoT devices is an easy way for opportunistic cybercriminals to gain access to your system. Brute force attacks consist of criminals guessing passwords, packet sniffing captures network traffic, and man-in-the-middle attacks eavesdrop on communications between two systems, using the gained information to their advantage.
Most physical security solutions are a work in progress with new devices being added to expand the system or to replace outdated or broken products. The process of adding new equipment – perhaps from a different manufacturer with less secure standards – is another opportunity for a vulnerability.
Emboldened cybercriminals may have increased the scope of their attacks, but that doesn’t mean you are defenseless against cyberattacks.
What elements must a cybersecurity solution have?
One of the most important ways to combat cyberthreats is with a plan. Companies must develop training and educate their workforce as to the importance of best practices and the diligence in adhering to company policy. Choosing a systems integrator that recommends only the most trusted manufacturers and emphasizes the importance of cybersecurity is a good start. Together, you’ll need to develop a solution that implements multiple layers of cybersecurity including encryption, authentication, and authorization to your critical business and security systems.
Encryption is the process through which data is encoded so that it remains hidden from or inaccessible to unauthorized users. It helps protect private information, sensitive data, and can enhance the security of communication between client apps and servers. When your data is encrypted, even if an unauthorized person, entity, or cybercriminal gains access to it, they will not be able to read or understand it.
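As a minimal illustration of the idea, here is a sketch using the Fernet recipe from Python's third-party cryptography package (the camera-event payload is a made-up example):

from cryptography.fernet import Fernet

# In a real deployment the key is provisioned securely to the camera
# and the server; it is never hard-coded or sent in the clear.
key = Fernet.generate_key()
cipher = Fernet(key)

event = b'{"camera": "lobby-01", "event": "door_opened"}'
token = cipher.encrypt(event)          # ciphertext is useless without the key
assert cipher.decrypt(token) == event  # only a key holder can read it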
Authentication is the process of first determining if an entity-user, server, or client app is who or what they claim to be, followed by verification of if and how that entity should access a system. Depending on the setup, authentication can occur on either the client-side or server-side, or at both ends. Client-side authentication uses username and password combinations, tokens, and other techniques while server-side authentication uses certificates to identify trusted third parties. Two-factor authentication refers to two forms of authentication used in combination. Authentication is an important tool for keeping your data from getting into the wrong hands. It prevents unauthorized access and ensures that your security personnel are, in fact, the ones accessing your system when they log in. This means hackers can’t pretend to be a security server in order to take control of, manipulate, or copy your valuable and sensitive data.
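Certificate-based server authentication is what standard TLS already provides. A minimal sketch using Python's standard library (example.com stands in for your security server):

import socket
import ssl

# The default context loads the system's trusted CA certificates and
# turns on certificate and hostname verification.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake raises ssl.SSLCertVerificationError if the server
        # cannot prove its identity, so an impostor server is rejected.
        print(tls.version(), tls.getpeercert()["subject"])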
Authorization is the function that enables security system administrators to specify user or operator access rights and privileges. Administrators restrict the scope of activity on a system by giving access rights to groups of individuals for resources, data, or applications and defining what users can do with these resources. When administrators manage what their personnel can see and do, they are ensuring the security of the data transmitted and stored within the security system. This is a key way to increase the security of the system as a whole, as well as enhance the security of the other systems connected to it.
You can never be complacent when it comes to cybersecurity
With almost daily reports of another hack or security breach, many are starting to suffer from cybersecurity awareness fatigue. However, nobody can afford to become complacent in the war against cybercriminals. Once you've invested in a cybersecurity strategy to protect your physical security investment, it's important to remain vigilant.
1. Only choose trusted and reputable security product manufacturers who are committed to protecting your organization from cyberthreats. There are a number of governmental and organizational compliance requirements when it comes to information protection and privacy. Be sure to choose a company that takes these requirements seriously.
2. A company that’s serious about cybersecurity will also conduct its own penetration testing. Penetration tests should be done on a recurring basis to catch any vulnerabilities that could have been missed during product development.
3. When working with a systems integrator to develop or maintain a physical security solution, it’s important to share your concerns about cybersecurity at the onset. A systems integrator must consider cybersecurity a top priority and should only recommend products from trusted manufacturers who are also committed to protecting your system.
4. To mitigate the financial risk of cyberattacks, some companies are also turning to cyber liability insurance. It's a relatively new type of coverage offered by insurance companies to protect businesses against Internet-based threats and data breaches. While not a "get out of jail free" card, cyber liability insurance will give integrators peace of mind and allow companies to access funds to manage a cyberattack response and keep the business running.
Cybersecurity is becoming one of the top business risks for organizations of all sizes. Everyone has a role in protecting your physical security system from cyberattacks. Be sure to choose trusted vendors who use multiple layers of defense such as encryption, authentication, and authorization, as well as penetration testing. Only work with systems integrators who are committed to providing continuous protection against cyberthreats. The success of your business may depend on it. | <urn:uuid:6045393c-5ff2-4a92-96cf-b2d8bdb92936> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/cybersecurity-how-secure-is-your-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00705.warc.gz | en | 0.948845 | 1,551 | 2.546875 | 3 |
In 2017, the UK government announced the ‘Made Smarter Review’, a proposal for Britain to lead the world in a new industrial revolution. The ambitious review, which runs alongside the UK government’s Industrial Strategy that sets out a long term plan to boost productivity in British industry, proposed creating innovation hubs and a national organisation that could turn Britain into a global Industry 4.0 brand.
The opportunity here is immense; the introduction of automation within the manufacturing industry could inject £455 billion into the UK economy, boost growth by 3 percent per annum and create a projected 175,000 skilled jobs.
It is predicted that this industrial digitization will lead to a shift in production and the creation of a digital industrial supply chain where products are developed, manufactured, and monitored in real-time through a single, integrated process. However, to achieve this, Industry 4.0 will require a revolution in industrial communications to create continuously ‘live’ data sharing capabilities between machines. This will enable vital data, from sensor measurements to control signals, to be continuously exchanged in real-time so that all critical industrial machinery, from robots to 3D printers, can be instantly accessed and controlled remotely by humans.
With autonomous machinery making critical components such as railway infrastructure or aircraft engines, engineers will need to be able to remotely monitor and maintain them to instantly detect and fix software faults. Continuous ‘live data’ will also be critical to enable ‘robot workers’ to operate in seamless synchronicity, continually choreographing their actions in response to live location or measurement data from other robots.
There are also plans to radically decentralise manufacturing through an ‘Internet of Thinking’ — based on a ‘DIY’ sensor network that can autonomously analyze information, rather than sending it for remote analysis. This will mean that if an industrial robot notices its own equipment malfunctioning, it will be able to recognise what it needs to do and self-correct any faults. However, such systems will need external human supervision in case of faults that they fail to detect.
Cybersecurity teams will also need to be able to remotely monitor for malicious interference and intervene in manufacturing machinery to avert industrial sabotage by cyberattack. For example, researchers have previously shown how to hack into 3D printer files to make a drone crash by altering the design specifications for the propellers.
The need for data to be digitally omnipresent across the industrial IoT ecosystem the moment it is generated actually mirrors the thinking behind the traditional ‘IT help desk’. It reflects the need for a genuinely low-latency, live connection and two-way exchange of information that enables the sharing of everything from audio and video to images and text among millions of devices.
However, there are currently no open standards for industrial remote communication that allows machines from any vendor to share live data and enable remote human intervention across all manufacturing equipment and components. This is because major manufacturing companies currently have proprietary protocols in place for the remote access to machinery that only work with their own-branded machines. If this system is rolled out across Industry 4.0 efforts, it will create a fragmented industrial IoT landscape where machines aren’t able to communicate with each other and control rooms won’t be able to exercise the same degree of control over all machinery in operation. This undermines the vision of fully digitally-integrated end-to-end supply chains, and it could also make true automation impossible as this would require real-time remote supervision of all machinery.
Crucially, it could also jeopardise cybersecurity protocol by making some machines inaccessible to cybersecurity experts. At a time where vulnerabilities are constantly being discovered in devices and IoT is becoming more of a target for cyberattackers, it is critical that the industry makes it as easy as possible for security professionals to fight back. It is only when they can access any machine that they are able to mitigate the risk.
The only way to achieve the UK government’s vision is to create a safe, open platform for end-to-end data exchange across the entire industrial supply chain. This will enable safe and low-latency, multi-directional communication and crucially, it would mean that supply chain connectivity is fully ‘future-proofed’ and can seamlessly incorporate any new industrial IoT machines that emerge in the future. It would also make technicians, cybersecurity personnel, engineers and factory staff digitally omnipresent across a factory floor of diverse and varied equipment, and enable them to instantly remote into any machine to fix faults.
If we are to fully realize the vision of an interconnected industrial supply chain worldwide, we must change the way we manage data. We must challenge rival industrial manufacturers to convene around an open standard approach to create a truly connected Industry 4.0.
Adam Byrne is CEO of RealVNC. | <urn:uuid:37a9bf2a-1a62-46d2-b7d8-1df5718d8bb3> | CC-MAIN-2022-40 | https://www.mbtmag.com/business-intelligence/blog/13247341/a-lack-of-big-data-open-standards-is-undermining-industry-40-ambitions | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00705.warc.gz | en | 0.933919 | 985 | 2.609375 | 3 |
We are in a race for quantum supremacy. Google, IBM, and researchers across the globe are working on solving the most complex of computations, computations that could only be solved by the most advanced quantum computers.
copyright by venturebeat.com
Quantum computers are very similar to the computers in households today — only much more powerful. So powerful, in fact, that they can solve in milliseconds problems that would take a normal computer thousands of years to solve. For several reasons, quantum computers could prove quite beneficial with more widespread application. They could help with a multitude of complex issues, from things like creating solutions for climate change to organizing massive sets of data about health care.
Solving real-world problems
“As companies such as Microsoft, Google, and IBM continue to develop technologies such as this, dreams of quantum computing are becoming a reality,” writes Daryl Harrington for InfoWorld. “This technological innovation is not about who is the first to prove the value of quantum computing. This is about solving real-world problems for our future generations in hopes of a better world.”
In the example of the Internet of Things (IoT), where billions of devices are constantly connected, we are inundated with data. According to IBM, we create 2.5 quintillion bytes of data every day — and that number is increasing. This data is invaluable, but it is so abundant that we are unable to analyze it. Quantum computers could assist in our understanding of the data we’re generating, but only with the help of artificial intelligence. “Machine learning, the field of AI that allows Alexa and Siri to parse what you say and self-driving cars to safely drive down a city street, could benefit from quantum computer-derived speedups,” writes Mark Anderson for the IEEE.
Quantum Computing and AI
The hybrid study of quantum computers and artificial intelligence, or quantum machine learning, is still in its very early stages. Many of the machine learning algorithms are still theoretical and require large-scale quantum computers to be tested. Still, the marriage between the two has already proven fruitful.

How far has AI come?

Artificial intelligence has integrated itself, in some form, in many areas of our everyday lives. From algorithms that sort our emails to machines that best us in video games, we seem to live in a world populated with smart machines.
Why do we need quantum computers when our best chess players are outmaneuvered by a machine? Why power up automation software that can run autonomous cars? While it may seem that artificial intelligence has already reached its zenith, in reality, we are far away from creating truly intelligent machines capable of solving difficult problems. […] | <urn:uuid:4b6426af-8f33-483f-a88f-f224a6b08b9b> | CC-MAIN-2022-40 | https://swisscognitive.ch/2017/10/26/ai-and-quantum-algorithms-together-can-compute-a-better-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00105.warc.gz | en | 0.958853 | 550 | 3.21875 | 3 |
CSIR working on improving biometric ID techniques for minors
The Council for Scientific and Industrial Research (CSIR), a South African research and development organization, is developing a biometric identification system that can obtain biometric data from children with greater accuracy.
In a new video, CSIR detailed the Department of Science and Technology-funded, two-year project in which researchers will be collecting ear, iris and fingerprint biometric samples.
Using this data, the organization aims to develop a system that can determine or verify the identities of minors throughout their childhood using reference biometric samples captured during their infancy.
CSIR researchers have identified the fingerprint, the iris, and the outer ear shape as suitable biometrics for the project.
The researchers will determine which of the three biometric modalities are best suited for the system.
Though the iris is the most unique and permanent biometric, there are a couple of issues to consider when acquiring iris scans from very young children, particularly infants. The greatest challenge is that infants are often sleeping and therefore have their eyes closed.
“The ear is a new avenue that the CSIR is looking into. The pattern or external shape of the ear is fairly consistent as the child grows,” said Kribashnee Dorasamy, researcher at CSIR. “We will be looking into 2D to 3D matching. The 3D capturing allows us to extract more information about the ear itself.
“For the capturing of the ear itself, we use a 3D scanner and the scanner basically projects patterns on the ear and this captures multiple scans. These scans are stitched together to create a full 3D representation of the ear. To give us more flexibility around the head, we created a mock motorized mount that will move the 3D scanner and capture multiple scans automatically where we can stitch the image to create the full 3D representation of the ear.”
Fingerprints also have their share of challenges when it comes to identifying children. Although fingerprint patterns remain constant over time, the relative pattern itself changes in scale as the child grows.
“Existing technology did not take that into account when it was being designed because it was designed particularly for adults,” said Tshegofatso Thejane, a researcher at CSIR. “So we as the CSIR are looking to develop technologies and algorithms that can at first, find a better way of acquiring fingerprints from children, and also, model the growth of a child such that the scaling issue does not affect the accuracy that we get.”
Last month at ID4Africa, Anil Jain, professor of computer science at Michigan State University, revealed study results that show fingerprints can be used to effectively verify the identity of children enrolled as young as six months of age. | <urn:uuid:bb35577b-941b-4c95-a42b-a4db2bc627a8> | CC-MAIN-2022-40 | https://www.biometricupdate.com/201705/csir-working-on-improving-biometric-id-techniques-for-minors | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00105.warc.gz | en | 0.939093 | 578 | 2.953125 | 3 |
As everyone knows by now, we live in an age that is heavily dependent on the internet. The World Wide Web has become a huge part of our lives over the years, and as anyone would agree, it has made a tremendous positive impact on humanity. For instance, it has made our lives easier, more enjoyable, and more comfortable. Today, you don’t have to spend hours in the library going through a giant pile of books just to get a piece of information. Depending on what you’re looking for, you can get it from your favorite browser and search engine without even taking off your pajamas!
Moreover, you can study various courses from home or the workplace and get the much-needed credentials to propel your career forward. This is not to mention shopping, socializing with friends and strangers, and getting your good dose of entertainment. Most importantly, you can earn money without leaving the comfort of the four walls in your home or even while on the go, thanks to the internet. Well, we can go on and on about the advantages of the web, but that would seem rather redundant to many.
Nonetheless, internet users are, more often than not, prone to becoming victims of cyberattacks, cybercrime, and other unpleasant issues in-between. Cybersecurity has been a major drawback to the existence of the internet for as long as anyone can remember. From cyberbullying to hacking, identity theft, and financial fraud, a lot of dark stuff and shady dealings take place over the web. In this piece, we will highlight some of the biggest threats to online security you should be aware of as an internet user.
1. Hacking

Hacking is a term most of us are familiar with. If you're an avid internet user, you have probably been warned against using public internet connections for personal transactions online, especially when you are not using a VPN. Traditional hackers were mostly concerned with getting your bank account details and transferring your cash into their own. This has, however, advanced into something much bigger. Modern-day hackers are known to target information like usernames and passwords. These are easier to acquire and, to the hackers of today, sometimes more valuable than mere bank account information.
With the advancement in technology and knowledge, this has become a much bigger threat than it used to be. Someone with the know-how can easily hack into any of the devices you use for online access and do as they wish with your private information. The results can be detrimental to your finances and, as observed lately, to your reputation. The good thing, however, is that software programs are available to prevent hacking. Most of them work by assessing your vulnerabilities and recommending a solution. In a recent review of penetration testing vs bug bounty programs, penetration testing comes out as the recommended first choice. If assessments from two different tools or companies come back clean, you can then use a bug bounty for cross-checking purposes.
Another thing about hackers is that they will even get into your devices for the fun of it. In the end, they could tarnish your name/brand using your social media accounts, blog, or website. This is why you need to be extra careful and keen on ensuring you are safe when going around your business online. Other than assessing vulnerabilities and employing anti-hack software, some methods you can ensure safety from hackers include the following:
- Utilizing network firewalls and encryption
- Utilizing Network VPNs
- Employing data access security tools
- Having a procedure to allow and deny access, such as two-step verification (see the sketch after this list)
- Providing user awareness and training
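To make the two-step verification point concrete, here is a minimal sketch of time-based one-time passwords using the third-party pyotp library (the flow and names are illustrative):

import pyotp

# Enrollment: generate a per-user secret once and store it server-side;
# the user loads the same secret into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: after checking the password, also require the current 6-digit code.
code_from_user = totp.now()            # in real life, typed in by the user
print(totp.verify(code_from_user))     # True only for a fresh, valid code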
2. Phishing

We have heard of people whose information was used for illegal activities online. This shouldn't come as a surprise because phishing is one of the biggest threats to online security at the moment. Phishing is usually an attempt to gain sensitive information from an individual by posing as a trustworthy contact. It goes a notch higher into what is termed spear phishing, a highly targeted attempt to obtain sensitive info from an individual.
Spear phishing, in most cases, seems legitimate at first glance. You receive an email from a “bank” or an online service asking you to urgently share your information or make certain payments. The emails and texts often use faultless wording and genuine logos, making it hard to tell the sender apart from the real company. You therefore need to be keen enough to spot them from a mile away. Luckily, the following tips can help.
- Be smart enough to know that companies do not usually ask for sensitive information. If at all they do, it won’t be via email.
- Knowing you are at risk of being the next victim, be suspicious of emails you didn’t expect or ones you don’t trust.
- Make use of anti-malware software; it comes in handy.
- Keep spam filters turned on at all times. However, visit the spam inbox once in a while in case innocent mail is trapped in there.
3. Social Engineering
Social engineering isn’t that different from hacking. The major difference is that the attack is designed in a way that you harm yourself unknowingly. The main idea in social engineering is deception. As an online user, you are tricked into taking some actions and evading others such as bypassing security measures or giving out your personal information. Basically, the user is responsible for letting the hacker into their system without their knowledge! These cases haven’t been existent in the last couple of years but the rate at which these attacks are on the rise makes it one of the biggest threats.
The worst part is that even the best cybersecurity systems will rarely protect you from social engineering attacks. You will be tricked into taking actions that will make the security systems you have in place cease working or at least fail to prevent the attack. You, therefore, are responsible for your own security on this one. Cease using shortcuts to get things done. Do not bypass the security measures recommended by the security systems you have in place at any time. If you do, you won’t have anyone to blame. As you now know, this is a game of deception. Be on the lookout.
4. Ransomware

This is another very serious threat, especially to firms or people whose livelihood depends on their online activities. What happens is that your network is attacked and your computer system becomes inaccessible. This means you can't do anything on your computer. As mentioned, this translates to losses for people who depend on the work they do online. After your system has been locked, you receive an email demanding that you pay a ransom to receive an unlock code that decrypts the files the malware is holding. You may assume this is the only loss, but in reality, it's just the tip of the iceberg!
To begin with, you will have lost productive time trying to sort out the issue. At the same time, there is a high chance that you will lose data. Data loss is the most significant loss for any business. This threat has only come up recently, meaning that it could take time before one comes up with a way of dealing with it for good. However, don’t be hopeful for a solution when it comes to technology. As advancements are made in cybersecurity, the threats are also advancing. Ransomware attacks have been on the rise across the globe, and that is why you need to take some of these measures to be on the safe side:
- Staff awareness- Make sure that you and your staff are well informed on what to do when faced with unsolicited emails especially those requiring prompt responses.
- Malware protection- Have good antivirus and malware protection software installed.
- Software updates- Ensure all your apps, especially the malware protection software, are up to date
- Data backup- Make sure to backup all your data in case you are a victim and end up losing important data.
5. Outdated Hardware
You should know that not all cybersecurity threats come from software. Sometimes the hardware your computer is using can be the reason you are at risk of all that has been mentioned above. While software is updated every day, your hardware might not be able to keep up. If the hardware you bought five years ago is the same hardware your computer is using today, consider how many times security software has been updated over those five years.
Sometimes your (outdated) hardware may not allow updates with the latest patches and security measures. When your device can only accommodate older versions and types of malware protection software, they lack the updates meant to protect you from recent threats, thus creating a major potential vulnerability. It can be an expensive venture to keep updating your hardware, but it is all worth it for the sake of your cybersecurity. Just as you are keen to make sure you are utilizing the latest software, you should make sure the hardware you are using is updated as well.
6. Cloud Vulnerabilities
In the earlier days, data storage was a huge pain in the neck. The available storage disks at that time had limited capacity. They were also larger, meaning that they occupied a lot of physical space too. If you run a company, you would need to have more than a few hard disk drives and other available physical storage devices to ensure that all your data is stored and appropriately backed up. Today, however, we are in the era of cloud storage. It’s a convenient way of storing data and avoiding having to carry around several hard-disks.
Well, the advantage of this technology happens to also be its main weakness. While it provides an easier way out of the tussles of data storage, it also makes it easier for hackers to access any information they want from your cloud account if they manage to penetrate it. Here, some common threats include account hijacking and Denial of Service (DoS). The two are designed to prevent companies and individuals from being able to access their data.
Again, while one might argue that cloud storage is the best option for data management, it is important to note that no technology can eliminate vulnerabilities completely. This, therefore, means that whether you are using cloud technology for the storage of data or the traditional methods, a holistic approach would be the most effective. For instance, you could think of insurance as part of the cyber risk management plan. It will also not go unmentioned that having different cloud storage accounts will go a long way.
7. Security Patch Management
When a software program is introduced to the market, it continues to undergo development. Time after time, it has to receive updates that fix issues discovered while in use. Some of these fixes are meant to increase the security of the software and protect users from cyber risks. This is why you are always advised to be on the lookout for newly released software update patches.
Not staying up to date with these patches makes a company vulnerable to security breaches. Attackers are always on the lookout for software vulnerabilities and then launch cyberattacks from that point. If, for example, your software provider notices a point of weakness in their software, they will work on it and release an updated patch. If you fail to apply it, you are left exposed. Failure to apply these software update patches has put the risk of cyberattacks on an upward curve. This is why, when an update is made to globally used software, say Microsoft Windows, it has to be announced globally to make sure users are not at risk.
Cybersecurity is as important as the internet itself. Hackers, identity thieves, and fraudsters are always on the lookout for the next "meal" to feast on. The above are just a few things you should know about online security.
Recursion and the Alternatives
March 9, 2005 Ted Holt
The code for this article is available for download.
Recursion is usually defined as the ability of a process (a program, a subprocedure, and so forth) to call itself. It is a fun topic to write about and a fun technique to use in programming. Recursion often simplifies the effort required to code a task in a computer language. However, recursion is not without its drawbacks. In this article, I hope to help readers determine when recursion is appropriate and to suggest alternatives to use when recursion is not suitable.
An Example of Recursion
To illustrate the concepts that I present here, I have created a miniature bill-of-materials database. Those who are not familiar with bill of materials (“bill” for short) might think of a bill as the list of ingredients in a recipe, but what you are making might be a can of soup or a car instead of a cake. I like to use bill of materials as an example because it is practical, unlike the factorial and exponentiation examples commonly found in computer science texts.
A bill of materials is recursive by nature, since a component of a manufactured good may consist of components of its own. A green salad consists of green leafy vegetables, toppings, and salad dressing. The salad dressing consists of oil, vinegar, spices, and the like. Some of the toppings, such as croutons, consist of more than one ingredient. One of them is bread, which is also made of various ingredients.
My example structure file, STRUCT, contains two fields–parent and component. In a real bill-of-materials application, the file would have other fields, but I wish to keep this example as simple as possible. Notice that some items are parents in some records and components in others.
PARENT  COMPONENT
A10     C32
A10     F98
A10     K22
A11     C31
A11     D13
A11     D18
A11     F23
A12     N45
A12     G81
A15     A19
A15     C97
A17     C31
A17     F98
A17     K23
A19     A12
A19     B17
G81     C32
G81     C97
Z09     A19
Z09     M35
Z09     C31
Look at the structure for A15. It contains an A19, which contains an A12, which contains a G81, which contains a C32.
Let’s add another file to the illustration. This is the ITEM file, which describes the items. In our example, there are only two fields–item number and item class.
ITEMNBR  ITCLASS
A10      20
A11      20
A12      20
A15      20
A17      20
A19      20
B17      05
C31      00
C32      00
C97      11
D13      05
D18      11
F23      11
F98      15
G81      11
K22      17
K23      15
M35      15
N45      17
Z09      20
For the program example, we are primarily interested in two classes. Class 00 indicates raw materials. Class 20 indicates items that we sell (end items). Logical file ENDITEM includes Class 20 items only.
A          R ITEMREC                   PFILE(ITEM)
A            ITEMNBR
A            ITCLASS
A          K ITEMNBR
A          S ITCLASS                   COMP(EQ '20')
Finally, we’re ready for some program source code. Our challenge for today is to find the raw materials used in each item we sell. For items A10, A11, A17, and Z09, that’s easy, because their raw materials are found at the first level in the structure file. Or are they? It turns out that Z09 uses two raw materials, only one of which is at the first level. The raw materials for items A12, A15, and A19 are also buried in lower levels. How will we find them?
We can use recursion. The following RPG program has a recursive subprocedure, Chase, which follows a bill of materials down level to level until it finds a class 00 item.
The program reads the logical file of Class 20 items and calls the Chase routine for each item. Chase accepts two parameters–the end item and the parent item whose bill is to be exploded. When Chase finds a raw material, it writes a record of the end item and raw material to file ITEMSEARCH.
H dftactgrp(*no) actgrp(*new) option(*srcstmt: *nodebugio)

FEndItem   if   e           k disk    prefix(EI_) rename(ItemRec:EIRec)
FItem      if   e           k disk    prefix(I_)
FStruct    if   e           k disk    prefix(S_)
FItemSearcho    e             disk    prefix(SRCH_)

D Chase           pr
D  EndItemNbr                         like(I_ItemNbr) value
D  Parent                             like(I_ItemNbr) value

C                   dow       '1'
C                   read      EndItem
C                   if        %eof()
C                   leave
C                   endif
C                   callp     Chase (EI_ItemNbr: EI_ItemNbr)
C                   enddo
C                   eval      *inlr = *on

P Chase           b
D Chase           pi
D  EndItemNbr                         like(I_ItemNbr) value
D  Parent                             like(I_ItemNbr) value

D SaveComponent   s                   like(I_ItemNbr)

C     SaveKey       klist
C                   kfld                    Parent
C                   kfld                    SaveComponent

C     Parent        setll     Struct
C                   dow       '1'
C     Parent        reade     Struct
C                   if        %eof()
C                   return
C                   endif
C     S_Component   chain     Item
C                   if        not %found()
C                   return
C                   endif
C                   if        I_ItClass = '00'
C                   eval      SRCH_Parent = EndItemNbr
C                   eval      SRCH_Component = I_ItemNbr
C                   write     SearchRec
C                   return
C                   endif
C                   eval      SaveComponent = S_Component
C                   callp     Chase (EndItemNbr: S_Component)
C     SaveKey       setgt     Struct
C                   enddo
P                 e
In order to understand the Chase routine, let’s follow what happens when the main loop reads item A19 from ENDITEM. The system activates the first instance of Chase, which I’ll call Chase 1, passing it the arguments A19, A19. Chase 1 begins reading the components of A19. The first one it finds is A12. Chase 1 activates another copy of itself–we’ll call it Chase 2–passing it the values A19, A12. Chase 2 reads the first component of A12–G81–and activates Chase 3 with the arguments A19, G81. Chase 3 reads the first component of G81, which is C32. Since C32 is a 00-class item, Chase 3 writes to ItemSearch and returns to Chase 2. Chase 2 reads the next component of A12, which is N45 and activates Chase 3 again. Since N45 has no components, control returns to Chase 2. Since A12 has no more components, control returns to Chase 1. Chase 1 reads the next component of A19, which is B17, and activates Chase 2. B17 has no components, so control returns to Chase 1. A19 has no more components, so the Chase routine is finished. The following list shows the activations of the Chase routine:
Level    End item  Parent
Chase 1  A19       A19
Chase 2  A19       A12
Chase 3  A19       G81
Chase 3  A19       N45
Chase 2  A19       B17
Running this RPG program against the entire ENDITEM file produces the following output:
PARENT  COMPONENT
A10     C32
A11     C31
A12     C32
A15     C32
A17     C31
A19     C32
Z09     C32
Z09     C31
Efficient Database Retrieval
Recursive routines like Chase work well with small amounts of data, but give them a lot of work to do and they quickly run out of steam. The immense amount of I/O caused by repeatedly resetting the file pointer via the SETLL and SETGT operations slows the routine down considerably. I learned this lesson the hard way several years ago. Fortunately, I found a better alternative. It turns out that the database can find the components much more quickly. Here is the basic idea.
Begin by extracting the 00-class components at the first level. Then treat the non-00-class components of the first level as parents and look for their 00-class components. After that, take the non-00-class components at this second level, treat them as parents, and extract their 00-level components. Continue this process from level to level until there are no more non-00-class components to consider.
In this example, the first iteration would find the 00-class components of items A10, A11, A17, and Z09. The second iteration would find item A12, whose raw material is two levels down from the end item. The third iteration would find item A19, whose raw material is three levels from the end item. The last iteration finds raw material for items A15 and Z09, which have raw materials at the fourth level.
Here is the complete output:
PARENT  COMPONENT
A11     C31
A17     C31
Z09     C31
A10     C32
A12     C32
A19     C32
A15     C32
Z09     C32
If you compare this output to the output of the recursive routine, you will find the same records, but not in the same order. But that’s OK, because we’ve got plenty of ways to sort the result record set.
My version of this algorithm uses two work files. I use the first work file to hold the items that need to be exploded to the next structure level. In the second one I place the items that will have to be exploded at the next lower level. After each iteration, I move the contents of the second work file to the first work file. When the second file turns up empty, I know that there are no more levels of structure to look through.
Because it is so long, I do not include the source code of my routine here. To see how I implement this routine, take a look at ITEM02R in the downloadable code. I take some short cuts for performance reasons, but the basic idea is the same.
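Although ITEM02R is too long to reproduce, the level-by-level algorithm itself is short. Here is a sketch of the idea in Python, with in-memory dictionaries standing in for the STRUCT and ITEM files and a list standing in for the two work files (all names are hypothetical):

def explode_raw_materials(struct, itclass, end_items):
    # struct maps a parent to its components; itclass maps an item to its class
    results = set()                         # (end item, raw material) pairs
    frontier = [(e, e) for e in end_items]  # work file: items still to explode
    while frontier:                         # one pass per structure level
        next_level = []
        for end_item, parent in frontier:
            for comp in struct.get(parent, []):
                if itclass[comp] == "00":            # raw material found
                    results.add((end_item, comp))
                else:                                # explode one level down
                    next_level.append((end_item, comp))
        frontier = next_level               # move work file 2 to work file 1
    return results

struct = {"A19": ["A12", "B17"], "A12": ["N45", "G81"], "G81": ["C32", "C97"]}
itclass = {"A12": "20", "B17": "05", "N45": "17", "G81": "11",
           "C32": "00", "C97": "11"}
print(explode_raw_materials(struct, itclass, ["A19"]))   # {('A19', 'C32')}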
The other alternative to recursion is iteration. I’ve heard several times that someone has proven that anything that can be done with recursion can also be done with iteration, but no one has ever told me who proved it. Let’s look at an example.
Suppose you need to multiply all the elements of an array, similar to the way the XFOOT op code adds the elements of an array. You can use recursion to carry out this task because the product of all the elements of an array is the first array element times the product of the remaining elements of the array. See the recursive ArrProduct subprocedure in the following example:
H dftactgrp(*no) actgrp(*new)

D ArrProduct      pr             8f
D  Array                         5i 0 dim(10)
D  BgnElem                       5u 0 value
D  EndElem                       5u 0 value

D A               s              5i 0 dim(10)
D B               s              5i 0 dim(10)
D C               s              5i 0 dim(10)
D Product         s              8f

 /free
     A(1) = 1; A(2) = 2; A(3) = 3; A(4) = 4; A(5) = 5;
     Product = ArrProduct(A:1:5);

     B = 2;
     Product = ArrProduct(B:1:%elem(B));

     C = 2;
     C(8) = *zero;
     Product = ArrProduct(C:1:%elem(C));

     *inlr = *on;
 /end-free

P ArrProduct      b
D ArrProduct      pi             8f
D  Array                         5i 0 dim(10)
D  BgnElem                       5u 0 value
D  EndElem                       5u 0 value
 /free
     if Array(BgnElem) = *zero;
        return *zero;
     elseif BgnElem = EndElem;
        return Array(BgnElem);
     else;
        return Array(BgnElem) * ArrProduct(Array: BgnElem+1: EndElem);
     endif;
 /end-free
P                 e
But why not use a loop instead?
P ArrProduct      b
D ArrProduct      pi             8f
D  Array                         5i 0 dim(10)
D  BgnElem                       5u 0 value
D  EndElem                       5u 0 value
D
D Ndx             s              5u 0
D Product         s              8f
 /free
     Product = 1;
     for Ndx = BgnElem to EndElem;
        Product *= Array(Ndx);
     endfor;
     return Product;
 /end-free
P                 e
Here’s another possibility for a looping solution:
P ArrProduct      b
D ArrProduct      pi             8f
D  Array                         5i 0 dim(10)
D  BgnElem                       5u 0 value
D  EndElem                       5u 0 value
D
D Ndx             s              5u 0
D Product         s              8f
 /free
     Product = 1;
     Ndx = *zero;
     dow Product <> *zero and Ndx < EndElem;
        Ndx += 1;
        Product *= Array(Ndx);
     enddo;
     return Product;
 /end-free
P                 e
A loop will perform faster and use fewer resources than a recursive routine. Recursion is unnecessary and an inferior technique for solving this problem.
So When Do I Recur?
Consider using recursion when iteration would require arrays to store values for each occurrence of the loop. The Chase routine required repositioning of the STRUCT file upon entry to Chase and when calling the next level of Chase. Repositioning the file pointer correctly each time was possible because each occurrence of Chase had its own Parent parameter and SaveComponent local variable. Using a loop instead would have required two arrays to store these values and a pointer to keep up with the active element of each array.
Consider using recursion if very little I/O is involved. The example RPG program reads the entire ENDITEM file, which might have thousands of records. If the program were rewritten to accept an end item number as an input parameter and return a component as an output parameter, and if the program were called only from an interactive inquiry program, very little I/O would take place and the routine would be better than the file-joining database technique.
I like recursion. I have liked it since I was introduced to it some twenty-plus years ago in computer science class. Recursion is a good technique, but it has its place. I don’t use recursion if I have a better alternative.
“Recursive Calls Using Subprocedures,” by Kevin Vandever
Ted Holt is managing technical editor of Four Hundred Guru. His favorite recursion exercise, which he wrote in Pascal in 1983, was a program that found all possible ways to place eight queens on a chessboard in such a way that no two queens were attacking one another.
There are different takes on what veracity refers to. The overall consensus is that data veracity reflects the truthfulness of a data set and your level of confidence or trust in it. I’ll take this a step further and say that data veracity is your level of confidence/trust in the data based on its provenance as well as the data processing method.
Think about this: when you get a box of chocolates which you haven’t tried before, how do you estimate how good it is? The first step is to look at where it was made, and by what shop or brand. You can mainly assess its quality by its provenance. As a second step, you probably also want to ensure that after you open the box, you won’t somehow taint the chocolates before you taste them.
Data veracity helps us better understand the risks associated with analysis and business decisions based on a particular big data set.
Looking at a data example, imagine you want to enrich your sales prospect information with employment data — where those customers work and what their job titles are. Not only can this provide you with additional contact data, but it can also help you create different market segments and do a better job of serving them.
LinkedIn collects lots of employment data, but unfortunately you can’t purchase it from them. So what can you do? You might go to a third-party provider who claims to scrape LinkedIn data from search engine results (a legally grey area in my opinion, at least at the time this article is written; I’m not a legal expert so let’s just treat this as a theoretical example). You might consider purchasing this LinkedIn employment data, but how do you gauge its veracity?
Well, here are the 9 questions to ask the data provider to help you better assess the data veracity:
- Who created the original data source?
- Who contributed to the data source?
- When was the data collected?
- Was the original data source enriched in any way?
- What methodology did they follow in collecting the data?
- What algorithm did they use to match records and what are the matching confidence levels?
- Were only certain industries or locations included in the data source?
- Has the information been edited or modified in any way?
- Did the creators summarize the information?
After answering all these questions you will also need to understand how, where, and when you will integrate this data with your own. What are the definitions, extract, transform, and load (ETL) procedures, and business rules which you will follow?
Answers to these questions are necessary to determine the veracity of this big data source. To expand on the employment data example, what if your customer base only included lawyers? Well, then you wouldn’t choose LinkedIn as your data source but rather go to the American and/or Canadian Bar Association. Why? Because the bar associations have a higher data veracity for this type of data than one that is self-reported.
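To make the bookkeeping concrete, one lightweight option is to carry the answers to these questions along as provenance metadata and enforce a minimum-veracity business rule during the ETL load step. The field names and threshold below are purely illustrative, not a prescription.

from datetime import date

provenance = {
    "creator": "Example Data Vendor",       # who created the original source?
    "collected_on": date(2019, 6, 1),       # when was the data collected?
    "method": "search-engine scraping",     # collection methodology
    "match_confidence": 0.82,               # record-matching confidence level
    "coverage": ["US", "Canada"],           # locations included in the source
}

def load_record(record, provenance, min_confidence=0.75):
    # Business rule: reject enriched records whose veracity is too low.
    if provenance["match_confidence"] < min_confidence:
        return None
    return {**record, "_provenance": provenance}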
Veracity is impacted by human bias and error, lack of data governance and data validation, software bugs which can lead to duplication and variability, volatility, and lack of security. We all wish for these to be addressed as we consider them important, at least in theory, but the reality is that not all data vendors monitor these variables enough to fully address them and follow the trifecta of data quality management. That’s probably why IBM Big Data & Analytics Hub estimates poor data costs the US economy $3.1 trillion every year.
Veracity is rarely achieved in big data due to its high volume, velocity, variety, variability, and overall complexity. Instead, we can take solace in knowing that an understanding of data’s veracity helps us better gauge the risks associated with analysis and business decisions based on a particular big data set. So, find out as much as possible about your data sources, big and small, to better gauge their veracity.
A similar version of this article was originally published for ExagoBI.
As individuals and businesses are increasingly dependent on technology for moving and storing data, the role of the CISO is becoming more vital to companies of all types on a global scale. The CISO, or Chief Information Security Officer, is tasked with developing and implementing an organization’s cybersecurity program. He/she works to predict, prevent, and react to cyber threats.
In order to do this, CISOs train multiple departments on digital security. This means that a CISO needs to have more than just technical knowledge. Some information technology workers can get by with minimal communicative abilities. This is not the case for the CISO.
Aside from being an executive-level role, the CISO’s position is unique in that it requires interpersonal skills in addition to technical ability. They have to be able to convey complex technological concepts and jargon in a way that employees outside of the IT department can understand. Additionally, teamwork and leadership skills are necessary as CISOs have to unite multiple aspects of the business into one secure, digital front.
If the business’s online databases are analogous to a battlefield, hackers could be considered the spies. They are usually undetectable and slip in right under your nose. Hacking through multiple layers of secured data can take time, much like a spy going undercover to build a reputation and gather intel. It is the CISO’s job to find and inhibit these spies.
Data breaches are like spies getting inside the city walls, castle, etc. Once broken in, confidential intel is readily available. The not-so-secure data can be used to hurt the business and its employees. Furthermore, the business might have to pay a settlement to remedy any customers affected by the attack.
About five years ago, Target famously suffered a cyber attack resulting in the largest settlement ever for a data breach. About 41 million customer payment card accounts were affected and over 60 million Target shoppers’ contact information was compromised. Target’s multistate settlement landed at a whopping $18.5 million. This is an unfortunate, yet pivotal event showcasing the importance of CISOs.
Because cyberattacks are augmented by technological advancement, CISOs are working harder than ever to diversify their skill sets and maintain the integrity of their company’s online data. Sophisticated technology is easily available to many people, thus leading to increases in cyber attacks. Organizations now understand the need to put cybersecurity at the forefront of their business plans, and they’re relying on CISOs to do it.
To find out what it takes to become a leading CISO, Varonis analyzed the LinkedIn profiles of CISOs at Fortune 100 companies, comparing their endorsements and educational backgrounds. Jump to the infographic below for details.
They found that New York, Texas, and California are the three top states where Fortune 100 CISOs work. Unsurprisingly, more than half of CISOs received a Bachelor of Science degree, compared to other undergraduate degrees. Of those that received graduate degrees, Master of Business Administration and Master of Science are the most popular.
The most common fields of study are management information systems (MIS), engineering, business, computer sciences, and economics. The most common endorsements were in, naturally, information security and security. Leadership and information security management are also commonly endorsed.
Looking to become the next leading CISO? Check out the infographic below featuring CISOs at top companies such as Walmart and Apple. Analyze their types of skills to see if any of them line up with your abilities or interests. Furthermore, Varonis provided useful advice from Deborah Wheeler, CISO of Delta Airlines, and Stephen Schmidt, CISO of Amazon.
Since landing on Mars Aug. 6, 2012, NASA’s Curiosity Mars rover has been exploring the planet’s surface and conducting science experiments on soil and rocks. One of the mission’s key milestones was reached this week when the rover’s specialized on-board laser was fired for the 100,000th time as it continues to explore the planet’s history.
The laser, called ChemCam, is shot each time at a rock, creating a little ball of plasma or debris, Roger Wiens, the principal investigator of the ChemCam team, told eWEEK. “It abrades some material off of the rock’s surface, like a little ball of flame,” said Wiens, who is a planetary scientist at the Los Alamos National Laboratory, where the laser was developed.
After each shot, special instruments on the rover capture the spectral signatures of the laser firing, which are used to identify the elements that make up the soil on Mars, he said. Photographs are also taken to document the laser firing and to build a history of the experiments.
The ChemCam laser marks the first time that scientists on Earth have been able to do this kind of research on Mars, said Wiens. An earlier lander, Phoenix, had a laser, but it was aimed into the planet’s atmosphere and couldn’t collect information about the rocks on Mars.
Other Mars lander missions used a robotic arm to scoop up soil for analysis, but that limited data collection to materials that could be grabbed by the arm, said Wiens. “So it took more effort than just point and shoot,” like researchers are able to do with the laser. “This mission provides much more data collection.”
Another important advantage to using the laser is that it is helping scientists get below the dusty surface of Mars to see what’s really there in the rocks, he said. “Remember, that Mars is a dusty place and these rocks tend to be covered by dust. So the passive tests [using previous arm-mounted scoops] show what’s on the surface, while the laser shots get below the surface” and reveal more information about the composition of the materials.
“It gives us a window that we wouldn’t have otherwise by using this laser,” said Wiens. “I think it’s fair to say that we are piecing the data together like pieces in a puzzle.”
So far, the Curiosity, which celebrated its one-year anniversary on Mars in August, has delivered some incredible finds to scientists back on Earth, including the discovery of solid evidence that ancient Mars could have supported life, according to NASA.
A key discovery was made at the rover’s landing site, called Gale Crater, where a long-since-dried-up lake once stood, said Wiens. “It’s the first time that we have seen lake sediments on Mars. It’s still really mysterious. The water was here a very long time ago, but there are some pieces that are telling us a lot.”
For example, scientists have found real clay minerals in the materials in the area that were probably sediments in the bottom of a lake, like those found on Earth, he said. “The surprising thing is this appears to be a freshwater lake. Previous discoveries of water on Mars appeared to be very briny. So here’s what we think was a fairly big lake that had fresh water.”
Mars Rover Curiosity’s Rock-Blasting Laser Reaches Milestone
The ChemCam laser is operated half of the time by scientists at Los Alamos and half the time by scientists at the French national space agency, Centre National d’Etudes Spatiales (CNES) and at the French research agency, Centre National de la Recherche Scientifique (CNRS).
What’s been most surprising so far, said Wiens, is that on the laser’s very first firing on Aug. 19, 2012, it found evidence of hydration, or water, in the dust on Mars. “It’s still a mystery to us how this is present.” The water is in small quantities, however, making up only 1.5 to 3 percent of the content of the soil samples collected.
That discovery, though, creates some interesting implications for future space travelers to Mars, he said. “If you would have astronauts there, they could potentially collect large amounts of soil, heat it up and get water,” said Wiens.
Asked if the scheduled 30-month mission is so far going like he expected, Wiens said it’s even more special than he dreamed. “It’s one thing to be planning for it, and then it’s another thing to have all the data and to have all the secrets that Mars has been revealing from all these spectral images that we have been collecting.”
NASA’s Jet Propulsion Laboratory, which designed and built the project’s Curiosity rover, manages the Mars Science Laboratory Project for NASA’s Science Mission Directorate in Washington, D.C. Since landing, Curiosity has so far sent more than 190 gigabits of data back to Earth, and has sent back more than 36,700 full images and 35,000 thumbnail images, according to NASA.
In July, the Curiosity rover began a long-awaited, 5-mile journey across the terrain of the red planet to begin exploring a rocky area known as Mount Sharp 11 months after the rover arrived on the planet’s surface following a 354-million-mile, eight-month voyage from Earth.
The Mount Sharp destination, which is in the middle of Gale Crater, is important to scientists working on the mission because it exposes many layers where scientists anticipate finding evidence about how the ancient Martian environment changed and evolved, according to the JPL. The rover is expected to take up to a year to reach Mount Sharp, due to the care that must be used in crossing the unknown terrain.
At the end of June, it conducted a close-up investigation of a target sedimentary outcrop of rock called Shaler, according to NASA, then began heading away from Shaler on July 4. The vehicle travels very slowly, initially traveling 59 feet away from Shaler that day, then adding another 131-foot excursion away from the site on July 7.
In June, NASA released a spectacular 1.3 billion-pixel image of the surface of Mars, which was stitched together from almost 900 images taken by special cameras mounted on the Curiosity rover. The image can be explored using panning and scanning tools on NASA’s Website. The images used to create the massive photograph include some 850 frames taken using the telephoto camera of Curiosity’s Mast Camera instrument, supplemented with 21 frames from the Mastcam’s wider-angle camera and 25 black-and-white frames from the on-board Navigation Camera, according to NASA.
Password change and password reset are terms that are often used interchangeably. However, they are not the same. A user will perform a password change when they remember their existing password, and a password reset when they have forgotten it.
The two use cases are inherently tied to an organization’s domain password policy, which traditionally encompasses password complexity, length, and change frequency requirements. With a sound policy in place, users will need to follow the composition requirements when changing or resetting their passwords.
But what makes a password policy secure? There is no shortage of regulatory and standards bodies that have weighed in on this very topic. This article looks at what can be achieved using the native Active Directory (AD) Group Policy settings, including key capabilities that increase password security while balancing the user experience.
Active Directory password expiration
Password Expiration can be configured using the Maximum Password Age setting within the Default Domain Policy in the Group Policy Management Console. The setting is applied to all domain computers and users.
Maximum password age dictates the number of days a password can be used before the user is forced to change it. The default value is 42 days, but IT admins can adjust it, or set passwords to never expire, by setting the number of days to 0.
Windows password policy settings
Other Windows password policy settings include:
- Enforce password history determines the number of old/previously used passwords stored in AD to prevent users from using a previously used password. The default and maximum value is set to the previous 24 passwords.
- Minimum password age dictates how often a user can change their password following a password change. This prevents a user from reverting to a previously used password, circumventing the password history rule; by changing it 24 times in a row for example. The default value is set to 1 day.
- Minimum password length enforces the character length of the password.
- Password must meet complexity requirements ensures that the password does not contain the user’s account name or display/full name, and includes at least three of the five character types: upper-case letters, lower-case letters, numbers, special characters and Unicode (see the sketch after this list).
- Store passwords using reversible encryption allows passwords to be stored in AD almost in plain-text, which is highly insecure, but sometimes needed to grant password access to certain applications.
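As a rough illustration, the complexity rule can be approximated in a few lines of Python. This is a sketch of the check, not Microsoft’s exact implementation, and it leaves out the Unicode category:

import re

def meets_complexity(password, account_name, display_name):
    lowered = password.lower()
    if account_name.lower() in lowered or display_name.lower() in lowered:
        return False                    # must not contain the user's names
    classes = (r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]")
    hits = sum(1 for pattern in classes if re.search(pattern, password))
    return hits >= 3                    # three of the five character types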
These settings are meant to increase password security but can have a negative effect on end users. Complex passwords result in forgotten passwords; as such, anytime password complexity is introduced there will be an uptick in helpdesk password reset calls. According to the Gartner research firm, these can account for 30-40% of support costs.
To deflect password reset calls from the helpdesk, it is recommended that organizations implement passphrases, which are outside the scope of Active Directory’s native settings. Passphrases are long passwords made up of unrelated words, which are harder to crack but easier for users to remember. In fact, the National Institute of Standards and Technology (NIST) recommends them, advising that systems accept passwords of up to at least 64 characters; NIST does, however, advise eliminating password expiration, as forced changes can lead to users making poor password construction decisions.
Eliminating password expiry can leave an organization exposed indefinitely if an attacker has gotten hold of a user’s account. A better approach is to utilize length-based password aging. This, combined with passphrases, can ensure that users are incentivized to create longer, stronger passwords by rewarding them with less frequent changes. Forced password changes are always going to cause users some disruption, but the aforementioned features can alleviate some of the frustration. Another important consideration is to ensure that password rules are displayed dynamically to users as they are changing their passwords. If there is too much guesswork involved, users will revert to calling the helpdesk.
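Length-based aging itself is simple policy logic. A minimal sketch, with purely illustrative thresholds rather than any vendor’s defaults:

def max_password_age_days(password_length):
    # Reward longer passphrases with less frequent forced changes.
    if password_length >= 20:
        return 365          # a long passphrase earns a yearly change
    if password_length >= 15:
        return 180
    if password_length >= 12:
        return 90
    return 42               # short passwords keep the default AD maximum age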
Active Directory password reset
Even with user-oriented features as noted in the section above, password reset calls to the helpdesk will still occur. Active Directory password resets are most commonly performed using Active Directory Users and Computers. With just a few clicks a user’s password can be reset. This can also be accomplished by other methods; the Active Directory Administrative Center user interface and PowerShell are two examples.
A current gap within organizations is user identity verification – most rely on insecure methods, such as employee ID or security questions. In fact, password reset user verification is not mentioned in recommendations set forth by industry, or regulatory bodies, although it is a highly exploited attack vector. This is where proactive steps are necessary.
Given that password reset calls to the service desk make up a significant percentage of the support call load, organizations must look to a self-service password reset solution in order to reduce this cost and maximize security. The solution should support secure user verification methods that go beyond security questions; although widely utilized, answers to security questions are cumbersome for users to recall. Security questions are also recognized as an insecure form of authentication due to social engineering. More secure forms of authentication should be considered, especially ones that are already in use, to eliminate the need for users to enroll in the system while extending the ROI of existing assets.
Active Directory password reset and change best practices
Ultimately, there isn’t a one-size fits all approach. IT departments need to balance the user experience while maximizing security. When setting a secure password policy, consider following these password change/password reset best practices:
- Turn on password expiration with length-based password aging to promote secure password construction behavior while reducing risk.
- Secure all password reset scenarios at the helpdesk and self-service with more secure forms of authentication.
- Display password rules dynamically to users changing or resetting their passwords. Frustrated users will contact the helpdesk.
You can start balancing the scale today with Specops uReset, a self-service password reset solution facilitating Active Directory password resets and changes. Through a graphic password policy rule display, the solution reduces errors and guess-work for end-users. Its robust multi-factor authentication engine includes various forms of user-verification that can extend authentication security to the helpdesk.
During a recent assessment I noticed that I was getting back (or, not getting back, as it were) a filtered response to hping SYN scans. That’s normal enough for sites that drop incoming scan traffic, but the weird part was that if I used a standard connect scan, i.e. one that completes the three-way handshake, I would get back a ton of open ports on the same host.
So if I did a “regular” scan, I’d send a SYN, get back a SYN-ACK, and then respond with an ACK. Fair enough, but if I sent just the SYN from hping, the host would not respond at all. Well, after a couple of minutes of head-scratching, logic revealed the path to the truth:
The two SYN packets are different.
If these two SYN packets weren’t different, then the target host would have no way of knowing that the SYN-scan’s SYN packet wasn’t legitimate, and as such would respond with a SYN-ACK as with the standard connect scan. In short, the only way for the host (or a filtering device in between) to react to one SYN differently than another is for the packet itself to be different.
Anatomy of a SYN
As it turns out, there’s a very tangible reason for the two packets being different. The SYN packets created by most port scanners out there are created via the raw socket interface, and they tend to have some fairly standard characteristics that stand out to both humans and computers (as we’ll see below).
Legitimate SYN packets, however, are created by the OS’s connect() syscall. This is what happens when you want to use a regular application on your system, like a web browser or a mail client. This is the “regular” way of building a SYN packet, and as we’ll see in a moment, the packets made in this way are quite different than those made by scanner applications.
The raw socket technique can be thought of as “building” packets; it’s a method for modifying actual packet headers before they leave the machine. Common applications of this include spoofing the source address, changing checksums, and lighting up odd TCP flag combinations. The connect() method, however, is a packaged deal. When you call connect(), you get pretty much the same kind of packet every time. You don’t get to mangle it, morph it, or corrupt it. What you get is what you get.
Given these differences, a number of products over the years have been coded to look at incoming SYN packets for the attributes associated with security scanners. They know that pretty much the only applications making these kinds of packets are illegitimate, so when they see them they immediately drop them.
The Differences Illustrated
Let’s take a look at the actual unique qualities of raw socket/scanner SYN packets and those created by connect(). Below are three SYN packets from three different applications:
An Nmap SYN (-sS) Scan
14:53:09.185860 IP (tos 0x0, ttl 45, id 61607, offset 0, flags [none], proto: TCP (6), length: 44) source.60058 > dest.22: S, cksum 0x885a (correct), 877120720:877120720(0) win 2048
    0x0000:  4500 002c f0a7 0000 2d06 8121 8115 0c09  E..,....-..!....
    0x0010:  8115 0dd0 ea9a 0016 3447 ccd0 0000 0000  ........4G......
    0x0020:  6002 0800 885a 0000 0204 05b4            `....Z......
An Nmap Connect (-sT) Scan
14:51:42.706802 IP (tos 0x0, ttl 64, id 61772, offset 0, flags [DF], proto: TCP (6), length: 60) source.36982 > dest.22: S, cksum 0x6e57 (correct), 113706876:113706876(0) win 5264
    0x0000:  4500 003c f14c 4000 4006 2d6c 8115 0c09  E..<.L@.@.-l....
    0x0010:  8115 0dd0 9076 0016 06c7 077c 0000 0000  .....v.....|....
    0x0020:  a002 1490 6e57 0000 0204 0524 0402 080a  ....nW.....$....
    0x0030:  14aa f630 0000 0000 0103 0302            ...0........
A Legitimate SYN From Firefox
15:31:34.079416 IP (tos 0x0, ttl 64, id 20244, offset 0, flags [DF], proto: TCP (6), length: 60) source.35970 > dest.80: S, cksum 0x0ac1 (correct), 2647022145:2647022145(0) win 5840
    0x0000:  4500 003c 4f14 4000 4006 7417 0afb 0257  E..<O.@.@.t....W
    0x0010:  4815 222a 8c82 0050 9dc6 5a41 0000 0000  H."*...P..ZA....
    0x0020:  a002 16d0 0ac1 0000 0204 05b4 0402 080a  ................
    0x0030:  14b4 1555 0000 0000 0103 0302            ...U........
Notice that the latter two SYN packets are very similar. They are the two created by the OS’s connect() syscall, while the first was created via a raw socket. Here are a few of the differences:
- The size of the connect() packets is 60 bytes, and only 44 for the raw socket packet.
- The TTL values are 64 for the connect() packets and 45 for the raw socket packet.
- The “don’t fragment” bit is set in the “legitimate”, connect() packets, but not in the other.
So the upshot is that you may actually get better scan results in many environments by doing “regular”, connect scans instead of SYN scans because of how the SYNs for each are constructed.
The next thing on my agenda is to use nemesis to make a few custom SYN packets. I can build some that look just like the legitimate SYN packets — matching the size, TTL, and flag contents exactly. Then I can simply toggle each of them in sequence to figure out which value (or values) is considered illegitimate.
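For anyone who wants to try the same experiment with scapy instead, here is a sketch (not the nemesis commands mentioned above); the target address is a placeholder, and the option list mirrors the 60-byte connect() SYNs shown earlier:

from scapy.all import IP, TCP, send

# connect()-style SYN: 60 bytes, TTL 64, DF set, full TCP option block.
legit_syn = IP(dst="192.0.2.10", ttl=64, flags="DF") / TCP(
    dport=22, flags="S", window=5840,
    options=[("MSS", 1460), ("SAckOK", b""),
             ("Timestamp", (123456, 0)), ("NOP", None), ("WScale", 2)])

# Scanner-style SYN: 44 bytes, low TTL, no DF, only an MSS option.
scanner_syn = IP(dst="192.0.2.10", ttl=45) / TCP(
    dport=22, flags="S", window=2048, options=[("MSS", 1460)])

send([legit_syn, scanner_syn])   # toggle one field at a time between runs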
I’ll do that soon and post the results for anyone interested.
In a series of three articles we will zoom in on three potential applications of quantum computing in light of the COVID-19 pandemic. This third article will focus on the coronavirus evolution as a machine learning application. We will also draw an overall conclusion about the potential of quantum computing in complex situations like COVID-19.
It is better to prevent than to cure. Preventing a pandemic or epidemic consists of identifying the whereabouts and risks of zoonoses and predicting the spread of an existing one. In doing so, organizations such as the One Health Institute at the School of Veterinary Medicine at the University of California, Davis are trying to gain a better understanding of the correlation between spillovers and animal-human interactions. Spillovers happen when a high pathogen prevalence of an animal (simply said, fluids containing infectious cells) encounters a potential host (human). Such a spillover can lead to human-to-human transferable diseases (such as COVID-19), but it doesn’t have to, as is the case with rabies. The One Health Institute’s PREDICT Project has been able to identify 1,200 viruses belonging to families that are known to have the potential to infect people and cause epidemics, along with 40 risk factors for those viruses to spill over and spread between humans. Identifying these viruses and risk factors required about 170,000 samples from animals and people in about 30 countries. As PREDICT expects that there are about 1.67 million yet-to-be-discovered viruses, both computing and sample-collection challenges arise.
Another important factor in preventing (or managing) an epidemic is predicting the spread of an outbreak. In predicting the spread of an outbreak we have to deal with numerous factors, including human behavior, social conditions, environment, etc. Outcomes of such predictions give insight into topics such as the number of cases and deaths. Based on these numbers, policy makers take (preventive) measures. However, combining this data and running different scenarios with multiple contexts to measure the economic and non-economic effects of measures remains a challenge. Running predictions such as these is similar to the Netflix Problem.
Predicting the evolution of the coronavirus is a machine learning application. Similar to predicting the evolution of the stock market, or recommending the next movie on Netflix, it looks for meaningful patterns in the data. In stock market predictions, analysis of the covariance matrix is used to suggest new assets trades, or hedge portfolio items. In recommendation systems at Netflix, user behavior is categorized in different user groups by searching for meaningful features. Chances are that if person A has watched The Notebook, a recommendation of another romantic movie will be successful. In the case of the coronavirus, we have the same problem. With limited information, we are trying to find the features that best define the spread and transmission of the virus. Using these features, we want to predict the number of fatalities, new diseases, and the locations of break-out zones.
Such recommendation systems typically rely on principal component analysis (PCA). PCA is a machine learning technique that is used for determining the features in a data set that have the most predictive value. With PCA, you can summarize the data into a small set of descriptive features, the largest principal components (in the Netflix example this could be the genre romantic dramas), and then use those for prediction.
Calculating the principal components is a mathematical exercise that relies either on singular value decomposition (SVD) or eigenvalue decomposition. Computing the SVD has a time complexity of O(n^3), where n is the size of the covariance matrix. For small datasets, this is no problem, but for larger datasets it becomes intractable. Take, for example, the Netflix problem, where the datasets include tens of thousands of movies and millions of users. The resulting dataset includes hundreds of millions of data points. Calculating the principal components in this case becomes impossible.
Instead, an approximation is made. Rather than determining all principal components, only the few most prominent components are determined. Using the most prominent principal components, an elementary prediction can be made that reflects the preferences of broad user groups. However, for specialized predictions, more principal components must be included. For example, a prediction of a movie that is not only in your favorite genre, but also stars your favorite actor, might be more successful.
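For concreteness, the classical computation looks like this in NumPy: a full SVD of the centered data, from which only the top-k components are kept. The data here is random filler, purely for illustration:

import numpy as np

def top_principal_components(X, k):
    X_centered = X - X.mean(axis=0)     # center each feature
    # The full SVD is the O(n^3)-style step that becomes intractable at scale.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return Vt[:k]                       # rows are the top-k principal components

ratings = np.random.rand(1000, 50)      # e.g., users x movies
components = top_principal_components(ratings, k=3)
summary = (ratings - ratings.mean(axis=0)) @ components.T   # low-dim view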
Quantum computers may be able to determine the principal components much more efficiently. Quantum principal component analysis (qPCA) efficiently exploits the inherent structure of the quantum state in a process called self-tomography to reveal information about the eigenvectors corresponding to large eigenvalues. This process can be used to encode information about a recommendation model in a quantum state, and the principal components can be found by mapping them to the largest eigenvectors.
Often, the bottleneck for a quantum algorithm is the transformation of classical data into quantum data. However, a qPCA algorithm does not require access to the full dataset. Instead, it efficiently sub-samples the dataset by performing function calls when needed. In a Grover-type operation, the dataset is sampled without the need for quantum RAM or an expensive transformation of classical data to quantum data.
Altogether, qPCA may provide an exponential speed-up over any classical algorithm. Using qPCA, the internal structure of the data is revealed in a number of holistic variables with high predictive value. In Markovian systems, systems where the most recent events have the most predictive value, we would want to recalculate the principal components regularly. In these systems, qPCA may have a significant impact. In the case of the stock market, this means that risk may be reduced by identifying relevant correlations. For Netflix, this means better suggestions for new series or movies.
The evolution of the coronavirus is a typical Markovian process. In the past few months, the news seemed to change the perspective on a daily basis. At the same time, accurate predictions of the number of fatalities or required IC beds were even more important. For these predictions, where regular and precise recalculation of the principal components is required, qPCA may provide significant impact.
Looking at the impact that COVID-19 is having on society, economy, and healthcare, we can envision future use cases for the role of quantum computing in vaccine development, optimization solutions, and in identifying and managing the spread of viruses.
As the COVID-19 crisis has been stretching its endurance, the societal and financial effects are accumulating. For example, the US has seen its GDP plummet by 30 percent, and COVID-19 has contributed to driving an additional 12 million people below the extreme poverty threshold in 2020. This leaves us to presume that investing in any solution that has the probability of shortening the stretch of the next pandemic is worthwhile.
The challenge with aligning and allocating investments lies in the still (largely) unknown roadmap to clear use of quantum computing. Many use cases are still to be defined, and quantum’s real potential is expected only in ten years’ time. Nevertheless, there is value to be realized with quantum computing in a shorter period, as NISQ computers might already speed up computing. Furthermore, developments in the number and stability of qubits on the one hand, and the efficiency of quantum algorithms on the other, are progressing at a rapid rate. Therefore, it is to be expected that clear business cases will be presented within three years. Because quantum computers are a natural fit for quantum chemistry, we may expect that a quantum advantage will first be realized in this domain.
Even though quantum computing’s added value is still a couple of years down the line, we should prepare for it. Without dismissing the great technological developments that have been made before and during the current pandemic, different skillsets will likely be required in applying quantum computing versus using classical computing. This stems from the fact that the very foundation of quantum computing is different, and therefore the layers building on that foundation, such as programming languages, will be different too. Implementation of middleware, infrastructure, and development tools will be complex and time-intensive, and the necessary skills will be hard to find. Companies would be smart to facilitate early quantum enthusiasts and encourage them to create awareness and explore quantum use cases. In the long run, we will need a variety of profiles, including quantum algorithm experts, developers, testers, hardware engineers, and business developers. If quantum computers are to become mainstream in ten years, students should enroll today.
Behavioral biometrics: What is it and how does it work?
Behavioral biometrics are increasing in popularity against a backdrop of increasingly sophisticated cyberattacks, as conventional authentication methods can fall short of stopping malicious actors.
The advent of multi-factor authentication (MFA) technologies, particularly those relying on biometrics, has presented hackers with a new challenge, but no system is ever entirely secure.
To provide persistent, adaptive authentication while also reducing end-user friction, some organizations have turned to a relatively new form of biometric authentication, one measuring users’ behavior patterns. Here’s an overview of this evolving technology.
What is behavioral biometrics?
It is a technology that measures unique patterns in human activity. The term is often contrasted with physical or physiological biometrics, which refer to the analysis of human characteristics like iris patterns or fingerprints.
Behavioral biometric tools can identify people from patterns in activity like gait or keystroke dynamics.
These tools are used by financial institutions, businesses, governments and retailers for user authentication (1:1 verification of a claimed identity), rather than 1:N identification.
Unlike conventional authentication methods that work when a person’s data is collected, for example by touching a sensor, behavioral biometric systems can authenticate continuously.
How does behavioral biometrics work?
Behavioral biometrics compare an individual’s identifying pattern to past behavior, often providing continuous authentication throughout an active session or recording.
The behavior is sometimes captured by an existing device, like a smartphone or a laptop, and sometimes by a dedicated device such as a sensor array for measuring footfalls in gait recognition.
The biometric analysis returns a score that represents the probability that the person performing the actions is the one who set the baseline behavior for the system.
Dissimilarity between a customer’s behavior and the expected profile prompts a step up to additional layers of authentication that can include a fingerprint scan, taking a selfie or other requests.
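A toy sketch of that decision logic, using keystroke-interval timings as the behavioral signal (the enrolled profile and threshold are invented for illustration):

import math

def similarity(observed, profile):
    # Inverse of the distance between observed and enrolled timings.
    dist = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, profile)))
    return 1 / (1 + dist)               # 1.0 means identical behavior

def authenticate(observed, profile, threshold=0.8):
    if similarity(observed, profile) >= threshold:
        return "continue session"
    return "step up: request a fingerprint scan or selfie"

enrolled = [0.21, 0.35, 0.18, 0.40]     # seconds between keystrokes
print(authenticate([0.22, 0.33, 0.19, 0.41], enrolled))   # continue session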
The technology can be employed as part of employee or customer access control, preventing account takeover, detecting social-engineering scams and spotting money laundering.
Big Data, But Not as we Know It
As astronomy equipment becomes ever more complex and harvests ever increasing amounts of information from our skies, companies are applying the theories of big data to try and answer one of humanity’s most profound questions – are we alone?
The amount of data collected is mind-boggling. Take the example of the Square Kilometre Array Project (SKA) – a $2 billion joint radio telescope development between Australia, South Africa and the UK. Upon the launch of Phase One in 2020, the project will collect a staggering 915 petabytes of data per day (approximately 960 million gigabytes) – that’s nearly 2 million years’ worth of playback on a standard MP3 player. By the time the operation is fully functional in 2025, the SKA will produce ten times the global internet traffic of the whole of 2013.
While the search for alien life is not the project’s primary purpose, if life does exist the dishes have a good chance of finding it. The SKA’s telescopes will be so powerful and sensitive that the scientists involved predict they will be capable of detecting an airport radar signal 100 light-years away.
Processing and Managing
Naturally, making sense of all this data requires some serious computing power, and the SKA central supercomputer doesn’t disappoint – possessing the processing power of 100 million high-spec PCs and able to perform an exaFLOP (10^18, or a million trillion) operations per second.
Storing and archiving this data also requires some creative solutions. NASA’s Jet Propulsion Laboratory (JPL) is already working on developing open-source software that uses innovative cloud computing techniques to organise, manage and catalogue the huge amounts of daily data.
All this alien chasing also has real-world benefits. Today’s computers are still limited to petaFLOPs, which means the SKA’s computational technology will be applicable to large-scale computing everywhere, and the JPL team hopes the complex data management tools they develop will help businesses archive and access their data with more ease.
Back on Earth
The vast amount of data collected has offered exciting opportunities to companies back on earth. A good example is Skytree, a 20-employee start-up in San Jose specialising in developing machine learning through the use of big data. The company has recently partnered with the SETI Institute and hopes to use complex algorithms to sift through the massive volumes of data to find patterns, relationships, anomalies and outliers. The techniques allow the Institute to look for signs of extra-terrestrial life in data that was previously discarded due to its size – indeed, it’s possible that evidence of extra-terrestrial life is already hiding in their archives.
It’s not just tech companies and scientific researchers who have access to the big data that’s been collected. SETI Live allows enthusiasts to use their own PCs to access live radio frequency signals from the SETI Institute’s Allen Telescope Array (ATA) – giving them the tools to identify and report any suspicious patterns that could be indicative of alien life.
Whether or not we will ever find extra-terrestrial life is naturally debatable, but one thing is certain – if we do detect that life exists, then big data will have played a massive part in the discovery.
What do you think? Is this a useful deployment of big data, or could the money and manpower be better spent elsewhere? Let us know in the comments below.
By Daniel Price
Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former Financial Consultant, he now balances his time between writing articles for several industry-leading tech (CloudTweaks.com & MakeUseOf.com), sports, and travel sites and looking after his three dogs.
DARPA launches a program to interest grade school students in computer science careers.
The Defense Department’s research and development agency has started an initiative to increase the number of computer science graduates in the United States. The three-year, $14.2 million program will use a variety of online tools and educational approaches to guide interested middle and high school students into pursuing computer science careers.
Managed by the Defense Advanced Research Projects Agency, the computer science-science, technology, engineering and mathematics (CS-STEM) program’s goal is to expand the talent pool of applicants available for jobs to support secure DOD networks and to accelerate computer science innovation.
The interest in increasing the number of CS graduates has national security implications, DARPA officials said. According to the agency, since 2002, the number of U.S. college graduates with computer science or related degrees has dropped by 58.5 percent. To reverse these numbers, CS-STEM will focus on creating interesting activities and opportunities for middle and high school students that will increase in complexity as they progress through their education.
CS-STEM is built around three components: a distributed online community, an online robotics academy, and an extracurricular online community for students. The first section, known as “Teach Ourselves,” developed by the University of Arizona, will be a distributed online community of students and teachers. This environment is intended to introduce students to the knowledge economy, which is a matrix where students can track their progress through a variety of subjects.
The second component is the Fostering Innovation through Robotics Exploration (FIRE) online robotics academy. Developed by Carnegie Mellon University, it will allow students to sharpen their skills at solving complex problems by educating them with algorithmic thinking skills, engineering processes, math, and programming knowledge. DARPA officials said FIRE is intended to significantly improve the educational value of robotics competitions by developing online cognitive tutors and simulation tools designed to use robots and programming to teach major mathematics concepts.
The final part of CS-STEM is an extracurricular online community for middle and high school students. It will use ongoing, age-appropriate practice, activities and competitions, educational content, discussion boards, mentoring and role models to develop skills and foster interest in CS-STEM careers. Collaborative activities, puzzles, games, webisodes, workshops and other content will be used to attract students to the site on a daily basis.
It’s no surprise that most people are frustrated by having to juggle passwords to access their ever-growing list of digital accounts. Once upon a time, it made sense to gate-keep access to digital services with usernames and passwords. However, now that the average user has around 150 online accounts (with this set to increase to 300 by 2022), it’s no wonder people are struggling to remember all their passwords.
To simplify things, some people reuse the same password for multiple accounts or choose simple, easy-to-guess passwords, while others outsource the task of monitoring their passwords to password management services. These habits can lead to users suffering major data breaches, but even when they do lose valuable data and money through breaches, most users still do not update their passwords. It’s clear that there is drastic need for change when it comes to managing passwords. Fortunately, the future has finally arrived, and it looks like it may be passwordless.
Understanding passwords: Different forms of authentication
In order to understand the future of passwords, it’s important to first understand what passwords are. Primarily, a password is ‘something you know’ – a secret combination of letters, digits or symbols that allows a user access to an account. More technically, it can be described as a static knowledge-based authentication credential.
There are two other types of authentication credentials: possession and inherence. Possession is ‘something you have’; for example, a token or digital device that can receive a one-time password, or an ‘approve’ button delivered via push notification in a mobile app. Inherence is ‘something you are’ – a unique biological trait like a fingerprint or facial ID.
All authentication works on the simple premise that a user will be granted access to an account once they have proved their identity by providing a username and verifying it with at least one form of authentication. So, if we want the future to be passwordless, which IT professionals are appealing for, we will need to switch to other forms of authentication.
How to become passwordless: Biometrics and possession authentication
Fortunately, we already have the means to leave the old days of passwords behind us, many of which are already commonplace. For example, most people now use biometrics, such as fingerprint technology or facial recognition, to access their devices rather than a PIN. Companies such as Apple, Google and Samsung all offer these options on their mobile devices, and even financial institutions have begun to adapt accordingly.
Biometric authentication goes a long way towards improving security and user-experience. However, it’s possible to further enhance this level of security by combining multiple forms of authentication, one of which should ideally be communicated through an out-of-band channel. In this way, users can prove that they have more than one of the devices or channels linked to their identity, so that if one authentication channel is compromised by a malicious party, there is another form of authentication that can still provide a barrier to prevent bad actors gaining access.
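As a sketch of how two factors combine, the snippet below checks a knowledge factor (a password hash) plus a possession factor (an RFC 6238 time-based one-time password generated on a second device). The secret handling is deliberately simplified for illustration:

import hashlib, hmac, struct, time

def totp(secret, step=30, digits=6):
    # RFC 6238: HMAC-SHA1 over the current 30-second time counter.
    counter = int(time.time() // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password, otp, stored_hash, secret):
    knows = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), stored_hash)
    has = hmac.compare_digest(otp, totp(secret))
    return knows and has                # both factors must pass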
How a combination of biometrics and possession authentication can improve security
Using a combination of authentication methods is called two-factor authentication (2FA) or multifactor authentication (MFA). Multifactor authentication is the best way to secure an account – it makes an account 99.9 percent less likely to be compromised, according to Alex Weinert, Director of Identity Security at Microsoft. That said, over half of the 3,500 users in a Google survey did not know what 2FA and MFA were, while another report found that only 10 percent of Google accounts use 2FA. Therefore, to ensure a passwordless future, we need to educate people how to use these methods of authentication. In the meantime, however, there are other forces at play that may in fact speed-up adoption.
With biometric capabilities now available on most smartphones, we are starting to see this authentication option being applied in use cases other than accessing the phone itself: according to Deloitte’s UK-focused Mobile Consumer Survey 2019, nearly half of respondents with a smartphone now use fingerprint recognition, and of smartphone owners who use biometric readers, 48 percent have used this method of authentication to authorize payments (up from 35 percent in 2017) and 32 percent have used them to authorize money transfers to other people or organizations (up from 20 percent in 2017).
This behavior, combined with industry leaders starting to align themselves with an international standard called FIDO (Fast IDentity Online), will have a major impact on security and authentication across industries. The objective of the FIDO international standard is to make passwords obsolete by replacing them with possession and biometric factors. The standard also uses encryption technology to ensure that users’ credentials cannot be accessed or stolen.
What this means in practice is that the barriers to implementing secure authentication through biometric or hardware devices are lessening significantly. Leading web browsers, smartphone platforms, software providers, and hardware providers are already releasing FIDO-certified hardware as well as certifying their platforms for FIDO authentication. Several tech giants – Google, Microsoft, and Apple – already support this standard. Because FIDO simplifies the risk and process of using biometrics or hardware for authentication, more online services and hardware (smartphones, laptops, desktops, etc.) will adopt this method of authentication. As a result, a higher percentage of the population will utilize and trust biometrics for all services that require authentication as it significantly eases the authentication process and increases security.
Biometrics, PIN keys, and second-factor devices are all FIDO-secure methods of authentication that have demonstrated decreased checkout abandonment and fraud incidence rates; however, authentication by biometric results in the least friction for consumers in the authentication process. Additionally, for payment institutions requiring extreme certainty and verification of user devices, the registration protocol outlined by FIDO includes an attestation procedure, which further minimizes fraud.
With a plethora of ratifying data pointing to a continuing upward trend in biometric usage, combined with the industry-wide use of FIDO, this could be the solution that will finally free us from the burden of endless passwords, opening the doors to a brighter, passwordless future.
Simon Armstrong, VP of products, Entersekt
Mass spectrometry is a term that many people haven't heard before. The word can be intimidating, but mass spectrometry isn't as complicated as it sounds. It is an analytical technique commonly used in laboratories that allows scientists and researchers to understand molecules, which in turn tells us more about the world we live in. We put together a guide to the important things to know about mass spectrometry, so keep reading to learn more.
What Is Mass Spectrometry?
Analytical tools are extremely helpful because they let us study things we might otherwise be unable to observe directly. Mass spectrometry (MS) measures the mass-to-charge ratio of molecules in a sample. MS can be used to detect impurities in a sample, and it can also examine a purified protein. The instrument consists of three main parts: the ionization source, the mass analyzer, and the ion detection system. The column is also worth mentioning, as it is what carries the sample molecules into the instrument.
What Is Mass Spectrometry Used For?
Mass Spectrometry is used in many different industries, as it provides information about the sample that’s being analyzed. Although many scientists and researchers use this technology, there are many other industries that do as well. For example, researchers use it in chemistry, physics, and biochemistry. Mass spectrometry is also used in the pharmaceutical industry to develop drugs and vaccines. This may come as a surprise, but mass spectrometry is also used in forensic science to analyze samples from documents or crime scenes. Lastly, it can also be utilized in environmental science.
Taking Care of a Mass Spectrometer
One of the most important things to know about mass spectrometry is that the columns must be kept clean, and understanding how to troubleshoot the instrument is essential. A contaminated column can affect the machine's functionality and prevent it from producing accurate results. Consider flushing the column with solvent before and after each use; this helps ensure the spectrometer keeps working properly.
IP addresses serve the purpose of uniquely identifying the individual network interface(s) of a host, locating it on the network, and thus permitting the routing of IP packets between hosts. For routing, IP addresses are present in fields of the packet header where they indicate source and destination of the packet.
IPv6 addressing is the successor to the Internet’s first addressing infrastructure, Internet Protocol version 4 (IPv4). In contrast to IPv4, which defined an IP address as a 32-bit number, IPv6 addressing has a size of 128 bits, vastly expanding the addressing capability of the Internet Protocol.
IPv6 addressing is classified by the primary addressing and routing methodologies common in networking as follows: unicast addressing (one-to-one delivery to a single interface), anycast addressing (delivery to the nearest member of a group of interfaces), and multicast addressing (one-to-many delivery to all interfaces that have joined a group).
IPv6 addressing does not implement broadcast addressing, the use of the all-nodes group is not recommended, and most IPv6 addressing protocols use a dedicated link-local multicast group to avoid disturbing every interface in the network. | <urn:uuid:66d6795d-be66-4ba3-afb4-21f4f7289cbe> | CC-MAIN-2022-40 | https://cyrusone.com/resources/tools/ipv6-addressing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00505.warc.gz | en | 0.901475 | 192 | 3.578125 | 4 |
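As a quick illustration, Python's standard-library ipaddress module can parse IPv6 addresses and report these properties directly (the addresses below are documentation and well-known example values):

```python
import ipaddress

# 128-bit IPv6 vs 32-bit IPv4: 2**128 vs 2**32 possible addresses
addr = ipaddress.ip_address("2001:db8::1")
print(addr.version)    # 6
print(addr.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001

# The addressing categories can be inspected directly:
print(ipaddress.ip_address("ff02::1").is_multicast)   # True  (all-nodes group)
print(ipaddress.ip_address("fe80::1").is_link_local)  # True  (link-local unicast)
```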
In recent years, identity fraud has been researched and quantified possibly more than any other societal issue. However, all measurements of identity fraud have, to date, relied on victim accounts, whether recorded by Gartner or Javelin in consumer surveys or as reported to the Federal Trade Commission.
Victim reports, however, account for only a small portion of all identity fraud — 10 percent to 15 percent, based on our research at ID Analytics. That type of identity fraud, where a consumer’s personal information is stolen and used to perpetrate financial fraud, is what we call “true name identity theft.”
While “true name identity theft” is a very real problem, the reality is that the majority of identity fraud is never reported by consumers because consumers are never directly victimized.
This classification is what we call "synthetic identity fraud," and it accounts for 80 to 85 percent of all identity fraud. Synthetic identity fraud is when a fraudster creates, or synthesizes, an identity using a combination of real and false identity elements — say, a false name, my Social Security number, and the address and mobile phone number of the fraudster himself.
With synthetic identity fraud, there’s no consumer victim to say “that’s not me.” Meanwhile, the fraudster continues to open new accounts and have credit cards, mobile phones and other goods sent to his address. As you can imagine, this type of identity fraud is much harder to detect.
Armed with this knowledge based on prior research into true-name vs. synthetic identity fraud, ID Analytics set out to measure identity fraud rates by geography, all the way down to the five-digit zip code level.
This research, which was the first ever to be based on actual and attempted frauds rather than on consumer victim reports, revealed results that were both expected and surprising.
First, some definitions: What do we mean by “actual and attempted frauds?” The data in this research was drawn from ID Analytics ID Network, an identity fraud prevention system that gives visibility into fraudulent patterns of behavior using sophisticated analytics.
The ID Network comprises three billion identity elements — including names, addresses, Social Security numbers and phone numbers — which are contributed in real time, by organizations spanning multiple industries, for the sole purpose of preventing identity fraud.
The data is drawn from a variety of sources including applications for credit, debit and new accounts. Fraudulent applications are tagged as such. Applications in this analysis were submitted to credit granters from January 2005 through June 2006.
In our data study, fraud rates were calculated based on the total number of reported identity frauds divided by the number of applications; as a result, the population density was scaled out, enabling comparisons among areas with differing populations.
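As a hedged illustration of that normalization (the numbers below are made up, not ID Analytics data), dividing fraud counts by application volume is what allows a small town to be compared fairly against a big city:

```python
# Hypothetical counts for two regions; illustrative only.
regions = {
    "big city":   {"frauds": 900, "applications": 1_000_000},
    "small town": {"frauds": 12,  "applications": 4_000},
}

for name, r in regions.items():
    rate = r["frauds"] / r["applications"]
    print(f"{name}: {rate:.4%} fraud rate")

# big city:   0.0900% -> far more frauds in absolute terms...
# small town: 0.3000% -> ...but the higher *rate* is in the small town
```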
So what did we learn about those populations? Not surprisingly, identity fraud rates were highest in major metropolitan areas, namely New York, Detroit and Los Angeles. What was surprising, however, was that identity fraud rates were also high in some less populated cities like Little Rock, Ark., and Springfield, Ill.
Top Ten Areas
When examined at a 3-digit zip code level, the ten metropolitan areas with the highest identity fraud rates are:
1. New York, N.Y.
2. Detroit, Mich.
3. Los Angeles, Calif.
4. Little Rock, Ark.
5. Greenville, Miss.
6. Atlanta, Ga.
7. Phoenix, Ariz.
8. Portland, Ore.
9. Dallas, Texas
10. Springfield, Ill.
Identity fraud in cities is relatively easy to explain. When large populations of people live together in close quarters, it’s easier to access and steal personal information, usually out of mailboxes, or at busy places of work, from file cabinets, databases and the like.
It might seem harder to explain high rates of identity fraud in less populated areas. We don’t think so. This is where our previous research on synthetic identity fraud became especially relevant.
The addresses on the fraudulent applications in our recent research may have belonged to the victims of the identity fraud if the perpetrator were to use the complete and accurate identity information of the victim.
However, we believe the majority of the addresses were associated with the perpetrators of the fraud using synthetic identities of real and false identity elements. While the applications included real addresses for the purposes of verification and receipt of credit cards and goods, the addresses may have been residences, places of work or any other physical location where fraudsters could conveniently receive the tools and plunder of their trade.
In other words, a hardworking fraudster sitting comfortably in Little Rock could well be committing hundreds of frauds using synthetic identities. Because the citizens of Little Rock aren’t being defrauded, no one is filing reports with the police.
The victims in this case are the businesses themselves — the companies issuing the fraudster credit cards, mobile phones, and the companies selling merchandise that he purchases with his new credit. These transactions are typically not face-to-face for good reason. They’re typically online. They’re often e-commerce transactions.
The other explanation for higher identity fraud rates in these lower populated areas is that multiple identity criminals are actually operating in an organized manner. ID Analytics analyzed the data down to the 5-digit zip code level, giving precise visibility into concentrations of identity fraud, which may indicate fraud rings or some other criminal activity.
While some of the zip codes predictably included places like Queens, N.Y., and the south side of Chicago, the riskiest zip codes also included Merlin, Ore.; Lilesville, N.C.; and South Hill, Va.
It’s hard to say whether the overall rate of identity fraud is going up or down. Javelin Strategy and Research recently announced that identity fraud is dropping. Meanwhile, according to Gartner’s survey of 5,000 Americans, identity theft has increased more than 50 percent since 2003.
Regardless, identity fraud continues to have an impact on e-commerce business, especially given the “faceless” nature of the transactions, and especially given the high rates of synthetic identity fraud. The only way businesses can protect themselves is by taking steps to determine if customers are who they say they are before granting credit or opening new accounts.
Credit scores are no indication of whether customers are who they claim to be. Identity scores are. Fortunately, plenty of technologies are available today to help determine to a high degree of accuracy with whom you are doing business. Whether that person is sitting in New York or Springfield, Ill., shouldn’t matter.
Stephen Coggeshall, PhD, is chief technology officer for ID Analytics, an identity risk management company based in San Diego, Calif. | <urn:uuid:7fe77aec-cb32-4d2e-966b-f49ec34ae9e0> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/id-theft-knows-no-boundaries-56864.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00505.warc.gz | en | 0.940649 | 1,420 | 2.765625 | 3 |
Delivery robots – just a few years ago, the stuff of pure science fiction – are now very much a reality and quickly becoming a part of everyday life for many of us. In fact, I will usually come across five or six when I go for an evening jog in my hometown of Milton Keynes, England!
These particular ones belong to Starship, a company that deployed its first autonomous delivery bots just three years ago and now operates a fleet of over a thousand, in several locations in the UK, USA, and very soon in mainland Europe too.
I spoke to their CEO, Alastair Westgarth, who told me that his robots had traveled a total of 3.6 million kilometers to make 2 million deliveries. Powered by machine learning algorithms, they are constantly getting smarter, meaning they become more efficient as well as safer.
Of these journeys, the vast majority are completed fully autonomously; however, human operators are always ready to step in when needed. Westgarth told me, "Safety is always our number one priority, so if a robot encounters something unusual, it will stop and send an alert to our remote operators.
"Most of the time, they will say it's okay, proceed – and release the robot. That's 90% of the interactions … say there's a new crossing that's highly complex. But 99% of the time, they are driving completely autonomously."
This increasingly confident performance is something I can vouch for myself, having witnessed their evolution with my own eyes. When they first appeared on the streets near my home, they would come to a halt any time a person moved close to them. Now, it is evident that they’ve learned to navigate by themselves, and I will see them just make small corrections to their course in order to avoid moving into my path.
The robots are packed with sensors, including cameras with machine vision, radar, and ultrasonic sensors that detect solid objects like curbs and walls. They cross 80,000 roads every day; initially, every one of these crossings was supervised by human operators, but as the robots learned more about their environments, today almost every crossing is carried out autonomously.
“We’ve driven 3.6 million kilometres so that’s a significant amount of ground we’ve covered, and we do it 24/7, in the dark, in the snow, in heavy rain … when we first encountered snow it was something the robots weren’t familiar with, it produced different images from the cameras, and the sensors reacted differently, so we had to train our systems to deal with that environment. They are constantly learning … our autonomy today is orders of magnitude bigger.”
When we talk about robots, one topic we can't afford to ignore is the potential impact on human jobs. Delivery robots clearly pose a threat to human employment, and while a common reaction is to suggest that humans could be doing better things with their brains and bodies than the relatively menial task of making deliveries, these are nevertheless jobs that allow people to earn a living and support families.
Westgarth says that he is confident that enterprises like Starship will make more job openings available to human beings than it will take away.
“We believe in our heart of hearts that as we bring in more technology that makes the experience more efficient and more valuable … that we add to the employment base. We're migrating employment, and … at the end of the day, we hope that the number of jobs we create offsets the number of jobs that may be lost by autonomous delivery. If we look at history, as efficiency and autonomy come in, the economy grows, and more jobs are created - an obvious example is that there are no stagecoach drivers now, but there are car drivers.
“At the end of the day, there will be more people taking care of our robots, more people providing services to the merchants we deliver for, people programming our software, developing our apps on phones and tablets. So employment will change, but we believe it will go up," Westgarth says.
Starship’s robots operate at what is known as the “last mile” of the delivery process – in reality, the last one to three miles. This involves delivering goods from supermarkets, grocery stores, takeaway food venues, and restaurants. Other elements of the delivery and logistics industry are concerned with longer distance domestic and international delivery, and here the impact of automation will be felt soon too – with autonomous shipping, delivery vans, and airborne drones all on the horizon. Of course, I took the opportunity to ask Westgarth where he sees Starship fitting into this and how society will adapt to the broader challenges of autonomous delivery, going forward.
“We have no intention of delivering Sprinter van levels … over ten or twenty kilometers", he tells me. "There's a need for that and ways to provide it, and today all of that is still manual. But probably, it will become more autonomous in the future. [Delivery is] multi-modal … its very difficult to scale these things economically with purely human-driven options … so there’re challenges there. [But] it's much easier to scale the last-mile delivery, with an autonomous approach.
“We’re trying to deliver goods in as efficient and effective a manner – where we're adding to the value chain instead of taking away. I think it's a very bright future, we'll see autonomous vehicles delivering on the road, we'll see more on the sidewalk, and we'll see other options like drones as well. We want to be part of that future."
You can watch my interview with Alastair Westgarth, CEO of Starship, in full here, where we also cover issues including skills and qualities that workers will need to thrive in an autonomous future. | <urn:uuid:3b5877b0-15ef-447a-8188-9c3bc4a04bef> | CC-MAIN-2022-40 | https://bernardmarr.com/the-future-of-delivery-robots/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00505.warc.gz | en | 0.967498 | 1,200 | 2.59375 | 3 |
Quantum Teleportation Achieved With 90% Accuracy Over 44km Distance
(ScienceAlert) Scientists are edging closer to making a super-secure, super-fast quantum internet possible: they’ve now been able to ‘teleport’ high-fidelity quantum information over a total distance of 44 kilometres (27 miles).
Both data fidelity and transfer distance are crucial when it comes to building a real, working quantum internet, and making progress in either of these areas is cause for celebration for those building our next-generation communications network. In this case the team achieved a greater than 90 percent fidelity (data accuracy) level with its quantum information, as well as sending it across extensive fibre optic networks similar to those that form the backbone of our existing internet.
“We’re thrilled by these results,” says physicist Panagiotis Spentzouris of Fermilab, the particle physics and accelerator laboratory that carried out the work together with the California Institute of Technology (Caltech).
It’s never before been demonstrated to work over such a long distance with such accuracy, and it brings a city-sized quantum network closer to reality – even though there are still years of work ahead to make that possible. | <urn:uuid:f3b8cd74-5910-4490-b975-e8273e620ce7> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-teleportation-achieved-with-90-accuracy-over-44km-distance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00505.warc.gz | en | 0.905159 | 252 | 2.546875 | 3 |
There are literally hundreds of ways to secure and harden an Nginx server after an attack.
But what does it REALLY take to get the server cleaned up and secure again? What are the essential changes you have to make to feel safe again?
To answer that question, we'll have to look at the most widely recognized security issues that affect an Nginx server. These are the issues seen most often:
- Malware uploads and spamming
- Botnet attacks
- Brute force attacks
- Vulnerabilities in server software (SSL libraries, Nginx itself, and so on)
- Comment spam
As the list shows, you should treat any site vulnerability or hack as a high-severity event.
That is why we recommend hardening your Nginx server, especially if your site was hacked recently.
Where possible, give extra thought to the changes you can make to a system that will help avoid future site vulnerabilities. Without further ado, here are the best steps you can take to secure an Nginx server after a hack.
1 – Secure the application that was attacked. Audit and fix errors in your web application
Being proactive is always better than being reactive. That is why we suggest that, after a hack, you patch every web application on your Nginx installation as soon as a vulnerability is disclosed.
Especially if your Nginx installation has been hacked, you should continuously check the versions of the web apps installed on your server, compare them against the latest upstream releases, and apply patches whenever a version is found to be outdated or insecure.
2 – Change Nginx settings for security
Nginx, out of the box, is configured to serve an application or task; it is not hardened by default.
The most common settings you can change are:
- Disable all requests with the exception of GET, POST, and HEAD
- Put a restriction on the request & buffer size to limit buffer overflow attacks.
- Disable all NGinx features & modules that you don’t need or don’t use.
- Limit the number of connections from a single IP address to 10.
- Remove Nginx headers & PHP header info, so hackers can’t get info about the server.
- Enable security headers that will block common attacks, such as “X-XSS-Protection.”
You can add extra settings depending on how your application was hacked, or what vulnerability was used to exploit Nginx.
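As a hedged sketch only (values are illustrative, and note the context each directive belongs in), those settings translate into configuration along these lines:

```nginx
server_tokens off;                 # hide the Nginx version in headers and error pages
# (PHP's own header is disabled separately via expose_php = Off in php.ini)

client_max_body_size    1m;        # cap request body size to limit buffer abuse
client_body_buffer_size 8k;

# In the http{} block: track connections per client IP...
limit_conn_zone $binary_remote_addr zone=peraddr:10m;

server {
    # ...and cap each IP at 10 concurrent connections:
    limit_conn peraddr 10;

    # Allow only GET, POST and HEAD:
    if ($request_method !~ ^(GET|HEAD|POST)$) {
        return 405;
    }

    # Security headers that blunt common attacks:
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
}
```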
3 – Use a secure SSL certificate with secure ciphers
A key step in recovering from a hack is reviewing your SSL/TLS setup. Most servers rely on OpenSSL, and while it is free, it has drawn a great deal of criticism in recent years in light of wave after wave of security vulnerabilities.
A considerable portion of these issues happened because people kept using old and weak ciphers and protocols. That is why you should audit the SSL cipher and protocol list in use on your Nginx installation at least once every month.
Make it a point to remove old and insecure ciphers and protocols, such as RC4, SSLv2, SSLv3, and so on, and use only those proven to be secure. Furthermore, we suggest enabling HSTS (HTTP Strict Transport Security) in eCommerce settings to reduce the risk of downgrade and phishing attacks.
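A minimal sketch of such a TLS server block (certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;       # no SSLv2/SSLv3 or early TLS
    ssl_ciphers HIGH:!aNULL:!MD5:!RC4;   # drop weak and legacy ciphers
    ssl_prefer_server_ciphers on;

    # HSTS: browsers refuse plain-HTTP connections to this host for a year
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```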
Finally, we recommend setting up auto-renewal for all certificates installed on the server so that it stays secure even if you forget to renew your SSL. A lapsed payment card on file should never be the reason security lapses on your website.
4 – Install server patch fixes as soon as they are available
You have to set up continuous monitoring for new kinds of attacks appearing on the horizon. You also have to stay vigilant and, if possible, let security specialists monitor your servers 24/7 for security issues, hacks, and pending security updates.
If a new vulnerability is disclosed that doesn't yet have an official fix, security and programming experts can apply a "hot-fix" for the vulnerability so it can't be abused until an official patch comes out.
5 – Stop basic attacks with a Web Application Firewall
The web is hit with many new web application vulnerabilities each day. However, almost all of them rely on well-known attack techniques seen many times before, such as cross-site scripting, remote file inclusion, path traversal, SQL injection, and so forth.
What's more, especially if your Nginx installation has been hacked recently, you need to look into Web Application Firewalls (WAFs), which identify malicious behavior arriving at your server so it can be blocked. You can expect good results from open-source WAFs such as ModSecurity and NAXSI.
The vital thing to remember is that your firewall is only as good as its rules and configuration. Depending on how your site was hacked, you or your team should write your own custom rules whenever the stock rule sets don't adequately protect your servers against a new threat.
Nowadays, a wide range of server hacks and malware infections are carried out using automated tools, and these attacks rely on relentless "brute force" techniques that cycle through passwords or hammer a login screen.
Such patterns can be effectively identified by any well-designed firewall. If you are using Nginx, you can pair it with a host firewall such as CSF and tune its settings so that a wide range of malicious behavior (port scanning, brute force attempts, and the like) is detected and blocked before requests even reach the Nginx service.
There are a hundred different ways to harden an Nginx server after an attack, but the essentials matter most. Today we've looked at the top tips for fixing a hacked Nginx server and hardening it against hackers and malware after an attack.
The pandemic has redefined healthcare delivery, accelerating digital transformation initiatives from years to months and, in some cases, even weeks. While going digital has many benefits and efficiencies, it also introduces many new challenges and increases other known risks related to privacy, security, and safety.
Healthcare records: the gift that keeps on giving
Healthcare data is a particularly attractive target for hackers, by one estimate worth 10 to 40 times more than a credit card on the black market. Medical identity theft is often not immediately identified by the patient or provider, giving cyber criminals more time to use stolen credentials. Moreover, the information in medical records can be used for medical billing fraud, identity theft and other big-money scams over time. By comparison, the opportunistic use of stolen credit cards is shorter lived, as victims and financial institutions quickly cancel accounts once fraud is detected.
Then there’s ransomware, which gives bad actors an immediate payoff without the need to monetize any stolen data. In late October, CISA, FBI, and the Department of Health and Human Services (HHS) advised U.S. hospitals and healthcare providers of an increased and imminent cybercrime threat related to ransomware. Among other victims, the University of California San Francisco School of Medicine disclosed in June that it had paid $1.14M to decrypt files after a ransomware attack.
Keeping pace with evolving regulations
Another challenge with digital medicine is maintaining data confidentiality under ever more stringent data privacy regulations, particularly as it relates to the General Data Protection Regulation in Europe and protected health information (PHI) as defined by HIPAA in the U.S., as well as myriad state laws. With digital healthcare delivery, the attack surface grows exponentially to include medical devices, the device firmware, operating software and applications, the instructions these devices receive, and the patient data they collect. In the U.S., the FDA works closely with DHS and other federal agencies, the private sector, and device manufacturers to continually improve the cybersecurity of the healthcare delivery network infrastructure.
With all the bells and whistles, medical IoT brings vulnerabilities
One of the biggest changes as healthcare goes digital is the growing reliance on increasingly connected smart devices, or Internet of Things (IoT) devices, for everything from patient examination through diagnosis, treatment, and monitoring. There are implications for the security of these devices, the data they collect, their connections, and ultimately the safety of the patients who receive the services they deliver – be it the right doses from a medication device such as an insulin pump, or the precision of a remotely controlled surgical robot. This is where a strong root of trust becomes paramount, particularly as organizations propagate trust across different areas in the healthcare field – from medical records to medical devices. But while IT leaders are quickly moving to adopt solutions that support a stronger root of trust through identity, authentication, and encryption, it's a struggle to identify where the sensitive data resides across the enterprise, according to Entrust's 2020 Global PKI and IoT Trends Study.

From protecting the identities and data of hospital employees and patients, to safeguarding medical records and research, to securing medical devices and machines, healthcare digital security matters more than ever. Check out this on-demand webinar to learn more about:
- Building a strong root of trust
- Digital identity verification
- Passwordless access
- Secure prescription signing
- Secure communications | <urn:uuid:69adc4af-3a6d-4d3b-981a-698ac5c91e82> | CC-MAIN-2022-40 | https://www.entrust.com/fr/blog/2020/12/top-three-security-challenges-with-rapid-digital-transformation-in-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00705.warc.gz | en | 0.931257 | 686 | 2.578125 | 3 |
New Training: Author PowerShell Modules
In this 8-video skill, CBT Nuggets trainer Trevor Sullivan teaches you how to create a PowerShell script module. Gain an understanding of module manifests. Learn how to structure your PowerShell module as it grows and how to debug a function in your module. Watch this new PowerShell training.
Watch the full course: Advanced PowerShell Automation
This training includes:
44 minutes of training
You’ll learn these topics in this skill:
Understanding PowerShell Module Structure
Building Your First PowerShell Module
Understanding PowerShell Module Manifests
Splitting Your PowerShell Module Into Pieces
Publishing Your Module to the Gallery
Debugging Your PowerShell Module
What is a PowerShell Module, and How Can It Help PowerShell Functionality?
PowerShell provides users with a very high level of automation and configuration through an object-based scripting language built on top of the .NET framework. One of its most valuable features is the ability to build PowerShell Modules. A PowerShell Module is a user-defined, packaged set of PowerShell functions and cmdlets that together implement a more complex process of actions on PowerShell objects in a reusable, executable format.
PowerShell Modules allow users to save and share complex interdependent PowerShell actions allowing other users to use the module or to keep the module as a standard set of actions to be applied for specific use cases.
PowerShell Modules are composed of four basic components, the code file, additional assemblies, a manifest file and a directory. These four components together allow for PowerShell Modules to define a set of actions, source all the dependencies and reference the directory holding all of the PowerShell cmdlets and dependencies. | <urn:uuid:bbd1c300-f89e-4640-9a85-7d3171cb6fca> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/new-skills/new-training-author-powershell-modules | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00705.warc.gz | en | 0.741108 | 347 | 2.546875 | 3 |
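As a minimal, hedged sketch (module, path, and function names are hypothetical), here is how those four components fit together:

```powershell
# --- MyModule/MyModule.psm1 (the code file) ---
function Get-Greeting {
    param([string]$Name = "world")
    "Hello, $Name!"
}
# Control exactly what the module exposes:
Export-ModuleMember -Function Get-Greeting

# --- Run once to generate the manifest (MyModule/MyModule.psd1) ---
New-ModuleManifest -Path ./MyModule/MyModule.psd1 `
    -RootModule 'MyModule.psm1' `
    -ModuleVersion '1.0.0' `
    -FunctionsToExport 'Get-Greeting'

# --- Usage: the directory name matches the module name ---
Import-Module ./MyModule
Get-Greeting -Name 'PowerShell'
```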
AI is a term we have heard with greater and greater frequency over the past several years. In actuality, the term "artificial intelligence" has been around for a long time: John McCarthy coined the name in 1955. Surprisingly enough, the most prominent example of AI in everyday life right now is the chatbot.
Artificial intelligence is a computer system that can perform complex tasks that would otherwise require human minds: tasks such as visual perception, speech recognition, decision-making, and translation between languages. There are three types of AI, though two of them exist only in theory. You won't find living examples of this theoretical artificial intelligence, or super chatbots, yet. Let's talk about these different types of AI, including the theoretical but sometimes frightening-to-think-about types.
First, there is AGI or "Artificial General Intelligence." This type of AI would be similar to the human mind, which is why it doesn’t exist yet. That is because we still don’t have a full grasp of how the human brain works, so to mimic it in AI seems like a far-off idea for the time being, though conceptually speaking, totally possible. When you think of AGI, or "Artificial General Intelligence" you could imagine the ‘I-Robot’ character Sonny, a lifelike humanoid bot.
Second, there is the highly theoretical and almost scary ASI, or "Artificial Super Intelligence." ASI would be capable not only of functioning at the human intelligence level but perhaps even of surpassing it. It would be more intelligent and able to do everything a human does better because it wouldn't have trauma or bias to deal with.
Thirdly, there is ANI or "Artificial Narrow Intelligence." This is the type of AI we are used to seeing and is in existence today. An example of ANI that nearly everyone recognizes would be Siri or Alexa. These two examples are just super-intelligent "chatbots" that are programmed to know a lot, and I mean A LOT, of information.
A programmer must teach this type of bot all this information though, hence the term "narrow" being used to describe it. This type of AI is only as intelligent as it is programmed to be because it requires the human behind it to predict possible questions and know how to answer. Also, knowing which types of questions and answers to program can make or break a chatbot.
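As a toy sketch of that "narrow" quality (every question and answer here is hypothetical), a bot like this can only answer what its programmer anticipated:

```python
# A deliberately "narrow" FAQ bot: it knows only what it was given.
FAQ = {
    "what are your hours": "We're open 9am-5pm, Monday through Friday.",
    "are you open": "Yes! Our current hours are 9am-5pm on weekdays.",
}

def answer(question: str) -> str:
    key = question.lower().strip("?! .")
    # Unanticipated phrasings fall through to a fallback reply:
    return FAQ.get(key, "Sorry, I don't know that one yet.")

print(answer("What are your hours?"))        # matched
print(answer("Do you deliver on Sundays?"))  # fallback
```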
DiRAD has developed incredible business "chatbots" that we refer to as IVA- Intelligent Virtual Assistants. They are like Siri or Alexa, except they are programmed for a business’s specific needs. Instead of answering "What is the weather going to be like today?" (a common Alexa question) a company could have IVA, built to answer their most frequently asked questions and MORE.
It wouldn’t just answer "What are your hours?" or "Are you open?", though. It could go as far as to schedule meetings for you. It could then input that meeting into your company calendar and be intelligent enough not to double-book you. A customer could ask if you have a certain product on hand. The IVA could check through your stock database to answer that question- all without human intervention.
IVA and chatbots are far from taking over humanity, conceptually speaking. It’s fun to imagine what the future may hold for this incredible technology, though.
About DiRAD Technologies:
DiRAD implements technology solutions including call center software, outsourced customer care, Interactive Voice Response (IVR), mass messaging, and AI-based, omnichannel chatbot & voicebot technology.
Published: Friday, July 23, 2021
To those who think Th!$ i$ Th3 b3$t w4y t0 wr!t3 a P@$$w0rd, think again. The National Institute of Standards and Technology (NIST) has recently published new guidelines on password security, revising the old rules and deeming them counterproductive to personal security purposes.
Paul Grassi, NIST Senior Standards and Technology Adviser, said in an interview with NPR, “The traditional guidance is actually producing passwords that are easy for bad guys and hard for legitimate users.”
Previously, the NIST password security guidelines suggested a combination of lower- and uppercase letters, numbers, and special characters to constitute a strong password. The author of said password primer published in 2003, Bill Burr, recently told The Wall Street Journal that he now disagrees with his original recommendation.
The update on the password guidelines contained within NIST Special Publication 800-63B (Digital Entity Guidelines) discusses the increased security risk of highly complex passwords. “Highly complex memorized secrets introduce a new potential vulnerability: they are less likely to be memorable, and it is more likely that they will be written down or stored electronically in an unsafe manner. While these practices are not necessarily vulnerable, statistically some methods of recording such secrets will be. This is an additional motivation not to require excessively long or complex memorized secrets.”
The guidelines no longer propose a mix of letters, numbers, and special characters. Instead, the publication suggests long phrases in English, typed entirely in lowercase letters.
Additionally, previous password security guidelines also indicated a change in password every 90 days, but the new rules seem to revoke this practice, as Engadget reports that NIST is recommending a password change only in the event of a security breach. The new guide also mentions that passwords need not expire for them to continue to maintain security.
MetroStar’s Director of Cybersecurity Clay Calvert analyzed the ratio of passphrases (a string of typical English words written in lowercase) and passwords (a combination of characters, including letters, numbers, and symbols) to compare their strength and determine the best approach to password security.
In Calvert's comparison table, the first column contains the number of words or letters; the second shows the number of possible combinations for 1 through 20 words in a passphrase, while the third shows the same for characters.
Calvert suggests that based on mathematical computations, a 12-character password is equivalent to the strength of a passphrase with at least five words. Similarly, to achieve the strength of a 20-character password, one would need an eight-word passphrase.
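As a hedged re-creation of that arithmetic (Calvert's exact alphabet and word-list sizes aren't stated, so this assumes 95 printable ASCII characters and a 7,776-word Diceware-style list; a larger word list shifts the results toward his five- and eight-word figures):

```python
import math

CHARSET_SIZE = 95     # assumed printable-ASCII password alphabet
WORDLIST_SIZE = 7776  # assumed Diceware-style passphrase dictionary

def password_combinations(length: int) -> int:
    return CHARSET_SIZE ** length

def equivalent_words(password_length: int) -> float:
    # Solve WORDLIST_SIZE ** w == CHARSET_SIZE ** password_length for w
    return password_length * math.log(CHARSET_SIZE) / math.log(WORDLIST_SIZE)

print(f"{password_combinations(12):.3e} combinations")  # ~5.404e+23
print(f"{equivalent_words(12):.1f} words")              # ~6.1 words for 12 chars
print(f"{equivalent_words(20):.1f} words")              # ~10.2 words for 20 chars
```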
"In short, just using passphrases alone could be a fine alternative to using passwords, but there will be a lot more typing every time authentication is needed,” Calvert notes. “I would recommend using at least five words in a passphrase. Including non-alphanumeric characters—that are easy for you to remember, of course—makes it much harder for threats to guess.”
In concurrence with NIST's new recommendations, Calvert comments that not requiring a regular password change may ultimately be beneficial to the organization. "One agency I worked for had over 9,000 tickets a year just to reset forgotten passwords. The cost associated with lost productivity and IT support time could be measured in millions of dollars. Not only are regular user accounts impacted, but service accounts running on servers are required to be changed with the same frequency. Many times, there has been a self-inflicted denial of service in the name of security."
In addition to password security, Calvert urges agencies and other organizations to remember the three tenets of information security: confidentiality, integrity, and availability.
Calvert concludes, “Availability often takes a backseat to the other two, but this new recommendation from NIST—making password creation easier for users—changes that.”
Racine Anne Castro
What is My Primary DNS?
A domain name system (DNS) is a system that helps translate the human-friendly domain name you enter into an IP address so devices can communicate. The DNS uses a hierarchical naming scheme to give names and locations of all webpages on the internet.
Primary DNS is a DNS server that you can use to host your own domain or subdomain. Primary and Secondary nameservers work together in order for users to access websites hosted on the internet. When hosting your own domain, it’s best practice to have at least two name servers: one primary and one secondary.
Primary DNS or Secondary DNS
A primary DNS server houses the domain's original DNS zone files, which means it can provide name resolution for the domain itself. The secondary DNS server, on the other hand, contains an exact copy of what is on the primary, so you will always have a backup if something goes wrong with the primary.
A secondary DNS server is a backup for the primary, holding the same exact zone files. If something prevents resolution between users and the primary, the secondary automatically takes over answering name resolution requests from clients until you can troubleshoot whatever went wrong on the primary server.
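For a sense of what the primary actually holds, here is a hedged sketch of a zone file for a hypothetical domain (all names and addresses are documentation placeholders); a secondary pulls a fresh copy via zone transfer whenever the serial number increases:

```
; Sketch of a primary DNS zone file for example.com
$ORIGIN example.com.
$TTL 3600
@   IN SOA ns1.example.com. admin.example.com. (
        2024010101 ; serial - secondaries use this to detect changes
        7200       ; refresh
        900        ; retry
        1209600    ; expire
        3600 )     ; minimum TTL
    IN NS  ns1.example.com.   ; primary name server
    IN NS  ns2.example.com.   ; secondary (holds a copy via zone transfer)
ns1 IN A   192.0.2.10
ns2 IN A   192.0.2.11
@   IN A   192.0.2.20         ; the website itself
```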
How does DNS work?
When you type a domain name into a browser, your device asks a recursive resolver to translate it into an IP address. The resolver works down the DNS hierarchy, querying a root server, then the top-level domain (TLD) server, then the domain's authoritative name servers (your primary or secondary), until it gets an answer, which it caches and returns to your device. This hierarchical lookup is what lets human-friendly names map to the IP addresses devices actually use to communicate.
How do I find my primary DNS?
If you are using a Microsoft Windows machine, the easiest way to find your primary DNS server is through the command line.
Open Command Prompt as an administrator. In the window that opens, type "ipconfig /all" and press Enter. This will list all of the information about your internet connection. Look for "DNS Servers" and copy down the numbers next to it; this is what you will use as the primary DNS server when setting up primary nameservers in WHM later on.
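Equivalent commands on other platforms, for reference (exact output varies by OS version):

```
:: Windows (Command Prompt): look for the "DNS Servers" line(s)
ipconfig /all

# Linux: "nameserver" lines list the DNS servers in use
cat /etc/resolv.conf

# macOS
scutil --dns | grep nameserver

# Any platform: nslookup prints the server it used at the top of its output
nslookup example.com
```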
Is Primary DNS the same as IP address?
Primary DNS is not the same as your IP address. The primary DNS server of a domain name or subdomain does not change unless you manually make changes to it, whereas an IP address can be changed at any time by users.
Do I Need a Primary Name Server?
It’s important to have both a primary and secondary name server when hosting a website. This is because if the primary name server goes down, the secondary will take over until it can be restored or fixed. Having both servers also adds redundancy to your DNS system in case there are any issues with traffic. Without this backup, you would likely lose users and have diminished performance from people being unable to reach your website.
If you are using AWS Route 53 for your name servers, then you can use it in conjunction with other domain registrars to host your own DNS system. This is great if one day down the road you want to move away from Amazon when hosting nameservers because this allows users to mix and match different providers instead of being locked into one.
What is the Best Primary DNS Service?
Having a primary name server that you can easily access and update whenever necessary is very important when it comes to hosting your own domain names. Some of the best primary DNS services include:
• Amazon Route 53 (AWS) – This allows users an easy way to manage their DNS along with being able to mix and match other providers as well.
• Cloudflare (CF) – Although these DNS servers tend to be a little more expensive than others, they also provide an easy way for users to access their zone files and modify them whenever necessary. They are also very fast in comparison to most other name servers out there.
• Google Cloud DNS (GCD) – These DNS servers are known for being very fast and efficient. They also offer an easy way to manage your zone files without having to worry about other technical aspects of hosting a website or domain names.
Primary DNS servers are best used when hosting your own domain names to give you the most control over how the system works. It’s important to make sure that these servers are always up and running for users who might be trying to access your website or services because they will not have any resolution if there is a problem. | <urn:uuid:0d7cefe0-8e88-4848-9bdb-1f7414195f18> | CC-MAIN-2022-40 | https://gigmocha.com/what-is-my-primary-dns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00705.warc.gz | en | 0.924501 | 982 | 3.328125 | 3 |
Arabic text-to-speech engines have taken longer to reach full maturity than engines for other languages, such as English.
That is quite odd given the fact that there are 420 million Arabic speakers globally, which clearly shows there is a need for it.
Why has the Arabic language taken that time?
Simply put, the language is unique and more complex, so it requires more time, along with heavy R&D investment in AI and NLP, to develop and train an engine — which IST has been doing via a team of specialized linguists and data scientists. In today's blog, we explore the top three unique characteristics of the Arabic language.
1. Arabic language word functionality and structure
Diacritics and vowel marks are the only way to determine how Arabic words and sentences are pronounced in their intended form. Contextual variation is introduced by the supplementary diacritics (tashkīl ⟨تَشْكِيل⟩, ḥarakāt ⟨حَرَكَات⟩, and i'jam ⟨إِعْجَام⟩), so the writer's input is crucial for matching the intended context. The main challenge, then, is humanizing the AI to overcome this ambiguity.
Here are examples of common ambiguities and their significance in Arabic word formation, compiled by our linguistics team.
| Ambiguity type | Alternative 1 | Alternative 2 |
|---|---|---|
| Active / Passive | to book حَجَزَ | was booked حُجِزَ |
| Gemination | to do فَعَلَ | to activate فَعَّلَ |
| Dual / Plural | the 2 artists اَلفَنَانَيْنِ | the artists اَلفَنَانِينَ |
| Noun / Verb | to charge شَحَنَ | charging شَحْن |
2. The Arabic language is word-order based
Typically, Arabic is written without diacritization (tashkīl ⟨تَشْكِيل⟩), so the reader vowelizes each word according to the context of the sentence. The same consonant skeleton can therefore be read in several ways:
عَقَدَ ‘to tie’
عَقُدَ ‘became knotty’
شَاهَدَ ‘to watch’
خَبَّرَ ‘to tell’
خَبَرٌ ‘a piece of news’
خَبَرَ ‘to know’
خَبُرَ ‘to become an expert’
خَبِرٌ ‘familiar with’
خَبَرَ ‘to plough’
3. Arabic is a highly inflectional language
Whole phrases can collapse into a single Arabic word thanks to the abundance of prefixes, suffixes, and attached pronouns. In fact, Arabic allows up to three consecutive prepositions to precede a word; for example, the word وبالوالدين (and with the parents) contains three prefixes: و, ب, and ال.
Arabic suffixes behave much like prefixes in that there is no systematic rule for their attachment to words. There are 15 Arabic suffixes, most of them made of attachable pronouns (Sembok et al., 2011a).
Conquering those 3 mistakes with Nūn TTS
By combining the power of AI, machine learning, and natural language processing, our team of linguists, together with IST's AI programmers, designed Nūn as an accurate, human-sounding, conversation-driven Arabic TTS engine.
Here is how
- Using voice segment concatenation techniques to generate optimized human-sounding Arabic words.
- Creating a contextual NLP Engine, to understand the context of which the words are to know what is the correct pronunciation and intended meaning of the word.
- Humanizing the AI to automatically diacritize the Arabic input with preprocessing techniques.
- Manual optimization of every word and letter, which took up to 60 years of man-hours, to build one of the leading Arabic TTS AIs in the region.
3 different Nūn TTS solutions focused on an excellent customer experience
Bring your IVR to life with dynamic data: interact with your clients through a natural, real-time, human-like experience that makes your brand more trustworthy, with packages specially tailored to your industry's tone of voice.
An on-demand, simple interface that can be used both in the cloud and offline, generating the most complex, natural-sounding Arabic voices in different audio formats, with support for 20+ languages.
Get all your human-sounding voice prompts instantly and perfectly tuned, with the support of the biggest team of linguists and data scientists in the region.
Check our Soundcloud channel for Nūn Text-To-Speech samples | <urn:uuid:b3796d5c-d0ca-4bf0-97f7-034faaeeeecc> | CC-MAIN-2022-40 | https://www.istnetworks.com/blog/the-3-unique-characteristics-of-arabic-tts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00105.warc.gz | en | 0.882436 | 1,243 | 2.515625 | 3 |
Unless you’re an expert, there’s little difference between the Internet of Things (IoT) and the Internet of Everything (IoE). However, the latter term is broader, semantically. In this post, we’ll go into the details to explain why IoT software development companies use the term IoE comparatively rarely.
The term IoT was coined in 1999 to refer to machine-to-machine, or M2M, communication. IoE appeared a few years later, to describe interrelated elements of a whole system, including people. IoE entails not only M2M communication but also P2M (people-to-machine) and even P2P (people-to-people) communication.
To understand the differences between the three types of communication, let’s consider several examples. Say it got dark outside and you turned on a light in the office, then you sat and typed on a keyboard. This scenario provides P2M examples of IoE.
We are so used to these things that we don’t even realize they are part of a system. Another example: You make a Skype call to your colleague. That’s a simple human-to-human, or P2P, communication. An example of M2M communication, on the other hand, is the process of data exchange between your office temperature sensing devices and the HVAC mainframe.
You might think M2M communication, being the most technological, is the most progressive means of interaction, but IoE treats P2M and P2P interactions as the most valuable. According to a Cisco analysis, as of 2022, 55% of connections will be of these two types.
IoE is now considered the next stage of IoT development. Maybe this is why there are so few IoT development companies offering IoE development services at the moment. Internet of Things solutions are now more common and widespread.
4 Main Elements of the IoE Concept
By thing, we mean an element of the system that participates in communication. A thing is an object capable of gathering information and sharing it with other elements of the system. The number of such connected devices, according to Cisco, will exceed 50 billion by 2020.
What are things? In the IoT, a thing could be any object, from a smart gadget to a building rig. In the IoE, that expands to include, say, a nurse, as well as an MRI machine and a “smart” eyedropper. Any element that has a built-in sensing system and is connected on a network can be a part of the IoE.
People play a central role in the IoE concept, as without them there would be no linking bridge, no intelligent connection. It is people who connect the Internet of Things, analyze the received data and make data-driven decisions based on the statistics. People are at the center of M2M, P2M, P2P communications. People can also become connected themselves, for example, nurses working together in a healthcare center.
In 2020, it’s projected that everyone using the internet will be receiving up to 1.7 MB of data per second.
As the amount of data available to us grows, management of all that information becomes more complicated. But it’s a crucial task because, without proper analysis, data is useless. Data is a constituent of both IoT and IoE. But it turns into beneficial insights only in the Internet of Everything. Otherwise, it’s just filling up memory storage.
Process is the component innate to IoE. This is how all the other elements — people, things, data — work together to provide a smart, viable system. When all the elements are properly interconnected, each element receives the needed data and transfers it on to the next receiver. The magic takes place through wired or wireless connections.
Another way to explain this is that IoT describes a network and things, while IoE describes a network, things, and also people, data, and process.
Where Is IoE Applied?
As to the market, we can say confidently that IoT is a technology of any industry. IoE technology is especially relevant to some of the most important fields, including (1) manufacturing, (2) retail, (3) information, (4) finance & insurance, (5) healthcare.
IoE technology has virtually unlimited possibilities. Here's one example: more than 800 bicyclists die in traffic crashes around the world annually. What if there were a way to connect bike helmets with traffic lights, ambulances, and the hospital ecosystem in a single IoE? Would that increase the chances of survival for at least some of those cyclists?
Another example: Do you realize how much food goes to waste, say at large supermarkets, because food isn’t purchased by its best-before date? Some perishable products like fruit and vegetables are thrown away due to overstocks even before they get to the market. What happens if you find a way to connect your food stocks with the racks and forklifts of the supermarket in-stock control system using IoE?
There are endless variations on uses of IoE right now, and many of them are already becoming familiar in our “smart” homes.
In our industry, few would deny the value of IoE in improving our standard of living. Luckily, there’s a flourishing market of IoT development services. Who knows, maybe one day soon, you’ll be a “thing” in the IoE environment. | <urn:uuid:61f6a528-2913-45f7-9576-f76c95248395> | CC-MAIN-2022-40 | https://www.iotforall.com/ioe-vs-iot | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00105.warc.gz | en | 0.946243 | 1,165 | 3.0625 | 3 |
Topics covered in this Blog:
The domain of Access control and Time-Attendance got a new definition with the introduction of Face Recognition technology. With FR technology, the efficiency of security and surveillance has seen great improvement.
But as is human nature, attaching 'unreal' claims to anything that is little known is quite common. In such scenarios, we need to ask ourselves, 'Is everything that is being said true?' and make an effort to understand the actual reality behind it.
In spite of all the advantages it presents, the utilization of FR technology seems to be considered problematic and sometimes controversial. But is this in any way true?
Through this blog, we are trying to make an effort to debunk myths surrounding Face Recognition technology.
Some of the Myths around Face Recognition Technology
Facial Recognition identifies everyone
It is NOT possible for the system to recognize just anyone's face. If a person is identified, it means his or her face was enrolled in the system. There is NO possibility of the face recognition system randomly recognizing someone.
Face Recognition data can be hacked or shared
It is next to impossible for facial recognition data to be hacked, intercepted, or shared in any useful way. The reason is that face recognition data is stored as a string corresponding to data points extracted from the face, and it is not possible to get back to the face by simply reverse-engineering these data points. Thus, it becomes very difficult to exploit a database containing this information.
Face Recognition invades privacy
A huge myth behind FR technology is that once you are registered on one FR database, your information is automatically linked to all such databases.
Let’s make it clear once and for all! There is NO interconnection between different facial recognition solutions, mostly because they use different technical approaches or because their purposes simply don’t match.
Face Recognition doesn’t identify a person as they age
One of the myths making the rounds is that as people age, their faces change, which in turn causes the system to fail to identify them.
Facial features do change with time, but this is not much of an issue. The reason is the regularity of the person’s identification: each successful match keeps the system informed about gradual changes in the face, and the system updates its database accordingly.
Face Recognition doesn’t identify in the case of face masks
This is another myth that needs busting, and it brings us to an important question: does a face mask prevent face identification?
Face recognition solutions use algorithms to understand prominent facial features, and the same holds true with face masks. If enough features remain visible, the system can still recognize the individual even with the mask on. That said, removing the mask is recommended for the most accurate identification.
Face Recognition Technology is quite expensive!
This is not true at all! Considering its effectiveness, a face recognition system costs considerably less than the most secure credentials, such as retina or palm-vein scanning, while offering an almost similar level of security.
Compared with other credentials, the face is among the most popular! The required grade of security is the main driver of cost: the higher the security needed, the more precise the face recognition system must be, which in turn increases the cost. The face recognition systems used in banks are definitely at the higher end of the cost spectrum.
Face recognition used in organizations has affordable costs and provides a high level of security.
Even in the modern world, the rise of new technology from the science community can give rise to ‘superstitions’ among ordinary people! These myths stem either from fear or from ideas that lie beyond familiar beliefs.
The most important point to note is that genuine worries regarding privacy should not be ignored. That being said, concerns that go beyond realistic technological limitations should not become the main focus. Making ourselves aware of Face Recognition technology and how it works will not just remove unwanted worries; it will help us utilize the technology to improve the security around us!
Connect with our experts at email@example.com to understand this great technology and how would it help you reduce most of your security concerns! | <urn:uuid:6e57e426-ba6e-4d6b-91d1-25f0004d12c6> | CC-MAIN-2022-40 | https://www.matrixaccesscontrol.com/blog/busting-myth-about-face-recognition-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00105.warc.gz | en | 0.941393 | 947 | 2.671875 | 3 |
The complete set of commands and processes by which a computer operates cannot be covered in a day, nor remembered in one sitting. Realistically, the know-how that sticks is the essential part: the user-friendly commands that let the person behind the keyboard interact properly with the computer.
Computer operating systems such as Linux and Windows offer a wide variety of benefits, especially in maximizing the capabilities of a computer and its installed software. Speed and reliability are among the qualities every computer owner wants, and achieving them requires proper identification and research of the relevant references.
Granted, most people will not spend time studying every aspect of a computer system. However, there are instances when an accidental discovery made while exploring the operating system and its resources ignites interest, pushing a person to explore the issue further and perhaps to look at other benefits that operating systems provide but that are not given much attention.
[tags]windows, linux, secrets, system hints, system resources[/tags] | <urn:uuid:a4121c32-e1a6-4dea-920f-e37401e33b4f> | CC-MAIN-2022-40 | https://www.it-security-blog.com/tag/secrets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00105.warc.gz | en | 0.967277 | 217 | 2.859375 | 3 |
Border Gateway Protocol (BGP) is a core routing protocol used by most of the Internet Service Providers (ISPs). BGP’s role is to exchange routing and reachability information between autonomous systems (ASes) on the Internet. An AS can be an ISP, a university or the entire corporate network. Each AS is represented by a unique number called an AS number (ASN). The set of ASes along the path between two Autonomous Systems on the Internet is called BGP AS_PATH. This is one of the attributes that is evaluated in the BGP best path selection process.
BGP AS PATH Attribute for Network Path Visibility
BGP routing information provides full Internet path visibility. With a simple check of a routing table, network operators can determine the source and target ASes and all transit ASes through which the packet moves on its way to the destination. When a BGP router sends out an update to a neighbor in a different AS (i.e., an external or eBGP neighbor), it adds its own AS number to the front (left side) of the AS path. So the AS path lists all the ASes that need to be traversed to reach the location where the prefix that the path is attached to is advertised from. Let’s check how the AS_PATH attribute is built when a prefix 192.0.2.0/24 originated on the router AS1 is sent in a BGP update message and received by the router AS5 (Picture 1).
Picture 1: Network Topology with 5 eBGP Neighbors
The BGP router in AS1 sends a BGP update message to its eBGP neighbor with its own AS number (ASN) 1. The neighbor in AS2 adds its ASN 2 to the front (left) side of the AS path in the BGP update. The AS_PATH attribute is now 2 1. The neighbor in AS3 prepends the AS_PATH with its own ASN 3. The AS_PATH is now 3 2 1. And again, when the router AS4 receives the BGP update message with the AS_PATH 3 2 1 from AS3, it adds its ASN 4 to the front of the AS_PATH. The AS_PATH attribute received by the router AS5 in a BGP update from the peer AS4 is 4 3 2 1 for NLRI 192.0.2.0/24 (Picture 2).
Picture 2: AS_PATH for NLRI 192.0.2.0/24 on AS5
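The prepending behavior is easy to see in a toy simulation. The short Python sketch below is purely illustrative (it is not a BGP implementation); it simply mimics how each eBGP speaker adds its own ASN to the front of the path before passing the update on.

def advertise(as_path, local_asn):
    # The local ASN is prepended to the front (left side) of the path.
    return [local_asn] + as_path

path = []                    # AS1 originates the prefix with an empty path
for asn in (1, 2, 3, 4):     # the update traverses AS1 -> AS2 -> AS3 -> AS4
    path = advertise(path, asn)

print(path)  # [4, 3, 2, 1] -- the AS_PATH received by AS5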
NetFlow and Network Path Utilization
BGP does a great job of providing visibility into network paths, so we have a clear picture of how traffic is forwarded between ASes. However, BGP alone says nothing about how these paths are utilized. NetFlow, on the other hand, can report how much traffic is traversing the paths in real time. It provides complete traffic statistics, including Layer 2 (VLAN headers, MAC addresses), Layer 3 (IP addresses, protocol) and Layer 4 (TCP/UDP ports) information, timestamps, VRF IDs, etc. The nature of NetFlow makes it a valuable tool for investigating inbound traffic for certain pattern matching. As we explained in our previous blog posts, NetFlow analysis plays an important part in the detection of DDoS amplification attacks, as well as web application and SSH compromise attacks.
BGP Support in NetFlow
Although NetFlow reports the amount of traffic on any given path, its ability to report on how the traffic gets into the AS is limited. As a matter of fact, it depends largely on the BGP support available in the different NetFlow versions.
BGP in NetFlow v5
NetFlow v5 reports the source and destination ASes, peer ASes and the BGP next-hop. Let’s explain it using the network topology depicted in Picture 1. The AS3 router is configured with the legacy NetFlow v5 in the ingress direction on the interface Gi0/0. Traffic is sent from AS5 to AS1, and AS collection is included in the NetFlow export with the option origin-as. In this case, NetFlow reports ASN5 as the source and ASN1 as the destination AS, with the BGP next-hop (the peering address toward AS2, shown here as the illustrative 198.51.100.2). If the option peer-as is used instead of origin-as, ASN4 and ASN2 are exported instead, along with the same BGP next-hop. In both cases, only origin or peer ASN information is exported in flows.
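On classic Cisco IOS, the choice between the two modes is made on the export command. A minimal, hypothetical configuration sketch (the interface name is illustrative; the commands follow standard IOS syntax, so adjust to your platform):

interface GigabitEthernet0/0
 ip flow ingress
!
ip flow-export version 5 origin-as
! or, alternatively: ip flow-export version 5 peer-as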
BGP in NetFlow v9
NetFlow v9 allows us to collect both origin-as and peer-as information simultaneously, as you can see from the last four lines of the flow record configuration below. If traffic is sent from AS5 to AS1, NetFlow reports ASN5 and ASN1 as the source and destination ASes and ASN4 and ASN2 as the peer ASes, with the BGP next-hop 198.51.100.2 (Picture 3). Thanks to this, network operators can plan outbound traffic accordingly, carefully selecting an appropriate exit point. For instance, they can increase the weight (Cisco only) or LOCAL_PREF on a per-neighbor or per-prefix basis to prefer a certain exit router over the others; see the route-map sketch after the configuration below.
flow record BGP-record
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match interface input
 match ipv4 protocol
 collect counter packets
 collect counter bytes
 collect timestamp sys-uptime first
 collect timestamp sys-uptime last
 collect routing next-hop address ipv4 bgp
 collect routing source as
 collect routing destination as
 collect routing source as peer
 collect routing destination as peer
!
flow monitor BGP-monitor
 record BGP-record
Picture 3: NetFlow v9 Record with Both Origin-as and Peers-as Information
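As a side note on the traffic-engineering point above: raising LOCAL_PREF for routes learned from a preferred exit is typically done with a route-map. The following is a minimal, hypothetical IOS sketch; the neighbor address, route-map name and preference value are illustrative, not taken from the article’s topology:

route-map PREFER-EXIT permit 10
 set local-preference 200
!
router bgp 5
 neighbor 198.51.100.4 route-map PREFER-EXIT in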
While the BGP implementation in NetFlow v9 provides higher AS path visibility compared to the legacy v5, it is still limited to a partial AS view. For instance, if we add AS6 to the topology depicted in Picture 1, between the routers AS4 and AS5, NetFlow configured on AS3 will not report AS6. In order to provide full BGP path visibility and path utilization, BGP must be bundled with NetFlow.
BGP gives us the ability to understand how network traffic is forwarded between ASes on the Internet. BGP in conjunction with NetFlow provides information about the type and amount of traffic on the paths interconnecting ASes. However, this is only possible if BGP attributes, such as AS_PATH, are extracted from the BGP table and correlated with NetFlow records.
Anti-censorship refers to methods of combating censorship – for example, preventing search results from being blocked or interfered with. The growth of online platforms (Facebook, Instagram, Twitter, etc.) raises important questions about how modern technology and communication are consumed. Today, the majority of people follow the news, stay in touch with friends and family, and share views on current events through social media and similar online platforms. How those platforms filter and manipulate what is viewed by different people may be construed as censorship.
Other posts on Twitter and Facebook are now being flagged as “questionable” or “misleading”, which is a form of censorship as well.
Without wading into the hotly contested debates over the moral ethics of censorship and anti-censorship, CyberHoot believes you should know that algorithms and technology are being developed to filter and censor free speech online today. From deepfakes to disinformation campaigns, the Internet is overflowing with opinions and counter-opinions. Becoming more aware of the subtle influences on your media consumption will help you recognize what may be shaping the information you see.
A look at the world of computer forensics
When a client’s IT systems are impacted by a security incident, cyber investigators (or digital forensics investigators) track down the hackers responsible. This profession is still fairly new, but sure to grow as more and more of our day-to-day activities take place in cyberspace.
As it stands, many small and medium businesses that fall victim to a cyberattack aren’t sure where to turn. Digital forensics investigators do important work, but few people are aware of it.
Do you know what to do if your data is held hostage by a nefarious cybercriminal, or even if you suspect an internal breach?
The fascinating field of digital forensics has its own set of standard procedures. The key is to rely on experts and take certain precautions. Let us walk you through the basic elements of an investigation to help you protect your company in the event of a dispute.
Authorizing an investigation
Of course, an investigation can’t begin without official authorization, which will be incorporated in the contract signed between the victim company and digital forensics specialist or firm.
If the incident has legal implications, court approval is required before data can be collected. The scope of the investigation will generally also be determined by the court, based on a few different factors.
If the case concerns an organization’s human resources, for example, either upper management or the internal legal team will give the green light.
Authorization has been granted and the investigation is underway. So what’s the next step? What are we looking for? Earlier, we talked about “tracking down” hackers. That image is more accurate than you might think! Even the smallest move in the digital world leaves a trace.
In the case of an intrusion, theft or leak of sensitive information, the investigator looks for clues in a few different places. The computer’s hard drive, of course, will contain information that can be used as evidence. RAM and browsing history also come to mind.
There are endless directions to explore in such a hyperconnected world. The call history for a landline or mobile phone, a printer’s job record, network equipment settings or computer system logs might be useful.
Online services like Facebook, Snapchat, Instagram, Twitter and LinkedIn save data and metadata concerning communications, locations and movements, which can then be linked to the activities of individuals in the real world.
You might be amazed by the sheer amount of evidence stored on the electronic devices you use, but this volatile data can be erased.
RAM and server audit logs, for instance, are regularly refreshed. As soon as you suspect that an investigation may be required, there are three main reasons to take protective measures:
- Preventing data from being altered or erased
- Guaranteeing the replicability of analyses and processes
- Ensuring the receivability of evidence in court
In most cases, it’s enough to save a copy of the data stored on an affected device. There are a variety of specialized tools that perform this operation without altering data at rest.
Duplicating the evidence enables investigators to work at their own pace in a lab environment, without impacting the IT ecosystem.
In some cases, however, the computers in question may need to be powered off until the problem is resolved.
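To give a concrete flavor of how the integrity of a duplicate is preserved, here is a minimal illustrative Python sketch (not Forensik’s actual tooling) of verifying that a forensic copy matches the original by comparing cryptographic hashes; the file paths are hypothetical.

import hashlib

def sha256_of(path):
    # Hash a disk image in chunks so large files fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

# The working copy is only usable as evidence if its hash matches
# the hash recorded for the original media at acquisition time.
original = sha256_of("evidence/disk.img")   # hypothetical paths
duplicate = sha256_of("workcopy/disk.img")
assert original == duplicate, "Working copy does not match the evidence!"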
It should be clear by now that relying on proven methods is essential for a cyber investigation. That’s why all of Forensik’s experts are fully certified and use only trusted tools.
In the wake of a security incident, the slightest manipulation can jeopardize your entire investigation. Don’t take any risks! Make sure you haven’t missed anything by getting in touch with a specialist.
In the meantime, follow our advice to perform an information security health check for your company. You’ll never have to worry about an investigation process!
Navigating through the changing world of IT can be extremely confusing, but having an effective IT management team to help develop a strategy for you helps in keeping your organization afloat. An IT steering committee is the best way to plan for the company’s IT needs while ensuring that everything still aligns with your goals.
So what exactly is an IT steering committee, and how can it help with your projects? An IT steering committee is made up of stakeholders who decide on the organization’s IT priorities and on matters related to the management of different IT projects. It is mainly responsible for providing strategic direction for the team and supporting projects that require the opinion of IT experts.
IT Steering Committee: How They Can Help With Your Projects
Steering committees are the group of people that make all the decisions regarding the priorities of a business and how to manage these operations. These groups are common for most organizations, but having one that’s specific to information technology might be something new for the company. Here’s everything to know about IT steering committees before you make one:
What’s An IT Steering Committee?
According to a 2012 study, almost 80% of IT organizations have steering committees that help them align their IT goals and objectives with the business’ needs and priorities. The main reason why many companies have started developing steering committees specific to IT is to bring both IT experts and business stakeholders together and start making decisions that affect both aspects of running the organization.
IT steering committees also promote teamwork between the IT and business teams of the organization, since they’re given the duty and authority to make strategic decisions. These committees are usually made up of departmental heads and business executives who don’t actually perform the work itself but are responsible for ensuring that the project is completed on time, on budget, and with the best methods.
What Do They Do?
IT steering committees have different goals that depend on the needs of the organization, but their main goal is to provide strategic directions and decisions regarding IT projects. They’re also concerned with discussing how different IT services and practices apply to non-IT business needs, like cybersecurity and disaster recovery plans.
Here’s a more detailed list of what other things IT steering committees are responsible for:
- Become advocates for different IT-related projects and initiatives within the company
- Set the strategic direction of specific projects that involve information technology
- Establish the goals and scopes of different projects and provide metrics to measure the project’s success
- Evaluate project plans and decide whether to approve or reject proposed changes to various project plans
- Select the right project managers and industry experts to provide support and guidance to the project
- Monitor the project processes and plans to ensure that everything goes smoothly
- Resolve conflicts between involved parties that are working together for the project
- Come up with strategies that help solve the problems and challenges faced during the duration of the project
- Provide expert advice or input to concerns and issues related to the projects at hand or the overall growth of the business
- Create policies and regulations related to information technology within the organization
- Monitor the status of the project and make changes as necessary
- Identify and eliminate different risks that might affect the project and the entire business
How To Organize the Best IT Steering Committee
While IT steering committees greatly improve project management in an organization, there are also challenges that the company faces when establishing this kind of committee, like the difficulty of handling new people with different personalities, an increase in the number of meetings required, and the possibility of the members only being concerned about their interests.
By knowing these challenges, team leaders can make an informed decision when it comes to creating the best IT steering committee for the organization. Here are a few essential steps when organizing an effective IT steering committee:
- Picking the Right People – There are lots of factors to consider when creating an IT steering committee. Aside from their skills and expertise, it’s also important to pick someone who can work in a team. There’s also the individual’s position within the organization – concerned departments in the organization should have a representative that has the appropriate decision-making authority.
- Providing Training and Coaching – Some members of the IT steering committee may be unfamiliar with how it works, so the leader must provide appropriate training and coaching to help them ease into their duties and responsibilities.
- Discussing Everything About the Project – Regardless of an individual’s experience in serving as part of a steering committee, each of them should understand the plan, purpose, description, and scope of the project. They should receive all necessary information about the project before joining the committee or attending a meeting.
- Keeping a Manageable Size – Having too many opinions on the table makes it difficult for organizations to reach a conclusion, so it’s important to only keep a manageable size for the IT steering committee. It should only be large enough to represent the important departments of the organization but not too big that it affects everyone’s efficiency.
- Having a Liaison Between the IT Steering Committee and Project Manager – Most companies appoint the project manager as a member of the IT steering committee to help minimize confusion and misunderstandings about concerns and decisions. It’s also a great way to disperse information uniformly across all concerned parties.
Abacus: Your Partner In IT Project Management
IT steering committees are extremely useful for improving project management in an organization. If you’re looking for a reliable team of highly experienced engineers and support personnel for all your IT needs, count on Abacus to provide IT-related services and products at the right price. We work hard to keep businesses up and running and to make operations smoother, faster, and more secure.
Call us today to find out which of our fixed-cost business support IT plans work for you. | <urn:uuid:5403faef-9931-4aa1-b1ab-08168029f739> | CC-MAIN-2022-40 | https://goabacus.com/abacus-services-it-steering-committee/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00306.warc.gz | en | 0.943197 | 1,206 | 2.671875 | 3 |
An Analyzer report is a collection of fields and filters that is displayed in a specific report format. You can think of a report as a file, like a spreadsheet file, except that when you open a report or make a change, the report connects to your database so it displays the latest data. Reports are stored in a repository, so you can access reports from any computer.
When you create a new Analyzer report from the ground up, you select a data source first. The data source determines which fields will be available when you build your report. For example, if you choose orders as your analysis area, you most likely will see all fields related to orders.
Where does the data come from?
Pentaho Analyzer leverages OLAP technology and multi-dimensional query expressions (MDX) to dynamically retrieve data from relational databases (RDBMS). Analyzer is most often used to query data in an organization’s data warehouse, which generally consolidates data from multiple source systems into a common place for information analysis and reporting.
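For readers unfamiliar with MDX, the following is a minimal illustrative query of the kind Analyzer generates behind the scenes; the cube and field names are hypothetical, not taken from your own data source:

SELECT
  [Measures].[Sales Revenue] ON COLUMNS,
  [Product].[Product Line].Members ON ROWS
FROM [SalesCube]
WHERE ([Time].[Years].[2006])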
Pentaho Data Integration (PDI) is a popular tool used for building and populating data warehouses and usable data models. PDI can load data from applications, databases, and spreadsheets within your company, as well as from external and public data sources. Tools like PDI are typically managed by your system administrator.
Fields in Analyzer
Examples of fields include 'Sales Revenue', 'Profit Margin', 'Product Name', 'Region Name', and 'Fiscal Year'. Fields are what define the content of your report.
Types of fields in Analyzer
The following types of fields are available:
Fields such as names, types, and categories are most often text-based. For example, if you were working for an athletic equipment vendor, you would use the Product Line level in your reports. In this level, you might have 'Snow Sports' and 'Cycling' as possible values for the Product Line field. These individual values are often referred to as members of that level.
Time Period Fields
While Time fields are technically Level fields as well, Time fields are critical to nearly all reports and are often regarded as their own category of fields. Time period fields such as 'Fiscal Year' and 'Order Month' are commonly used in reports. Possible values for those fields could be '2004' and 'Jan-2006', respectively.
Measure fields are numeric and most often represent business metrics. These types of fields are designed for mathematical activities such as summing, dividing, and creating averages. 'Sales Revenue' and 'Profit Margin' are examples of measures.
In Analyzer reports, fields are color-coded by type in both the report and the Layout panel. The colors are assigned as follows:
- Levels, including Time Period fields, default to a yellow background.
- Measures default to a blue background.
You can create a report without any knowledge of field types, but knowing how field types work can sometimes help you understand how different charts display data and how filters work together.
About field hierarchies
Some level fields (time periods, names, types, categories, etc.) belong to field hierarchies. Here are two examples of field hierarchies:
- Product Line >>Product Name
- Year >>Quarter >>Month >> Week >> Day
The field hierarchies help you in two primary ways.
First, it provides a quick and easy way to drill into more details on a report:
- When you click on a level field on the report, such as Fiscal Quarter, and then click Also Show from the context menu, all these fields will be available for selection if the field is part of a hierarchy.
- When you click on a level field value on the report, such as the year '2007', the context menu displays the option Keep Only 2007 And Show Quarters.
Second, when creating a filter, field hierarchies narrow down the list of available values. For example, if you have a filter Product Line='Snow Sports', then the list of possible choices when you filter Product Names are limited to the products that are part of the Snow Sports product line.
Additionally, field hierarchies sometimes control how fields are placed on the report. For example, fields from the same field hierarchy need to be placed on the same axis (row/column) and the report will automatically enforce this rule as you move and arrange your fields.
View the definition of a field
In the Layout panel or the report, right-click the field name.
Click Tell me about from the menu. The About dialog box displays for the field.
View the following information about the field:
- Display Name: The name of the field as it appears in the Available Fields list and your report. If you renamed this field in the report, a notification with the original name will display below. If you are assigned the Manage Data Sources permission, you can edit the name for this field. The edited name will display in the Available Fields list, as well as in the Layout panel and the Report pane, unless you have renamed the field. Renaming a field within the Layout panel or the report pane will not affect the display name of the field in the Available Fields list.
- Type: The type of field, such as level, time, or measure.
- Description: The description of the field, if any.
- MDX: The formula for the level or field as an MDX statement.
- Member Properties: If a field has a number in parentheses next to it in the Available Fields list, such as Customer (6), that means the dimension has member properties associated with it. When you open the About dialog box, you will also see a list of the member properties in addition to the other details about the field.
If you open the field layout, you can see your dimensions in either the Row Labels or Col Headers fields, depending on how you have them oriented. To constrain a dimension by controlling its member properties, right-click on a dimension in the row label or column header fields, then select Show Properties from the context menu. A sub-menu with all available member properties appears. Check or clear the member property boxes to add or remove them from the report.
Viewing and editing field properties
You can view the properties of a level or measure field from the Available Fields list in Analyzer. The properties include those attributes which defined the field when the data model was built.
View and edit field properties
In the Available Fields list, right-click the field name you want to view or edit.
From the menu that displays, click Properties. The Properties dialog box displays for the field.
View or edit the following information about the field:
- Display Name: The name of the field as it appears in the Available Fields list. If you renamed this field in the Layout panel or the report pane, a notification with the new name will display below. If you are assigned the Manage Data Sources permission, you can edit the name of this field. The edited name will display in the Available Fields list, as well as in the Layout panel and the Report pane, unless you have renamed the field. Renaming a field within the Layout panel or the report pane will not affect the display name of the field in the Available Fields list.
- Aggregation (for measures only): The aggregation type is how the measure combines the data. Use the drop-down list to select an aggregation type from a system-defined list.
- Format: Choose how this level or measure should be formatted, such as currency, general number, percentage, or date. Use the drop-down arrow to select a format from a system-defined list, or type in the field to enter a custom format. Note that the Format field only displays when the value for the field is a number or a date. See Format Field Options for more information on selecting the appropriate format for your report.
- Description: The description of the field, if any. This field is always read-only.
- Type: The type of field, such as level, time, or measure. This field is always read-only.
- MDX: The formula for the level or field as an MDX statement. This field is always read-only.
- Member Properties: If a field has a number in parentheses next to it in the Available Fields list, such as Customer (6), then the dimension has member properties associated with it. When you open the Properties dialog box, you will also see a list of the member properties in addition to other details about the field. This field is always read-only.
View a level with member properties in a report
Locate a dimension in the Available Fields list which includes a number in parentheses, such as Customer (6) and Product (3).
Locate the corresponding dimension on your report. Right-click the row or column header for that dimension, then click Show Properties. A menu displays the member properties you can choose to appear in the report.
Select or clear the member property you want in the report, then click OK.
Editing measure properties
When you update the properties on measures in Analyzer, including calculated measures, you are making a change to the data source which will affect all users who are creating reports based on that data source. Such changes require users to be assigned the Manage Data Sources operation permission in Users and Roles. For more detailed information on viewing and editing properties for both base measures and calculated measure, see Updating Measure Properties.
Rename a field
Right-click the field you want to rename in the report.
Select Edit or Column Name and Format from the menu to open the Edit dialog box for that field.
Enter the new name in the Name field. Note that you can also view the original name of the field in this dialog box.
(Optional) Enter the plural version of the new name (if applicable) in the Plural Name used within this report field. Plural versions of a field name are useful because the Pentaho interface often uses field names in menus and dialog boxes. If you enter a plural version of the new field name, it automatically will be used in situations where the plural form is grammatically correct.
Click the OK button to save the new field name.
Working with the Available Fields list
You can work with the Available Fields list in several ways. You can organize the list with sorting options, find a field using the Find box, and add fields to the list.
View the List of Available fields
Click the View button at the top of the pane.
Select one of the following sort options for organizing the list of fields:
- By Category (default): This grouping is set by an administrator.
- Measures first: Lets you see the list where all measure fields come first, followed by level fields.
- A to Z: Alphabetical order with no grouping.
- Schema: This displays the grouping as defined by the administrator in the cube’s underlying schema.
Add a field to a report
You can add fields from the Available Fields list.
From the Available Fields list, you can add fields to a report using the following methods.
- Select a field, and drag it into the Report pane. A visual indicator (black line) lets you place the field where you want it.
- Select a field and drag it to a drop area in the Layout panel. Note the visual indicator when you drag a field over a valid drop area.
- Right-click a field and select Add to Report.
- Double-click a field.
Move fields in a report
- In a table report, the easiest method is to simply drag the field to a new location.
- In chart mode, do the following:
Open the Layout panel.
Select and drag fields within and between the three different drop areas. Note: You can only move a field within zones of the same type: blue for measures and yellow for levels/time periods.
Remove Fields in a Report
Select the name of the field you want to remove (a trash can appears) and drag it to the lower-right corner of the report or into the Available Fields list.
Alternatively, right-click the name of the field you want to remove, and then select Remove from Report from the menu.
Hide and unhide fields
You can select to hide or show fields in the list of Available Fields for a report. Hiding fields is helpful when you want a clear view of only those fields you are interested in for your report.
Format field options
The Format field in the Properties dialog box lets you select numeric and calendar-date formats. Below is a list of supported numeric and date formats you can select for a field or measure.
For more detailed information about numeric and date format strings, view this article about MDX and format definitions.
|Format string||Example|
|$ #,##0||$ 12,345|
|$ #,##0.00||$ 12,345.09|
|$ -#,##0.00||$ -12,345.09|
|$ (#,##0.00)||$ (12,345.09)|
|$ #,##0.00;(#,##0.00)||$ 12,345.09|
|0 %||1234509 %|
|0.00 %||1234509.00 %|
|MMMMM d, yyyy||April 1, 2016|
|M/d/yy h:mm AM/PM||4/1/2016 8:09 PM|
|M/d/yy h:mm||4/1/2016 20:09|
|h:mm AM/PM||8:09 PM|
|h:mm:ss AM/PM||8:09:06 PM|
Managing fields in large reports
You can add fields that have an arbitrary number of values, but large reports will be truncated. Truncated table reports differ from full reports in the following ways:
- The Report Status Bar displays the number of rows/columns shown versus the number of rows/columns in the full report. Cells will be cut until the number of cells is less than or equal to 2000. Note that this limit can be increased by your administrator. Rows are cut first, down to a minimum of 10 rows, followed by columns. This technique ensures that you still generate a useful sample of the row values despite the truncation.
- Subtotals and Grand Totals do not display in truncated reports
- A message at the end of the report informs you of the truncation. Note that the data in the cells does not change because of the truncation.
For charts, there is a maximum value of plot points which can be displayed on any axis. This limit is different depending on the type of chart and based on the amount of data which can reasonably fit on a screen. You can change this limit in Chart Options.
Troubleshooting: Your report does not display data
In some situations, your report might not display any data. The table below outlines the most likely scenarios and their solutions.
|What you did||What happened||Likely Reason||Example||Solution|
|You added or modified a filter||The report returned blank.||The filter(s) you added are too restrictive.||Your filter only includes the year '1997' but you have sales revenue only for '2005'.||Change your filters or change the report options to show rows or columns where the number cell is blank.|
|You added a new number field.||The report returned blank.||There are no values for the number field(s) that in the report.||You added the "Quota" field but you have not yet loaded any Quota data into Pentaho.||Contact your administrator to: 1) get data loaded into this field OR 2) hide this field.|
|You added a new text field. You have no number fields on the report.||The report returned blank.||You have two or more text fields on the report but in some cases Pentaho Analyzer needs a number field to tie it all together.||You have Account Name and Order Status on the report||Add a number field.| | <urn:uuid:12c98f71-dc4f-4e87-9d3c-c852616b9be1> | CC-MAIN-2022-40 | https://help.hitachivantara.com/Documentation/Pentaho/9.1/Products/Working_with_Analyzer_fields | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00306.warc.gz | en | 0.862193 | 3,739 | 2.640625 | 3 |
SQL injection flaws are critical: they allow a remote attacker to gain access to the underlying database. In the worst-case scenario, the attacker can read, write and delete content in the database.
Risk of SQL Injection
The attacker can gain access to all data stored on the system, making it possible to read, create and delete data. Popular attacks include stealing passwords and changing website content. Under some circumstances, remote command execution might also be possible.
In 2009, Heartland Payment Systems was compromised by an SQL injection attack, resulting in a leak of 134 million credit card numbers.
SQL Injection example
This is a sanitization issue. The most common flaw is a lack of sanitization of user input used to build an ad-hoc SQL query. If the input is not properly sanitized, an attacker can inject valid SQL syntax into the original query, modifying its intended purpose.
A sample of a vulnerable “login” for PHP/MySQL would look something like this:
$db = new mysqli('localhost', 'root', 'passwd', 'base');
$result = $db->query('SELECT * FROM users WHERE user="'.$_GET['user'].'" AND pass="'.$_GET['password'].'"');
Suppose an attacker submits ” OR 1 — as username and whatever as password. The variables would then contain these values:
$_GET['user'] = " OR 1 --
$_GET['password'] = whatever
The resulting query would become:
SELECT * FROM users WHERE user="" OR 1 -- AND pass="whatever"
Everything after — (which indicates the start of a comment in SQL) will be discarded and ignored. The query to be executed would then look like this:
SELECT * FROM users WHERE user="" OR 1
The query now states “Grab everything (SELECT *) from the user list (FROM users) where the username matches nothing (WHERE user="") or 1 (which will be interpreted as True (OR 1))”. Since OR 1 always evaluates to True, the right-hand side makes the whole condition true regardless of the left-hand side. The result of that query would be the same as this one:
SELECT * FROM users
Which would return all data there is about all the users. E.g, the injection in the$_GET[‘user’] parameter is enough to make the MySQL server to select the first user and grant the attacker access to that user.
Prepared statements protect against (almost) all SQL injection vulnerabilities. They take the form of a template in which placeholders are substituted at execution time with variables containing user input. This way, you can enforce the type of each substituted value, and bad characters that might break an SQL statement are escaped automatically. The SQL query therefore remains properly sanitized, as no user input can alter the structure of the query.
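Here is a minimal sketch of how the vulnerable login above could be fixed with mysqli prepared statements (the connection credentials are the same illustrative ones used earlier):

$db = new mysqli('localhost', 'root', 'passwd', 'base');

// The ? placeholders are bound to user input; the input can never
// break out of its parameter slot and alter the SQL structure.
$stmt = $db->prepare('SELECT * FROM users WHERE user = ? AND pass = ?');
$stmt->bind_param('ss', $_GET['user'], $_GET['password']);
$stmt->execute();
$result = $stmt->get_result();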
Functions like mysqli_real_escape_string() in PHP can also protect against them, but be careful to read the documentation when using such functions. For example, in PHP, addslashes() may seem like a good alternative but provides poor SQL injection protection due to malicious charset tricks.
How Detectify can help
Detectify is an automated web security scanner that checks your website for hundreds of security issues including SQL injection vulnerabilities. Sign up for a 14-day free trial and find out if your site is vulnerable » | <urn:uuid:e48dee65-e393-4e5f-b14a-ee09c416821d> | CC-MAIN-2022-40 | https://blog.detectify.com/2016/03/08/what-is-a-sql-injection-and-how-do-you-fix-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00306.warc.gz | en | 0.882384 | 734 | 3.28125 | 3 |
Digital divide stands for the gap between those organisations that are benefiting from the digital age and those who lag behind. Organisations without the benefit of up-to-date digital telecommunication technologies are at an economic disadvantage, as they are less able to respond to market trends.
Mobile business communication in the Covid-19 era
It would be fair to say that most companies were unprepared for Covid-19 and the associated lockdowns. The pandemic necessitated a shift to remote working, where possible, and changing from in-person collaborations to digital experiences.
That said, the pandemic highlighted an event that was already in motion; a shift from an industrial economy to one based around information. The disruption to markets around the world, as well as to ways of working, emphasised the necessity of the digital agenda.
Those organisations that had already embraced digital telecommunications networks were able to adapt far more easily to remote working than those that had not.
A digital divide in modern enterprise
According to a recent survey by Harvey Nash and KPMG, organisations that are digital leaders are twice as likely to be effective at scaling innovation, compared with their peers. They are also three times better at providing a positive customer experience.
Organisations that fail to embrace digital services find that their responsiveness to changes in the market environment incrementally lags, a consequence of their reliance on legacy telecommunication services.

Likewise, as digital systems evolve and update, interoperability and connectivity with legacy systems decrease, leading to an ever-decreasing rate of information flow within an organisation.
It can be tempting to rely on existing technologies: there is an inherent danger of thinking “If it isn’t broken, why fix it?”
However, such a philosophy can lead to significant interruptions, such as when connectivity fails due to a system update.
Evolving their digital telecommunication infrastructure allows organisations to maintain interoperability and mitigate the effects of service interruptions with controlled updates.
Reducing the digital divide
Undergoing such a digital transformation ensures organisations thrive in modern enterprise. However, as no two organisations are ever the same, their associated communications architecture will be unique.
Bridging the digital divide in business communications can only be done with a thorough understanding of an organisation’s existing telecommunication infrastructure. This should consider how an organisation communicates, both internally and with external partners, as well as their projected growth and development.
Just as an organisation grows and evolves, so do its telecommunication requirements. What was once appropriate, at the time the telecommunication infrastructure was installed, can soon become obsolete if it is unable to meet the required growth as an organisation expands.
Enhancing mobile business technologies can be a significant expense. However, such investments allow for greater flexibility and adaptability, as well as swifter responses to a shifting market.
Evolve or devolve
Successfully executing a digital transformation in business communications has become a matter of either prospering or struggling to survive as an organisation. The process can be disruptive in the short term, but the benefits to a company’s business telecommunications will repay the investment.
As COVID-19 has spread throughout the world, it has driven more and more of our lives online. We’re relying on online shopping to buy goods and services, digital collaboration tools for work and learning – and increasingly, on social media platforms for information about the pandemic. A recent study saw a 25% increase in the volume of posts on Twitter since the spread of COVID-19 began. Unfortunately, some of that increasing volume includes misinformation about COVID-19 – enough that Twitter outlined the criteria it uses to assess and remove misleading pandemic information on its platform.
Misinformation online isn’t an isolated phenomenon related to COVID-19. A well-known 2018 MIT study found that misinformation spreads rapidly online no matter the topic, reaching more people than the truth and spreading faster. Unintentional sharing of incorrect information, sensationalism, rumor, and urban legends proliferate; we see spamming and trolling attempts; we even see complex and deliberate attempts to misinform, like deepfakes.
Working with Indraprastha Institute of Information Technology Delhi, we’ve developed a robust AI-driven approach to this problem. Our solution: a semi-supervised end-to-end attention neural network that detects Twitter posts with misinformation about COVID-19. In early pilots on a dataset of more than 21 million COVID-19-related tweets, it identifies posts with misinformation with 95% accuracy, significantly outperforming comparable algorithms.
This kind of semi-automated detection is key in light of the growing misinformation challenge. When it comes to COVID-19, the WHO director general stated recently that “We’re not just fighting an epidemic; we’re fighting an infodemic.” The spread, speed and complexity of misinformation in social media is overwhelming the human capacity to manually fact-check and regulate it, and companies are increasingly deploying artificial intelligence to assist human fact-checkers. Still, most current AI models need humans to manually label or categorize large amounts of data before the systems can work. Even then, they struggle to identify misinformation that differs from what was found in the training data.
With the evolving avalanche of misinformation about COVID-19, these are significant challenges. The types of related misinformation range from incorrect health advice (for example, “eating garlic cures the virus,”) to false information about its origin and spread (for example, “5G networks are related to the spread of the virus,”) and false information about its severity (“Coronavirus is just like a normal cold, or a mild flu”). It’s hard to say which of these causes the most harm, but all are potentially dangerous to people’s health and safety.
Unlike other attempts at AI detection of misinformation, our solution considers multiple pieces of context to determine if information is genuine. It doesn’t just look at the content of a tweet, but also information about the user who posted it, for example – and finds the right balance with which to weigh those inputs.
It’s semi-supervised in that it can leverage both labeled and unlabeled data; it learns the semantics and meaning from unlabeled data.
Why end-to-end? Because it also keeps up with changing information and emerging misinformation trends by leveraging external knowledge (from both reliable and unreliable sources). And finally, it’s explainable – it can tell you why it thinks a particular post contains misinformation.
Our approach uses linguistic analysis of the message content itself, such as the terms in the post, incongruity and sarcasm, the sentiment expressed, and so on. But it can also look at the background of the user, the social network context, number of reposts, etc., to find the tweet’s virality – false and sensational viral posts tend to spread faster and wider. It also incorporates automated checking of the topic and claims against fact-checking sites such as Snopes in real-time. Being able to identify individual claims within a larger piece of content helps catch misinformation embedded in otherwise innocuous material – something that other approaches often miss. None of these approaches may be effective while applied individually, but they can be quite powerful when applied together.
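As a purely illustrative sketch (not Accenture’s actual model), combining these signals might look something like the Python below; the feature names, weights and threshold are invented for illustration, and in the real system an attention network learns the weighting rather than using hand-set values.

from dataclasses import dataclass

@dataclass
class TweetSignals:
    content_score: float   # linguistic analysis of the text itself
    user_score: float      # credibility signals about the poster
    virality_score: float  # reposts / spread relative to peers
    claim_check: float     # disagreement with fact-checking sources

def misinformation_score(s):
    # Weighted combination of signals; weights are illustrative only.
    weights = (0.4, 0.2, 0.2, 0.2)
    signals = (s.content_score, s.user_score, s.virality_score, s.claim_check)
    return sum(w * x for w, x in zip(weights, signals))

tweet = TweetSignals(0.9, 0.7, 0.8, 0.95)
if misinformation_score(tweet) > 0.8:   # illustrative threshold
    print("Flag for human review")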
Just how powerful? To test our approach, we began by developing a dataset of publicly available tweets. The dataset is a mix of both labeled and unlabeled posts: more than 45,000 labeled tweets, about 60% of which contained misinformation, and more than 21 million additional unlabeled COVID-related tweets. We compared the accuracy of the model on this dataset with seven state-of-the-art models for detection of misinformation, and it outperformed them all by at least 9%. We’re in the process of doing additional testing on other published datasets.
This is an early effort, but with promising results, especially since our AI can quickly respond to emerging events that generate more misinformation. It could be easily incorporated into workstreams to assist human moderators, identifying possible misinformation with supporting information. This would not only make it easier for moderators to find and remove misinformation, but potentially respond to those who inadvertently shared the content with links to reliable sources. Over time, this could reduce the amount of misinformation that’s inadvertently shared in the first place.
We are working to make the system scalable and capable of tackling a wide range of topics, pandemic related or otherwise. Stay tuned to learn more about our efforts!
The authors would like to acknowledge Professor Tanmoy Chakraborty of Indraprastha Institute of Information Technology Delhi for his collaboration on this research. For more information about our work in this space, contact Shubhashis Sengupta. | <urn:uuid:0321f69e-7d33-4f51-803f-78f5e45edc91> | CC-MAIN-2022-40 | https://www.accenture.com/us-en/blogs/technology-innovation/sengupta-fano-tackling-medical-misinformation-in-social-media-with-ai | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00506.warc.gz | en | 0.918086 | 1,180 | 3.171875 | 3 |