In my latest training video, I cover the theory of Open Shortest Path First (OSPF) routing protocol.
OSPF is the most popular interior gateway protocol used today. In this video, you’ll understand:
Enjoy the training!
In the previous part of our OSPF series, we examined options for manually filtering routes. As we wrap up our look at advanced OSPF topics, we'll discuss default routes, and compare OSPFv2 with OSPFv3.
We have seen how OSPF can automatically generate a default route when needed. This occurs with some of our special area types. For example, if you configure a totally stubby area, a default route is of course required, and OSPF generates this route automatically from the ABR.
In order to increase flexibility with your designs, default routes injected into a normal area can be originated by any OSPF router. To generate a default route, you use the default-information originate command.
This command presents two options:
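As a hedged sketch of what this looks like in Cisco IOS (the OSPF process ID of 1 is an assumption, not something the post specifies), the command is entered under the OSPF process and can be used with or without the `always` keyword:

```
router ospf 1
 ! Originate a default route only if this router itself has one
 default-information originate
 ! Or: originate a default route unconditionally
 default-information originate always
```

Without `always`, the router withdraws the advertised default if its own default route disappears; with `always`, the default is advertised regardless.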
In our previous blog post, we examined how OSPF can automatically filter routes through the use of special areas and LSA Types. But what about your options for manually filtering routes in OSPF? In this post, we will examine techniques that you can use at various points in the topology.
One simple and effective method of filtering at the ASBR is the use of a distribute list. Here, we define the rules for route identification with an access list, and then reference this access list in the distribute list.
Figure 1 - OSPF Topology
In this example, our Area 1 is configured as a normal, non-backbone area. You can clearly see this when you examine the routing table on ORL.
Note the two prefixes (E2) of 192.168.10.0 and 192.168.20.0. Let’s filter 192.168.10.0 at the ASBR of ATL.
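The configuration applied on ATL is not shown in this excerpt; a minimal sketch, assuming Cisco IOS, an OSPF process ID of 1, and a /24 prefix length (all assumptions), might look like this:

```
! Identify the prefix to filter; permit everything else
access-list 10 deny 192.168.10.0 0.0.0.255
access-list 10 permit any

router ospf 1
 ! Filter matching routes as they are redistributed at the ASBR
 distribute-list 10 out
```

The `0.0.0.255` wildcard mask assumes a /24, which the post does not state; adjust it to the actual prefix length in your topology.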
Note how simple this configuration is. Let’s see if it worked by examining the route table of ORL once again:
The configuration worked perfectly and...
Last time, we began our look at advanced OSPF topics with the configuration of backbone and non-backbone areas. In this blog post, we'll look at the creation of more specific area types.
Figure 1 - OSPF Topology
It is time to make our Area 1 from Figure 1 a stubby area. This is a simple configuration change. On each device in the area, we need to set the Area 1 as stub. Here is the configuration in our network:
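The referenced configuration is omitted from this excerpt; a sketch, again assuming Cisco IOS and an OSPF process ID of 1, would be the same two lines on every router in the area:

```
router ospf 1
 area 1 stub
```

Because the stub flag is carried in OSPF hello packets, the area type must match on every router in Area 1, or adjacencies will not form.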
This will cause a reset of the adjacency (as you might guess). After this change, it is time to check the OSPF route table and the OSPF database on ORL:
Just as we would hope, the routing table is smaller now! It no longer contains the detail of the external prefixes from the ASBR. Instead, we have a default route automatically generated by the ABR. This default route is needed, of course, because the routers in Area 1 still need to be able to reach the remote prefixes.
Now it is time to examine the OSPF database. It is exactly what we would expect to...
The time has arrived to tackle some of the more advanced (and interesting) features of the Open Shortest Path First routing protocol. We begin by examining the configuration and verification of the different OSPF areas. This is an exercise that is not only fun, but one that can really cement your knowledge of how these areas function and why they exist.
Areas are a fundamental concept of OSPF. They are what make the routing protocol hierarchical, as we like to say.
There is a core backbone area (Area 0) that connects to normal, non-backbone areas. The backbone might also connect to special area types we will examine in detail in this chapter. This hierarchical nature of the design helps ensure the protocol is very scalable. We can easily reduce or eliminate unnecessary routing traffic flows and communications between areas if needed. Database sizes are also contained using this approach.
The Backbone and the Non-Backbone Areas
To review a bit from our previous blog...
Before we move on to more advanced topics, we'll wrap up this OSPF Basics series in Part 3. Here we'll examine LSA types, area types, and virtual links.
Link State Advertisements (LSA) are the lifeblood of an OSPF network. The flooding of these updates (and the requests for this information) allow the OSPF network to create a map of the network. This occurs with a little help from Dijkstra’s Shortest Path First Algorithm.
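To make that concrete, here is a minimal sketch of the shortest path first computation each router performs over its view of the network (the four-router topology and link costs are invented for illustration; real OSPF operates on a link state database built from LSAs, not a Python dict):

```python
import heapq

def shortest_path_costs(graph, source):
    """Dijkstra's SPF: lowest total cost from source to every reachable node.

    graph maps each node to a dict of {neighbor: link_cost}.
    """
    costs = {source: 0}
    visited = set()
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return costs

# Invented four-router topology with OSPF-style link costs
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 100},
    "R4": {"R2": 1, "R3": 100},
}
print(shortest_path_costs(topology, "R1"))
```

Note how R1 reaches R4 at cost 11 via R2, not at cost 101 via the cheap-looking first hop to R3: the algorithm always minimizes total path cost.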
Not all OSPF LSAs are created equal. Here is a look at each:
The Router (Type 1) LSA - We begin with what many call the “fundamental” or “building block” Link State Advertisement. The Type 1 LSA (also known as the Router LSA) is flooded within an area. It describes the interfaces of the local router that are participating in OSPF and the neighbors the local OSPF speaker has established.
The Network (Type 2) LSA - Remember how OSPF functions on an Ethernet (broadcast) segment: it elects a Designated Router...
In the previous blog post, we looked at a few fundamental OSPF concepts, including neighbor and adjacency formation. As we continue through the basics of OSPF, this post will examine router roles, timers, and metric calculation.
A designated router (DR) is the router interface that wins an election among all routers on a multiaccess network segment, such as Ethernet. A backup designated router (BDR) is the router that takes over as the designated router if the current DR fails on the network. The BDR is the OSPF router with the second highest priority at the time of the last election. OSPF uses the DR and BDR concept to improve the efficiency of its operations.
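A simplified sketch of that election logic: highest priority wins, ties are broken by highest router ID, and priority 0 makes a router ineligible. (The addresses are invented, and real OSPF elections are sticky, meaning an existing DR is not preempted by a newly arrived higher-priority router; this sketch ignores that.)

```python
def elect_dr_bdr(routers):
    """Pick a DR and BDR from (priority, router_id) pairs on one segment."""
    eligible = [r for r in routers if r[0] > 0]  # priority 0 can never be DR/BDR
    # Highest priority wins; ties broken by highest router ID. String
    # comparison is lexicographic, which matches numeric order for these IDs.
    ranked = sorted(eligible, reverse=True)
    dr = ranked[0][1] if ranked else None
    bdr = ranked[1][1] if len(ranked) > 1 else None
    return dr, bdr

routers = [(1, "10.0.0.3"), (255, "10.0.0.1"), (0, "10.0.0.9"), (255, "10.0.0.2")]
print(elect_dr_bdr(routers))  # ('10.0.0.2', '10.0.0.1')
```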
Keep in mind that a given OSPF speaker in your network can have some interfaces that are designated and others that are backup designated, and others that are non-designated. If no router is a DR or a BDR on a given...
The Open Shortest Path First (OSPF) dynamic routing protocol is one of the most beloved inventions in all of networking, widely adopted as the Interior Gateway Protocol (IGP) of choice for many networks. In this blog series, you’ll be introduced first to the basic concepts of OSPF and learn about its various message types and neighbor formation.
Where does the interesting name come from when it comes to OSPF? It is from the fact that it uses Dijkstra's algorithm, also known as the shortest path first (SPF) algorithm. OSPF was developed so that the shortest path through a network was calculated based on the cost of the route. This cost value is derived from bandwidth values in the path. Therefore, OSPF undertakes route cost calculation on the basis of link-cost parameters, which you can control by manipulating the cost calculation formula.
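As an illustration of the cost calculation: with Cisco's classic default reference bandwidth of 100 Mbps, cost is the reference bandwidth divided by the interface bandwidth, floored at 1. This is a sketch of that formula; the reference bandwidth is exactly the knob the text says you can manipulate, and networks with 10G/100G links usually raise it so fast links get distinct costs.

```python
def ospf_cost(interface_bw_mbps, reference_bw_mbps=100):
    """OSPF cost = reference bandwidth / interface bandwidth, floored at 1."""
    return max(1, reference_bw_mbps // interface_bw_mbps)

print(ospf_cost(10))    # 10 Mbps Ethernet -> cost 10
print(ospf_cost(100))   # Fast Ethernet   -> cost 1
print(ospf_cost(1000))  # Gigabit floors at the minimum cost of 1
```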
As a link state routing protocol, OSPF maintains a link state database. This is a form of a network...
On Monday June 10, 2019 Cisco announced an unprecedented revamp of their certification program. This post dives into one of the major updates, the new CCNA certification. (We'll have a future blog post with updates on the CCNP changes.)
First, if you’re currently preparing for your CCNA R/S (or any other CCNA for that matter), don’t panic. You have until February 24, 2020 to complete your certification, at which time you’ll be given the new CCNA certification, plus a “badge” indicating your area of specialization (based on which CCNA you earned). So, Cisco recommends you “keep going” if you’re working towards any CCNA certification.
Even if you’re just thinking about going after a CCNA cert, personally, I would do it now before the February deadline hits.
However, just having a current CCENT certification won't help. You'll need a full CCNA to be granted the new CCNA certification. So, if you do just have your CCENT,...
This post is the 6th and final in a series of Border Gateway Protocol (BGP) posts. If you missed any of the first five parts, here are the links:
In this post, we're going to take a look at how we can work with BGP in IPv6.
You will recall from this chapter that BGP was constructed, right from its creation, to support many different protocols and NLRI. As a result, we have robust support for technologies such as IPv6, MPLS VPNs, and more.
You will also relish the fact that once you master the basics of BGP that we have covered in this series, working with BGP in IPv6 is much more similar than it is different!
BGP is so remarkably flexible, as discussed earlier in this chapter, that you can use IPv4 as the “carrier” protocol for IPv6...
How to Ensure Public Health Standards are Adhered to in Large Venues
Preparing for Reopening of Large-Scale Meetings and Events
In the very earliest days of the COVID-19 pandemic in the United States, before anyone was aware that the highly contagious and deadly virus had arrived in the US, a biotech company called Biogen held an ordinary company business meeting at Boston’s Marriott Long Wharf Hotel. Virtually no one could have anticipated that the meeting was a “super spreader” event; 100 attendees went home with the coronavirus, and a study of the genetic signatures of the virus has since linked those cases to approximately 300,000 COVID cases. (That number represents 1.6% of cases in the United States, as of mid-December, 2020; the number will definitely grow, as case numbers expand exponentially). And that Biogen meeting was a tiny event, compared to normal events such as tradeshows and conferences.
While today we all know better and corporate meetings are on hold, by early spring of 2021 the United States and other countries around the world might see a return to some normal business, thanks to the new COVID-19 vaccines. However, public health experts are cautioning that people will still have to wear masks and practice social distancing, at least for several more months.
If that is the case, organizations that host large numbers of visitors and guests should deploy systems that enable them to protect their employees and guests, and comply with those face mask, social distancing, and building occupancy mandates. The organizations who will most likely continue to be affected by these public health mandates are venues that host large gatherings for social or professional purposes: such as, hotels, conference centers, casinos, and college campuses. To ensure that they are in compliance with workplace and public health mandates, these and other similar types of organizations will need a combination of staff practices and technology solutions.
One technology that will be essential in these efforts is video surveillance, but that alone won’t be sufficient. Pairing traditional video surveillance (CCTV) technology with video analytics software empowers these organizations to obtain real-time information as well as trend data regarding public and workplace health safety behaviors, so they can remain safely operational.
How does Video Content Analysis Work?
Video intelligence software powered by artificial intelligence and deep learning is able to extract, classify, recognize, and index objects and behaviors captured by video cameras. Thus, it can detect whether someone is wearing a mask or standing at least six feet from another person. It can also detect crowd formation and count the number of persons who have entered or exited a defined space, across multiple cameras in a facility. Furthermore, the technology can be used to conduct COVID contact tracing among employees by combining appearance similarity and proximity identification. And, regardless of a pandemic, video content analytics helps organizations improve security, daily operations, marketing, and planning. Keep reading for an explanation of how the technology works for each of these use cases.
Prevent Traffic Bottlenecks
Both customer service and security teams have always had a motivation to reduce crowding in event venues. Now, because social distancing is so important to reduce the spread of the coronavirus, they have another big reason to prevent crowds. In a large conference center, it is challenging for staff to be aware of crowding situations, everywhere, at all times. This is just one example of where video analytics systems are so helpful, because operators can set up real-time alerts to notify management of situations as they are evolving, such as pedestrian or vehicle traffic bottlenecks or growing queues. This enables customer service managers to respond with agility to developing situations that might impact guest safety and / or experience.
A video analytics system is able to send real-time alerts because it collects and aggregates long-term video data, which enables operators to determine benchmarks, as well as derive operational intelligence reports. After the system has been used to analyze activity over time, operators can establish normal benchmarks and create custom real-time alerts that notify operators when a normal threshold has been exceeded. For example, operators can set a real-time people-count alert that notifies them of potential crowd situations, where the number of people in a pre-defined camera view area exceeds the pre-set threshold. Upon receiving an alert, managers can then assess the situation and determine how best to break up crowds and keep traffic moving.
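The alerting logic described above can be sketched in a few lines. The timestamps, counts, and threshold here are hypothetical; a real system would pull per-frame counts from the analytics engine rather than a list, and the threshold would come from the benchmarks established over time.

```python
def crossing_alerts(samples, threshold):
    """Alert only when the people count crosses above the threshold,
    rather than on every frame, to avoid flooding operators."""
    alerts = []
    above = False
    for timestamp, count in samples:
        if count > threshold and not above:
            alerts.append(timestamp)
        above = count > threshold
    return alerts

# Hypothetical per-minute people counts from one camera view
samples = [("10:00", 12), ("10:01", 27), ("10:02", 31), ("10:03", 18), ("10:04", 26)]
print(crossing_alerts(samples, threshold=25))  # ['10:01', '10:04']
```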
Comply with Occupancy Codes
Occupancy rules existed long before the coronavirus came along, to prevent excessive crowding, especially in the event of a fire or similar emergency. Security teams can leverage the software to be notified by people-count alerts when an area becomes too crowded. In the COVID era, occupancy limits are especially important to prevent crowding and facilitate social distancing. Restaurants, retail operations, and other entertainment spaces often have occupancy limitations, and with video intelligence software, operators can set rules and leverage occupancy control tools to count and alert on occupancy violations. Data can also be aggregated to understand occupancy details over time and location, and to make intelligent decisions for controlling occupancy more effectively.
Optimize Cleaning and Maintenance Operations
Even in the absence of a pandemic, guests are sensitive to facility cleanliness, so for the sake of customer satisfaction and workplace safety, property managers can leverage occupancy data to ensure a clean, sanitary environment. Rather than using traditional schedules for routine maintenance of restrooms, hallways or other spaces, managers can use video analytics alerts to trigger custom maintenance alerts that are based on actual facilities usage.
Monitor Social Distancing & Face Mask Compliance
In some venues, it may be difficult for operations and security staff to assess whether visitors are keeping a safe social distance. Some video analytics systems, however, have proximity identification, which can detect the distance between people. Operators can set six feet as a normal threshold. Although venue managers are unlikely to enforce violations in real-time, it is useful for managers to have quantifiable long-term data about where and how often their customers are violating the social distance mandate. This allows organizations to take action on individual violations (for example, if an employee is not complying with the rule), but it also allows organizations to gather long-term data about whether guests and employees are following or violating the distancing mandate; such data also provides evidence for compliance and health and safety audits. Similarly, by leveraging long-term data reports, managers can make better decisions about where and how to encourage mask-wearing. A video analytics system can be used to aggregate and report on face mask wearing statistics, forensically search for people who are wearing face masks, and send alerts if someone is not wearing a mask, and respond quickly to avert a potential problem.
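A sketch of the proximity check itself: flag any pair of people closer than the six-foot threshold. The person IDs and ground-plane coordinates are invented; a real system would derive positions from calibrated camera views.

```python
from itertools import combinations
from math import dist  # Python 3.8+

def proximity_violations(positions, min_distance_ft=6.0):
    """Return pairs of person IDs standing closer than min_distance_ft.

    positions maps a person ID to an (x, y) ground-plane coordinate in feet.
    """
    return [
        (a, b)
        for (a, pa), (b, pb) in combinations(positions.items(), 2)
        if dist(pa, pb) < min_distance_ft
    ]

people = {"p1": (0.0, 0.0), "p2": (3.0, 4.0), "p3": (20.0, 0.0)}
print(proximity_violations(people))  # [('p1', 'p2')]
```

Aggregating these per-frame violations over days or weeks is what produces the long-term compliance data the text describes.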
Facilitate Contact Tracing
Finally, some video intelligence software may be used to conduct contact tracing among COVID-infected staff. A combination of proximity identification, appearance similarity and/or facial recognition can be used to conduct a filtered search of archived video footage across multiple cameras, to determine whether an infectious person had contact with other employees or guests, and for what duration. Managers can then advise exposed individuals to self-quarantine, while protecting the anonymity of the infected individual.
Beyond the Pandemic, Analytics Offers Many Benefits
There are many benefits of video content analysis that preceded the pandemic, and they will continue long after the pandemic. From security to marketing, property management, operations, and customer service — various departments in any organization can perform with greater efficiency and be more effective when they have quantifiable, actionable information gleaned from their video data. The data can be delivered in real-time, to improve situational awareness and response to evolving situations. For instance, security teams can receive illumination alerts when someone has turned on the lights in an office that is closed after hours. In addition, video data that is aggregated over time provides key business intelligence: Some examples include marketing managers leveraging video content analytics to gather on-site demographic data about their customers or footfall traffic data; and property managers reviewing occupancy reports to ensure compliance, as well as foot traffic heatmaps to optimize floorplans and / or how much to charge vendors for retail space. The list of possible uses is long and can be tailored to every organization’s unique business needs.
Even before the pandemic, organizations of all types have had an obligation to maintain safe environments for their employees and guests, but during the COVID-19 pandemic, they have an even greater — in some cases, legal — obligation to do so. There is no doubt that it is challenging to monitor compliance with health safety mandates, but it can be done, and one of the most effective ways is to pair existing video surveillance networks with video content analytics software.
What is a CCD camera?
A CCD camera is a video camera that contains a charge-coupled device (CCD), a transistorized light sensor on an integrated circuit. In plain English, a CCD converts incoming light into an electrical signal that can be read out as digital values. In cameras, this enables the device to take in visual information and convert it into an image or video. They are, in other words, digital cameras.
This allows cameras to be used in access control systems, because images no longer need to be captured on film to be visible. Security cameras built around CCDs can relay live visual information, which is hugely important when monitoring your facility. When combined with other security measures, these cameras become a highly effective way to protect your space. Coupled with motion sensors or video verification, for example, CCD cameras can capture the image of cardholders who are attempting to enter a protected space.
How does a CCD camera work?
In terms of the working principle of CCD cameras, these video cameras capture an image and transfer it to the camera’s memory system to record it as electronic data. The main strength of a CCD camera is the production of quality images with minimal distortion. Basically, the camera turns light into electricity. A CCD is made up of light-sensitive elements called pixels, which sit next to each other and together form an image. CCD sensors have been in production for a long time and tend to have high-quality pixels that produce a higher-quality, lower-noise image than other sensor types.
How much can a CCD camera cost?
Pricing of CCD cameras depends on the physical size of the CCD. Most consumer digital cameras have a CCD around ⅙ or ⅕ of an inch. Generally, one small CCD camera runs between $30 and $50, though the price fluctuates with the style of camera. For example, dome security cameras come out closer to $100. More expensive cameras have a CCD of ⅓ of an inch or larger. The bigger the sensor, the more light it can capture, which means the camera will produce better video in low-light settings. If using a CCD camera for security purposes, a slightly more expensive camera may be the better option, as footage will be of better quality, especially during the evening and night.
Some of these options include cameras like the Google Nest Camera, which comes out to around $200 for one camera, or around $400 or $650 for packs of 3 or 5 cameras, respectively. Professional digital video cameras typically use three sensors, an arrangement called 3CCD, with separate CCDs capturing red, green, and blue hues. These cameras can cost thousands of dollars each.
A student’s job is to continuously receive and assess new information. For science students, there is a greater emphasis on how you handle the information you receive for research. It’s important to know how to collect your data and understand the thought processes to master conducting research effectively. The lab notebook you use as a student is an effective tool to practice critical thinking and collecting data. Properly recording experiments and thought processes allows you to easily review information and build upon prior results. As a student, recording the learning process in your lab notebook also provides you with a study tool for final exams.
While modern technology provides us with an easier way to document information, handwritten data is important for establishing a record of work. Recording information in a written document is also useful for future endeavors like filing for a trademark, patent, or copyright actions. Whether you’re an aspiring undergraduate or a pre-med student, enhance your academic performance by exploring these three tips for maintaining a student research laboratory notebook.
Organize for Reading
It’s important to be in the habit of organizing your research lab notebook to ensure readability. A quality lab notebook is legible and easy to follow. The information documented inside your notebook must be readable both for peer reviews—a critical part of collaborative research—and for yourself when reviewing lab data.
Record Data Analysis
After establishing the fundamentals of the scientific method in your experimental entries, remember to document your assessment of the results regularly. Analyzing and making connections in the data you’ve gathered is key to maintaining a constructive research notebook. As a student, it’s also important to document your learning processes while improving your research skills.
Follow Ethical Standards
Following the ethical standards of a laboratory notebook is vital for a student conducting scientific research. A few fundamentals are documenting all data (good or bad), never removing pages, and documenting corrections without removing or erasing errors. Instructors usually provide guidelines for maintaining your lab notebook, but you should always practice these rules to make them stick.
As you progress in your academic career, it’s important to continuously practice the standards of maintaining data as if you were already working in a professional laboratory. When considering these tips for maintaining a student research laboratory notebook, remember the information you record isn’t limited to text entries, but can include sketches, graphs, and other visual forms of data. The research lab notebook you keep as a student contributes to your academic goals and polishes your skills for your future plans in professional research.
We live in a digital era where businesses use, analyze, and rely on data, their processes and databases. An important part of data is its integrity because it ensures data is unchanged, undivided, and in its complete, consistent form. More importantly, data integrity means the data is trustworthy. Organizations make data-driven decisions, and if the data has been altered or changed it can have a negative impact on the business. Data integrity can also come into play when it comes to meeting different data regulations and compliance standards that are requirements in certain industries today. Overall, it is important to understand what data integrity is, the different types of integrity and how it relates to data quality and security.
What is Data Integrity?
First and foremost, data integrity is the accuracy and consistency of data, maintained by a collection of processes, rules, or standards. Through these rules and standards, data retains its accuracy and completeness. Data integrity also requires that data be readable, correctly formatted, and original, with no duplicates.
Data is critical to business operations, decision-making, and strategy. With its importance, consistent and complete data is necessary not only to keep its integrity but also so businesses remain compliant with industry regulations. Secure, quality data is important to maintain its overall integrity, but there are factors that can affect the data and make it inconsistent.
Factors including human error, transfer error, bugs, and viruses or compromised hardware are all risks that can influence the integrity of data. To help eliminate vulnerabilities, here are some considerations:
- Limit access to data: access to data should only be allowed for business needs, place restrictions on unauthorized access
- Take the time to validate data: make sure data is correct when it’s being collected or used
- Backup data: making sure you have a copy of your data available, if data loss does occur and there is no backup, that data is irreplaceable
- Audit when data is added, changed, or deleted: keep track of changes and of who and what is being changed
- Use an error detection software: this can help detect abnormalities in data based on historical analysis
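The validation and error detection ideas above can be sketched with a simple integrity check: record a cryptographic hash of a file at backup or collection time, and any later change, accidental or malicious, produces a different hash. The file contents here are invented for the demo.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A small demo file standing in for business data
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"customer,balance\nacme,1000\n")
    path = f.name

baseline = sha256_of(path)            # recorded at backup time
with open(path, "ab") as f:
    f.write(b"intruder,9999\n")       # simulate tampering or corruption
tampered_hash = sha256_of(path)
print(tampered_hash == baseline)      # False: the change is detected
os.remove(path)
```

Note that a hash detects that data changed but not who changed it; the audit trail from the list above supplies that.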
These steps should be considered when trying to maintain the integrity of any data set. If data is not complete, it can’t have any true value to an organization. These considerations should be addressed to ensure data is rational on a logical or physical level.
Types of Data Integrity
Data integrity can be categorized into two types: physical integrity and logical integrity. Physical integrity concerns external factors that affect the data, such as power outages, data breaches, damage caused by human operators, or unexpected disasters. The physical integrity of the data can also be impacted by issues storing or retrieving the data: maintenance problems, aging storage, or design flaws can all come into play. One of the best ways to combat these issues is redundant hardware and power supplies.
The other type is logical integrity, which can be undermined by human errors or software bugs. If the logic of the data is flawed, the data no longer makes sense. For business-critical databases, keeping the data logical means maintaining the rationality of your database. Unlike physical integrity, there are several kinds of logical integrity to consider, especially when it comes to a database: entity, referential, domain, and user-defined integrity. Entity integrity means each record in a table is uniquely identifiable and singular. Referential integrity maintains consistency between tables. Domain integrity restricts the range of acceptable values that can be stored in a given column, and user-defined integrity is implemented through a set of triggers and stored procedures.
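Entity and referential integrity are easy to demonstrate with a small database. This sketch uses SQLite purely as a convenient example engine, with invented tables; note that SQLite leaves foreign key enforcement off by default.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in to FK checks

# Entity integrity: every customer is uniquely identified by its primary key
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
# Referential integrity: every order must point at an existing customer
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id)
)""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid: customer 1 exists

rejected = False
try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # customer 99 does not exist
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True: the orphan row never enters the table
```

The same constraint mechanism is what keeps tables mutually consistent in production databases, regardless of engine.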
Is data integrity different than data security and data quality?
Data integrity, data security, and data quality have distinct differences, but they are connected. Data security and data quality play an important role in accomplishing data integrity.
Data quality is whether the data is useful. It has a broader definition: for data to have quality, it needs to be complete, valid, unique, timely, accurate, and consistent. If the data does not meet one of these criteria, then it is incomplete and likely inaccurate.
Data security, on the other hand, is a set of standards that must be followed to ensure data is protected from unauthorized access or corruption. When it comes to data security, it’s important to consider the CIA principles: confidentiality, integrity, and availability. Data integrity is at risk if the data is not secure, as the data can be altered, changed, or corrupted by unauthorized parties. Data security therefore plays an important role in maintaining the integrity of data. As you can see, these terms are similar but distinct, and still connected. Let’s look at an example of each to get a better understanding.
Consider a database that stores names and phone numbers for a group of people. If one digit in a phone number is wrong, then calling that number will not reach the intended person; this is an example of poor data integrity. Now, say someone in the database changes their phone number but the record is never updated; calling them would also fail, this time because of poor data quality. Lastly, if access to the database were infiltrated and the data changed or corrupted, that would be an example of poor data security.
What about GDPR compliance as it relates to data integrity?
The General Data Protection Regulation (GDPR) is a legal framework that sets guidelines for the collection and processing of personal information from individuals who reside in the European Union (EU). Since this regulation applies regardless of where the website is based, it must be followed by all websites that attract European visitors. To maintain GDPR compliance you need to have the appropriate measures in place to protect personal data. One of the six principles of GDPR is integrity and confidentiality, this means the maintaining the integrity of data is an important factor for meeting this regulation. If data integrity is poor, this could be in violation of this regulation.
Keeping data complete and secure
In the age of digital transformation, data is critical to every business. This is an aspect of the current IT landscape that is far from new and is recognized across the industry. Businesses make data-driven decisions, build strategy on data trends, and forecast using collected data, so data must be reliable and trustworthy. Data integrity refers to the accuracy and consistency of data throughout its lifecycle. Integrity has a close relationship with security and quality: the integrity of data can be compromised if it is not secure, and data that lacks quality cannot be complete or accurate. To help preserve the integrity of data, you should validate data input, remove duplicate data, protect the data with backup software, and limit control of and access to that data. All these safeguards help maintain integrity and ensure the company’s data is valid and, most importantly, trustworthy.
This blog was originally written by Kirsten Stoner for Veeam Blogs. | <urn:uuid:bc0dcbf3-b5a9-4bca-96bb-6d1653b2b9bc> | CC-MAIN-2022-40 | https://staging.alphakor.com/blogs/veeam/data-integrity-is-essential/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00501.warc.gz | en | 0.937794 | 1,480 | 3.71875 | 4 |
Google has now laid out the rules for its $1 million “Little Box Challenge” competition, which was unveiled as a concept in May to encourage the development of smaller, more efficient solar power inverters that can store energy for later consumption.
The complete rules and requirements for the contest were announced by Maggie Johnson, Google’s director of education and university relations, in a July 22 post on the Google Research Blog. “Think shrink! Min it to win it! Smaller is baller! That’s what the Little Box Challenge is all about: developing a high power density inverter,” wrote Johnson. “It’s a competition presented by Google and the Institute of Electrical and Electronics Engineers Power Electronics Society (IEEE PELS)—not only a grand engineering challenge, but your chance to make a big impact on the future of renewables and electricity.”
The prize for the winning entry is $1 million, which organizers hope will inspire incredible innovations from academic researchers to miniaturize today’s bulky power inverters so that they can be used to harness solar power for when it is needed. The competition is only open to researchers and students at colleges and universities.
“Some recent advances may change what’s possible in power electronics,” wrote Johnson. For example, wide-bandgap (WBG) semiconductors, such as gallium nitride (GaN) and silicon carbide (SiC), enable higher power densities than conventional silicon-based devices, which “run into limits on the power density of inverters.”
At the same time, inverters may have the most potential for solving solar energy storage problems today, wrote Johnson. “And because inverters are so common in household applications, we hope The Little Box Challenge may lead to improvements not only in power density, but also in reliability, efficiency, safety, and cost.”
If ways can be found to shrink inverters and make them more reliable and inexpensive, “we could see all kinds of useful applications to the electric grid, consumer devices and beyond, maybe including some we have yet to imagine,” she wrote.
Proposals and entries for the competition are due Sept. 30, 2014, and Google’s Research at Google division will provide unrestricted grant funding to academics pursuing the prize, which can be used for research equipment and to support students, wrote Johnson. The deadline for grant funding requests is also Sept. 30.
Google also announced that it will be working with a group of the WBG manufacturers to ask them to provide information to the entrants so they can get the latest technologies to work on in their research, wrote Johnson. “We hope you’ll consider entering, and please tell your colleagues, professors, students and dreamers—you can print and post these posters on your campus to spread the word.”
The competition aims to find ways to shrink solar power inverters from the size of a picnic cooler down to the size of a small laptop, according to another Google blog post on the subject by Eric Raymond of the Google Green Team. “These days, if you’re an engineer, inventor or just a tinkerer with a garage, you don’t have to look far for a juicy opportunity: there are cash prize challenges dedicated to landing on the moon, building a self-driving car, cleaning the oceans, or inventing an extra-clever robot. Today, together with the IEEE, we’re adding one more: shrinking a big box into a little box.”
The shrunken inverter will be able to “convert the energy that comes from solar, electric vehicles and wind (DC power) into something you can use in your home (AC power),” wrote Raymond. “We want to shrink it down to the size of a small laptop, roughly 1/10th of its current size. Put a little more technically, we’re looking for someone to build a kW-scale inverter with a power density greater than 50W per cubic inch. Do it best and we’ll give you a million bucks.”
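The target is easy to put into numbers. Assuming a 2 kW inverter (the contest only says “kW-scale,” so the exact figure here is an example), a density of 50 W per cubic inch caps the volume at 40 cubic inches, roughly the size of a small laptop:

```python
power_w = 2000            # assumed example: a 2 kW inverter
density_w_per_in3 = 50    # contest target: more than 50 W per cubic inch

max_volume_in3 = power_w / density_w_per_in3
print(max_volume_in3)     # 40.0 cubic inches at exactly the target density
```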
Google Reveals ‘Little Box Challenge’ Rules for Contest’s $1M Prize
Whoever succeeds in building a better, smaller inverter “will help change the future of electricity,” he wrote. “A smaller inverter could help create low-cost microgrids in remote parts of the world. Or allow you to keep the lights on during a blackout via your electric car’s battery. Or enable advances we haven’t even thought of yet.”
The competition calls for registered teams to submit a technical approach and testing application for their project by July 22, 2015, and up to 18 finalists will be notified of their selection for final testing at the testing facility in October 2015, according to the rules. Those 18 entrants will be required to bring their inverters in person to a testing facility in the United States by Oct. 21, 2015, for reviews and judging.
The grand prize winner is expected to be announced in January 2016, according to Google.
The idea of the Little Box Challenge was first previewed by Google in May, but few details were initially released, according to an earlier eWEEK report.
Today’s power inverters are cooler-sized boxes that are used in homes equipped with solar panels, according to Google. They convert direct current (DC) power generated by the panels to alternating current (AC) power that can be used in homes and businesses. They’re big and expensive relative to the systems they serve.
Improved inverters are needed because by 2030, roughly 80 percent of all electricity will flow through the devices and other power electronic systems, making them critically important for future electricity infrastructure and use, according to Google.
Google, which is a huge consumer of electricity for its modern data centers, offices and operations around the world, is always looking for ways of conserving energy and using renewable energy sources. The company has been making large investments in wind power for its data centers since 2010. Energy production is known to have a huge impact on Earth’s climate.
The company has a goal of powering its operations with 100 percent renewable energy in the future.
In January 2013, Google announced an investment of $200 million in a wind farm in western Texas near Amarillo, as the company continued to expand its involvement in the renewable energy marketplace. Google has also invested in the Spinning Spur Wind Project in Oldham County in the Texas Panhandle.
Other Google renewable energy investments include the Atlantic Wind Connection project, which will span 350 miles of the coast from New Jersey to Virginia to connect 6,000 megawatts of offshore wind turbines; and the Shepherds Flat project in Arlington, Ore., which is one of the world’s largest wind farms with a capacity of 845 megawatts. Shepherds Flat began operating in October 2012. | <urn:uuid:5f5c06f3-9dd9-421f-8d6d-cd1c0cefb52f> | CC-MAIN-2022-40 | https://www.eweek.com/cloud/google-reveals-little-box-challenge-rules-for-contest-s-1m-prize/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00501.warc.gz | en | 0.954764 | 1,452 | 2.640625 | 3 |
As Coronavirus COVID-19 makes its way across the world, individuals are doing their best to stay up-to-date on the latest outbreak locations and confirmed cases. Hackers have created new attacks based on the intense public interest in this virus.
One of the most common of these attacks is an email impersonation attack. In this attack, the criminal impersonates organizations like the UN World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) to trick users into opening a malicious email. Multiple government organizations have issued warnings against these attacks.
Email scams always follow the headlines
- Infect the user device and spread malware
- Steal login credentials by way of a phishing site or other phishing mechanism
- Collect donations for fake charities through malicious websites
The current pandemic has given scammers all those opportunities and more:
- Selling counterfeit versions of medical supplies that are in short supply
- Tricking users into buying fake cures
- Offering investment opportunities in companies claiming to have the cure
Email scammers will continue to find new ways to take advantage of the Coronavirus COVID-19 pandemic. If you have the proper email protection in place and you know what to watch out for, you can protect yourself from these email attacks.
Spreading the infection
There has been a real surge in the registration of new domains that use the word ‘coronavirus.’ Some of these will be put to good use, but many will be used by hackers for malicious purposes. These malicious websites might appear to offer news or advice on the coronavirus outbreak but are being used for phishing or to spread malware. Email impersonation scams often include links to this type of site.
Over the past few weeks, we have seen a number of attacks impersonating the World Health Organization. These phishing emails appear to come from WHO with information on Coronavirus COVID-19. They often use domain spoofing tactics to trick users into thinking these messages are legitimate.
These email impersonation attacks will include a link in the body of the email. Users who click on that link are taken to a newly registered phishing website.
Remote work and increased risk
As a preventative measure against the spread of Coronavirus COVID-19, many organizations are asking employees to work remotely from home until further notice. These remote workers may rely on email for communication with other employees as well as updates on workplace location and other issues related to the outbreak. This puts users in a state of expectation for email messages from HR or upper management on the subject of the virus. This expectation creates an increased risk for the company because the user is more likely to accidentally open a malicious email if they are expecting a similar legitimate message.
These factors, combined with the diminished ability to confirm the legitimacy of an email while working remotely, create a perfect environment for email scams.
Protecting your organization and employees
There are several ways to protect your company and employees from email scams, and they are based on employee education and security technology:
- Don’t click on links in email from sources you do not know; they may lead to malicious websites
- Be wary of emails claiming to be from the CDC or WHO. Go directly to their websites for the latest information.
- Pay special attention to email messages from internal departments or executives who send regular updates on the outbreak. Domain and display name spoofing are some of the most common techniques used.
- Never give personal information or login details in response to an email request. This is how a phishing attack leads to business email compromise.
- All malicious emails and attacks should be immediately reported to IT departments for investigation and remediation.
- Ensure that your organization has reliable virus, malware, and anti-phishing protection.
- Make sure employees receive up-to-date training on the latest phishing and social-engineering attacks.
Criminals are always looking for new ways to exploit the latest tragedies. Keep up on the latest scams by following alerts from CISA and similar sites.
- Secret Service Issues COVID-19 (Coronavirus) Phishing Alert
- Defending Against COVID-19 Cyber Scams
- Retailers beware: Coronavirus scams are popping up everywhere
Olesia Klevchuk is Principal Product Marketing Manager for email security at Barracuda Networks. In her role, she focuses on defining how organizations can protect themselves against advanced email threats, spear phishing and account takeover. Prior to Barracuda, Olesia worked in email security, brand protection, and IT research. | <urn:uuid:5f727b06-5a50-46d0-8c17-f01b66f81862> | CC-MAIN-2022-40 | https://blog.barracuda.com/2020/03/11/coronavirus-covid-19-fraud-companies-face-new-phishing-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00501.warc.gz | en | 0.930456 | 964 | 2.765625 | 3 |
SDN is about making networking more flexible and agile through programming network elements.
Such programming needs to be done in a standard way.
Hence, standardizing the southbound protocol that directly commands a network element to forward traffic is important. So is the northbound protocol through which different applications tell an SDN controller the WHAT and HOW of what they want to achieve from the network.
That is, not only does the app have to state its intent (the WHAT), it also has to specify the way to achieve it (the HOW).
A simple example of HOW and WHAT in normal networking would be trying to send traffic from Point A to Point B between the routers of a certain vendor.
- WHAT: Here would be the intent of sending traffic from A to B.
- HOW: Here would be to configure certain parameters (for example, using certain commands on CLI of certain vendor X) on certain transport (MPLS, optical).
It is clear that in this case, both WHAT and HOW would be needed to configure the traffic successfully.
It is apparent, further, that the user has to understand the CLI of a particular vendor to configure such a scenario, and has to learn a different CLI if the networking vendor or equipment changes.
This is not a flexible enough environment.
SDN, though, allows for abstractions that are not available in normal networking environments.
The highest level of such abstraction is achieved when an app in SDN specifies only the intent, not the way to achieve the desired outcome.
Networking based on intent has recently become the focus of the Northbound Working Group in Open Networking Foundation (ONF) and is generally referred to as Intent-based Networking (IBN). The working group has been formed to standardize models and interfaces based on IBN. (Other standard bodies are also working on standardizing IBN in one way or another.)
In IBN, the user or application has to specify the intent only. For example:
- I need a low latency path from A to B.
- I need a bandwidth of 40 Mbps from time A to time B, otherwise 100 Mbps.
- If jitter increases on the link, change the route to Path X, and once it returns to normal, bring traffic back to the original path.
The SDN controller, which is intelligent enough, then takes these commands and translates them into low level infrastructure commands and actions.
Therefore, this removes the pressure off the app to understand the underlying low level infrastructure details and opens up new flexibility for app developers.
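As a rough illustration only, and not any real controller’s API, the controller’s translation step can be sketched as compiling a declarative intent into vendor-specific commands. The intents, addresses, and CLI strings below are all invented:

```python
# Hypothetical sketch of intent translation in an SDN controller.
# Every device name, address, and CLI template here is invented for illustration.

VENDOR_TEMPLATES = {
    "vendor_x": "ip route {dst} next-hop {hop} metric {metric}",
    "vendor_y": "set route dest={dst} via={hop} cost={metric}",
}

def compile_intent(intent: dict, device_vendor: str) -> str:
    """Translate a WHAT-only intent into a vendor-specific HOW command."""
    # A real controller would run path computation here; we fake the result.
    if intent["goal"] == "low_latency_path":
        hop, metric = "10.0.0.2", 1      # pretend this is the low-latency hop
    else:
        hop, metric = "10.0.0.9", 10
    template = VENDOR_TEMPLATES[device_vendor]
    return template.format(dst=intent["dst"], hop=hop, metric=metric)

intent = {"goal": "low_latency_path", "src": "A", "dst": "10.1.1.0/24"}
print(compile_intent(intent, "vendor_x"))
print(compile_intent(intent, "vendor_y"))   # same WHAT, different HOW
```

The point of the sketch is that the app supplies only the WHAT (the intent dictionary); everything vendor-specific lives inside the controller.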
The advantages of intent-based networking
IBN is more scalable than non-intent-based approaches. Because the app developer does not need to be aware of the infrastructure environment, the flexibility to scale the app increases tremendously. Introducing new apps also becomes quick, since the developer can focus on the application itself rather than on how it interacts with the infrastructure.
Portable and vendor-agnostic
IBN is portable and vendor-agnostic. An app developed for one SDN environment can be easily ported to another SDN environment without the app developer having to be involved. This also means that an app developed for one SDN controller can be run on another vendor’s controller.
IBN will bring coherence to and remove conflicts from multiple apps. In the past, there was always a problem when multiple applications pushed commands to an SDN controller. There was always a risk of conflict as it was not possible to decode the low level changes that multiple apps caused in the network, leaving the controller unable to understand the intent of the applications.
It is clear that without intent-based networking, SDN would not be flexible, scalable, or portable. Without it, there will always be complications to deal with. And, in its absence, there will be a need to run SDN in a more controlled environment so that multiple apps do not interfere and create conflict with one another.
Therefore, for SDN to succeed, IBN is a must, not an option. | <urn:uuid:cda6c0e7-ece8-44e4-871a-83a08c5cd522> | CC-MAIN-2022-40 | https://logicalread.com/intent-based-networking-not-an-option-but-a-must-for-sdn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00501.warc.gz | en | 0.940023 | 837 | 2.984375 | 3 |
Pierluigi Paganini, Fabian Martins
Many people are inclined to leave the responsibility for personal banking security entirely with the banks. However, this is not a good strategy for many reasons, including:
In this article we begin by introducing various mobile banking techniques and their security rationale. We then explore various attacks against mobile banking systems, and what you can do to minimise your risk.
Mobile banking refers to any system that enables regular banking services through a mobile phone. A popular misconception is that we need a smartphone and a bespoke application to access bank services. In the most general case, the mobile phone is simply used as a type of computer terminal to access various banking services through wireless communication services, such as the Short Message Service (SMS), USSD, Near Field Communication (NFC, similar to RFID), and 3G data (Internet over mobile).
Some of these services (Pull Services) are explicitly requested by the customers, while others (Push Services) are sent by the bank to users under specific conditions, e.g. Alerts.
The types of service that can be securely delivered depends on the mobile phone’s features, the available channels provided by the telephony operator, the technical characteristics of the channel provided, and the desired balance between usability, reliability and speed of execution of operations. In this article we will explore four different ways mobile phones can perform banking transactions.
Some banks offer simple banking services through regular SMS. Let us imagine a user wishes to perform a Bank Account Balance Enquiry for the account ending in 981 that is associated with this registered mobile phone. In this case, the user sends an SMS with the message “A 981”, and receives an SMS with the account balance. This simple type of banking service can pose security problems for users because the account balance is transmitted in the clear, and because the account Identifier is stored in an SMS message that leaves itself open to exploitation if that user’s mobile phone is lost or stolen. So regular SMS can be useful for very simple query services, but may not be well suitable for making transactions such as money transfers, because this should also involve some form of secure authentication of the user.
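Command formats vary by bank, but as a toy illustration (following the “A 981” syntax above; everything else is invented), a server-side parser for such messages might look like this:

```python
def parse_sms_command(body: str) -> dict:
    """Parse a plain-text banking SMS such as 'A 981' (balance for acct ...981)."""
    parts = body.strip().split()
    if len(parts) == 2 and parts[0].upper() == "A" and parts[1].isdigit():
        return {"action": "balance_enquiry", "account_suffix": parts[1]}
    return {"action": "unknown"}

print(parse_sms_command("A 981"))   # {'action': 'balance_enquiry', 'account_suffix': '981'}
print(parse_sms_command("hello"))   # {'action': 'unknown'}
# Note: nothing here authenticates the sender. The phone number alone is
# trusted, which is exactly why plain SMS suits queries, not money transfers.
```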
In some regions, such as parts of Africa, USSD (Unstructured Supplementary Service Data) is a popular method for providing mobile banking transactions. USSD is essentially an unauthenticated service that employs the SIM card in the phone and the voice channel on mobile phones to exchange data with a banking server. It is frequently accessed via a predefined number like *144# that you can type into your mobile phone. The user interface is rendered as plain text on the screen of all mobile phones, even the simpler ones. USSD uses the GSM infrastructure and, technically, it is possible for an insider working for the telephony operator to intercept the communication while the data is travelling between the USSD gateway and the information server, and to fake transactions. To try to manage this risk, banks limit the value of transactions that can be performed over USSD. However, attacks have been known to occur.
Today, many banks are exploring the use of wireless Near Field Communication (NFC) for fast and convenient micro transaction services. NFC technologies are found in some bank smart cards, and some mobile phones. NFC banking transactions assume that proximity of the card to the Point Of Sale Device confirms intent to buy. Unfortunately, these technologies can be very easily abused. White-hat hackers (the good guys) have demonstrated that it is easy to communicate with NFC devices from “far away”. A person a few metres away can buy items using your NFC card/smart phone without your consent, and you are unlikely to have your account reimbursed.
To get the best mobile banking experience, some banks deploy user-friendly graphical applications designed specifically to run on selected mobile phones. This includes Java applications designed for simple mobile phones, or advanced applications designed for advanced smartphones like Android, iPhone and Windows Phone. Advantageously, with these types of applications, the bank provider can employ more secure communications using encrypted SMS (or encrypted Internet data) that cannot be “sniffed” by the telephony operator/attackers. In this case, the mobile banking experience can be a complete substitute for e-banking from your desktop.
As we can see, the security aspects to be considered depend on how a particular banking service has been implemented, and the ways in which it uses your mobile phone. The majority of banks in developed countries aim for reasonably good levels of security, up to the extent required by regulatory and legal requirements. Unfortunately, this is not uniformly the case (e.g. RFID/NFC enabled credit cards in America).
Of course, even when banks do a good job, you can completely undermine your security by giving out your username and password to family members or friends. In these cases you can be directly responsible for all fraudulent transactions, and rightly so.
Unfortunately, even if you keep control of your passwords, new generations of malware are now targeting the e-banking sector, and for this reason it is necessary to adopt a comprehensive banking security solution that can also be deployed to mobile phones and tablets. As discussed in our previous article, some of these new malicious agents are able to compromise user banking authentication processes on desktop computers with sophisticated techniques that are able to replace the human operator, masking their operations with mechanisms for hijacking the flow of information between banks and clients. Because it is unlikely that both your desktop computer AND your mobile phone will be compromised at the same time, the use of both devices together provides much higher assurances of security.
We explore briefly two mobile phone approaches frequently employed to prevent desktop banking attacks from succeeding:
In our opinion, given the security controls built into smartphone operating systems are now stronger than the security controls in desktop computers, at the moment, and in general, mobile banking applications on smart phones are probably more secure than regular internet banking over your web-browser.
However, as we discussed in our previous article “Smartphone Monitoring and Malware… Up close and personal…” the malware threat on mobile devices is starting to grow rapidly. You can trivially undermine mobile banking security by “Jailbreaking” your iPhones and “rooting” your Androids. Your mobile phone will then be at risk to the same type of advanced (banking) malware risks found on desktops…
However, even if you don’t jailbreak your phone, there are other risks. As regular readers of our articles know, cyber criminals are starting to exploit security vulnerabilities present in all mainstream smartphone operating systems. Today, we are observing the emergence of malware on “factory standard” smart phones designed to steal sensitive information, such as banking credentials. Some of these attacks deduce user interaction on the touchscreen by reading data from the accelerometers, or by exploiting other vulnerabilities in the smartphone operating system. However, for these rare attacks to be successful: a) they require you to have inadvertently installed some malicious applications on your phone, b) the mobile banking system that is being attacked has been designed in a very careless way and c) the attack must be very specialised against a specific bank.
Right now, it is much more likely that attacks against mobile banking in the short term will be similar to the simpler low-tech attacks against regular e-banking on your desktop. This means that you may be subject to phishing SMSs, you may receive a false e-mail with a QR-Code requiring you to install a new application or a new security feature provided by your bank, or a malware will require that you type your account number and password in order to steal it. Be Smart, and don’t fall for these simple tricks!
Today, you will likely be protected by laws that require banks to ensure there are adequate security measures in place in order to access their systems via mobile devices, even against phishing attacks. However, as happens with providers of any kind of service, some banks are more secure and have more sophisticated and comprehensive security measures in place than others. In particular, some advanced banks are beginning to actively monitor the health of the device that is accessing banking services. This means that if your device is not reliable, the bank may choose to restrict the portfolio of services you can use through it, or block your access through that device until it becomes healthy again. These monitoring systems are audited to ensure that the bank is not capturing personal private information, and that they only work to protect you.
So it is sensible and advisable that you keep the device you are using for banking in good health. Do not leave the responsibility for banking security only with the banks. Banking security is also your problem. After all, even if you are reimbursed in the event of fraud, resolving a security incident can be inconvenient, stressful and consume a lot of your time. Inform yourself, and if in doubt about any potentially fraudulent email or SMS, contact your bank.
Pierluigi Paganini, Security Specialist CISO Bit4ID Srl, is a CEH Certified Ethical Hacker, EC Council and Founder of
Security Affairs (http://securityaffairs.co/wordpress)
Prof. Fabian Martins, (http://br.linkedin.com/in/fabianmartinssilva) is a banking security expert and Product Development Manager
at Scopus Tecnologia, (http://www.scopus.com.br/) owned by Bradesco Group.
Ron Kelson is Vice Chair of the ICT Gozo Malta Project and CEO of Synaptic Laboratories Limited firstname.lastname@example.org .
Ben Gittins is CTO of Synaptic Laboratories Limited. email@example.com
David Pace is project manager of the ICT Gozo Malta Project and an IT consultant
Tel:: +356 7963 0221
ICT Gozo Malta is a joint collaboration between the Gozo Business Chamber and Synaptic Labs, part funded in 2011 by the Malta Government, Ministry for Gozo, Eco Gozo Project, and a prize winner in the 2012 Malta Government National Enterprise Innovation Awards. www.ictgozomalta.eu links to free cyber awareness resources for all age groups. To promote Maltese ICT, we encourage all ICT professionals to register on the ICT GM Skills Register and keep abreast of developments, both in Cybersecurity and other ICT R&D initiatives in Malta and Gozo. For further details contact David Pace at firstname.lastname@example.org . | <urn:uuid:1d1f5a9d-2a17-457a-84ce-bc39bd6964cc> | CC-MAIN-2022-40 | https://securityaffairs.co/wordpress/8073/security/understanding-risks-of-different-types-of-mobile-banking-transactions.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00501.warc.gz | en | 0.932337 | 2,199 | 2.5625 | 3 |
Dual combines a number and a string into a single record, such that the number representation of the record can be used for sorting and calculation purposes, while the string value can be used for display purposes.
Now when I think of the data type combining both text and a number value, I tend to think that this means a value can be the unique combination of the two. However, this is not the case.
The number in Dual is the core value, while the text is just the display value.
What does this mean?
This means a number value can only have one display value. While a text value can be the display value of multiple numbers.
I have made this mistake when trying to combine mixed granularity within dates. Say a client’s finance team closes the books and their general ledger at the end of the year, and then reports this at a year level. However, in the current year, they report at the month level.
To incorporate both sets of data in a chart I came up with the concept of doing Month-Year, with PY being a whole previous year.
Then a new requirement came and a particular budget was only at the year level. So we stuck with a similar concept, but this time CY.
Now this worked as is. However, I had made the Month field a Dual, and it took me far too long to realize that Dual('PY', 0) and Dual('CY', 0) would not provide the expected results. It would always default to PY, since there can only be one display value per number. Hopefully knowing this can save you some time in the future.
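The behaviour is easy to reproduce outside Qlik. Here is a toy Python model of Dual, with the number as the core value and the text as display only; it is a sketch of the concept, not Qlik’s actual implementation:

```python
class Dual:
    """Toy model of Qlik's dual: the number is the identity, text is display."""
    registry = {}  # number -> display text (one display value per number)

    def __init__(self, text, number):
        self.number = number
        # First registration wins: a number can only have one display value.
        self.text = Dual.registry.setdefault(number, text)

py = Dual("PY", 0)
cy = Dual("CY", 0)          # same core value 0 ...
print(cy.text)              # -> 'PY': the earlier display value sticks

months = [Dual("Feb", 2), Dual("Jan", 1)]
months.sort(key=lambda d: d.number)      # sorting uses the number...
print([d.text for d in months])          # ...display uses the text: ['Jan', 'Feb']
```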
Below are some definitions of commonly used terms. How many can you correctly define?
TCP/IP - (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program.
DPI – (dots per inch) is a printing term that describes the number of dots per inch that are used to create an image, and measures quality of printed images.
FTP – (File Transfer Protocol) is a standard Internet protocol that is the easiest way to exchange files between computers on the Internet.
HTTP - (Hypertext Transfer Protocol) is the set of rules for transferring files, including text, graphic images, sound, video, and other multimedia files, on the web.
OCR - (Optical Character Recognition) involves computer software designed to translate images of typewritten text or pictures of characters, into machine-editable text.
Ping - an Internet program that verifies the existence of an IP address.
POP3 - (Post Office Protocol 3) is a client/server protocol in which email is received and held for you by the Internet server. This standard protocol is built into most email products such as Outlook Express.
Protocol - a certain set of rules used in information technology for communication.
SMB - (Server Message Block) a protocol for sharing files, printers, serial ports, and communications which allows a user to access files at a remote server.
SMTP - (Simple Mail Transfer Protocol) is a protocol for sending and receiving email messages between servers.
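Several of the protocols above are plain, line-oriented text, which makes them easy to see in code. The sketch below builds a raw HTTP request and parses a response status line without touching the network (the host name is just an example):

```python
def build_http_request(host: str, path: str = "/") -> str:
    """Compose a minimal HTTP/1.1 GET request as raw protocol text."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

def parse_status_line(line: str) -> int:
    """Extract the numeric status code from e.g. 'HTTP/1.1 200 OK'."""
    version, code, *reason = line.split()
    return int(code)

request = build_http_request("example.com")
print(request.startswith("GET / HTTP/1.1"))   # True
print(parse_status_line("HTTP/1.1 200 OK"))   # 200
```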
So, how'd you do? | <urn:uuid:ad479cb3-b95e-4dbe-af14-2670947685b8> | CC-MAIN-2022-40 | https://www.jdyoung.com/resource-center/posts/view/116/how-many-it-terms-do-you-know-jd-young | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00501.warc.gz | en | 0.895512 | 345 | 4.09375 | 4 |
What Is the Intel Core Processor Series?
The Intel Core processor series, first launched in 2008, is Intel’s main processor line, noted for its reliance on multiple processor cores. An evolution of the earlier Core Duo line of processors for desktop and mobile computers, the series represents Intel’s primary CPU focus.
It’s not the only one, however: The Core series sits between a number of low-end options intended for niche or consumer use cases (the Atom, Pentium and Celeron processor lines) and the high-end Xeon line of server and workstation processors. Most x86 computers sold today utilize some form of Core processor.
It may seem like a lot of choice, but Hernán Quijano of Intel’s Desktop and Workstation Group says that the variety serves a wide range of workloads.
“There is a reason for every product lineup we have — specifically, in our products, for the workstation segment where we serve a lot of different users across many industries,” Quijano says.
What Are the Different Types of Intel Core Processors for Desktops?
Currently, there are four primary types of Intel Core processors that business consumers may see:
- Core i3: Generally sold with two or four processor cores, this low-end model excels at single-threaded tasks such as web browsing and basic office software.
- Core i5: Earlier iterations of the midrange Core i5 came with four processor cores, but more recent models, such as the Rocket Lake-S line, now include six cores and 12 processor threads, making them capable options for graphics-intensive workloads and eSports.
- Core i7: Modern variations of the Core i7 come with as many as eight cores and 16 threads. Core i7s are able to “turbo boost,” providing access to additional processor power as needed. Software development and video editing are two types of tasks that can benefit from a Core i7 or better.
- Core i9: The Core i9 made a big splash in the desktop market with the release of the i9-9900K in 2018, the first i9 processor targeted at consumer platforms. Most Core i9 models have eight cores, but the X-series line of processors offers models with as many as 18 cores.
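As a rough, vendor-neutral way to see the core and thread counts described above on your own machine, here is a short illustrative Python sketch; the `/proc/cpuinfo` portion is Linux-only and is simply skipped elsewhere:

```python
import os

# Number of logical processors visible to the OS (physical cores times
# threads per core on hyper-threaded chips); may be None on rare platforms.
logical = os.cpu_count()
print(f"Logical processors: {logical}")

# On Linux, /proc/cpuinfo distinguishes physical cores from threads.
try:
    with open("/proc/cpuinfo") as f:
        core_ids = {
            line.split(":")[1].strip()
            for line in f
            if line.startswith("core id")
        }
    print(f"Distinct core ids reported: {len(core_ids)}")
except FileNotFoundError:
    pass  # not Linux; only the logical count is portable
```

On a Core i7 with eight cores and 16 threads, for example, the logical count would typically print as 16 while the distinct core ids would number eight.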
Quijano notes that Intel’s premium desktop lines excel when maximum capabilities are the primary concern — and they can scale up beyond the Core series. Many professionals in entertainment and data sciences, he says, favor the Xeon line of processors, which in its latest iteration can have as many as 56 cores and run in multiprocessor configurations. Apple’s Mac Pro is an example of the Xeon in action.
“There is always a level of performance and expandability that can only be achieved with a powerful fixed or desktop workstation,” Quijano says. “And if you can’t take that system with you, we support technologies to access them remotely in a fast and secure way from your mobile workstation.”
Those purchasing desktop machines should determine whether integrated graphics are necessary; Core processors can be purchased with or without graphics built into the chip, which may eliminate the need for a discrete graphics processing unit (GPU), depending on the workload. Buyers should also consider the power draw of each chip: higher-tier processors generally require a larger cooling solution.
What Are the Different Types of Intel Mobile Processors?
The mobile versions of Intel’s Core processor line also follow the i3/i5/i7/i9 conventions. But there are distinctions among them, with the primary differentiator being power draw. Intel produces three main lines of mobile processors:
- Y Series: Also known as the Core M Series, the Y Series is designed to support ultralow power consumption (below 10 watts), starting with Core m3 and ending with the Core m7. A notable model that uses Y Series processors is the Google Pixelbook Go, a Chromebook.
- U Series: Perhaps the most common chip used in modern laptops, U Series processors consume 15 watts of energy, on average, making them a good match for balancing power consumption and thinness. These processors are generally sold as i3, i5 and i7 models. The Microsoft Surface is a good example of a computer that uses a U Series processor.
- H Series: These processors draw the most power among mobile chips, providing the strongest performance available in a portable setting. H Series chips are considered the top mobile processors, with i7, i9 and Xeon processors generally sold in H Series models. The Lenovo ThinkPad P1 and the HP ZBook line of laptops are two notable examples of machines that use H Series processors.
Most modern Intel mobile processors have integrated graphics, with the recent Tiger Lake line providing a significant upgrade through its Xe line of integrated GPUs. (Depending on their needs, some users may still want a dedicated GPU, such as one from NVIDIA.)
Beyond laptops, Intel’s mobile processors are a fundamental part of many small-form-factor computers, such as Intel’s own Next Unit of Computing (NUC) line.
Most people agree that getting a little exercise helps when dealing with stress. A new BYU study finds that exercise, particularly running, also helps protect your memory while you are under stress.
The study, newly published in the journal Neurobiology of Learning and Memory, finds that running mitigates the negative impact chronic stress has on the hippocampus, the part of the brain responsible for learning and memory.
“Exercise is a simple and cost-effective way to eliminate the negative impacts on memory of chronic stress,” said study lead author Jeff Edwards, associate professor of physiology and developmental biology at BYU.
Inside the hippocampus, memory formation and recall occur optimally when the synapses or connections between neurons are strengthened over time. That process of synaptic strengthening is called long-term potentiation (LTP). Chronic or prolonged stress weakens the synapses, which decreases LTP and ultimately impacts memory.
Edwards’ study found that when exercise co-occurs with stress, LTP levels are not decreased, but remain normal.
To learn this, Edwards carried out experiments with mice. One group of mice used running wheels over a four-week period (averaging 5 km per day), while another group was kept sedentary.
Half of each group was then exposed to stress-inducing situations, such as walking on an elevated platform or swimming in cold water. One hour after stress induction, researchers carried out electrophysiology experiments on the animals’ brains to measure LTP.
Stressed mice that had exercised had significantly greater LTP than stressed mice that did not run.
Edwards and his colleagues also found that, in a maze-running experiment testing memory, stressed mice that exercised performed just as well as non-stressed mice that exercised. Additionally, the exercising mice made significantly fewer memory errors in the maze than the sedentary mice.
The findings reveal exercise is a viable method to protect learning and memory mechanisms from the negative cognitive impacts of chronic stress on the brain.
“The ideal situation for improving learning and memory would be to experience no stress and to exercise,” Edwards said.
“Of course, we can’t always control stress in our lives, but we can control how much we exercise. It’s empowering to know that we can combat the negative impacts of stress on our brains just by getting out and running.”
Funding: The research was funded by the National Institutes of Health. | <urn:uuid:d680e1bd-a591-4c2e-8de0-a8e6b8dc9006> | CC-MAIN-2022-40 | https://debuglies.com/2018/02/17/running-helps-brain-stave-off-effects-of-chronic-stress/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00701.warc.gz | en | 0.953345 | 503 | 3.09375 | 3 |
If you own an Android device, your phone could soon be used against you. Research released in late September introduced a new tool that targets Android devices by taking control of the camera to surreptitiously snap photos that can be used to build 3D images of a user’s environment. This evolution of mobile malware could be used to facilitate burglaries and espionage, not to mention to violate users’ personal privacy.
The work, conducted by researchers from the Naval Surface Warfare Center and Indiana University’s School of Informatics and Computing, introduced this new type of malware, known as PlaceRaider. While mobile malware has largely been restricted to Trojans that target sensitive information stored on mobile devices, this new breed goes beyond previous high-end mobile attack tools, which could remotely turn on a device’s microphone to record and monitor a user’s conversations.
“Remote burglars can thus download the physical space, study the environment carefully and steal virtual objects from the environment such as financial documents, information on computer monitors and personally identifiable information,” the researchers wrote in their paper.
What makes PlaceRaider so insidious is that it would only require a user to inadvertently download a malicious camera app for it to work. From there it would rely on the fact that most users typically disregard permission warnings to grant the app the access it needs to do its work. Those permissions include the ability to access the camera, write to external storage and connect to the Internet – permissions that most camera apps already require and thus are unlikely to alarm a user.
The harmful app would also disable the audible shutter sound that cameras typically make when a photo is taken and deactivate the photo-preview feature, thereby eliminating two obvious hints that the camera was at work without the user’s direction.
What’s more, PlaceRaider gains access to readings from a mobile device’s accelerometer, gyroscope and magnetometer, giving an attacker the orientation of the device for each captured image.
The entire attack can be automated: the app runs in the background, the camera can be programmed to snap photos at desired intervals, and computer algorithms can be used to determine which of the collected information is relevant and which is not. This means that large quantities of sensitive information can be collected and sifted through at a rapid rate.
While this is a potentially troubling development for consumers, PlaceRaider also could be used as a surveillance tool that endangers military bases and sensitive business environments.
Perhaps the lone shred of good news to come from this report is a potential solution: the researchers wrote that operating systems could be adjusted to allow images to be captured only with the physical push of a button.
In recent years, project management has become more and more important. The reason is that very often for modern enterprises it is vital to respond quickly and flexibly to new requirements.
What is Project Management?
When a business has to address a completely new task, it is usually necessary for specialists from very different areas and corporate sectors to work together. This is where defining a project makes sense: it ensures a higher degree of flexibility, as it goes beyond day-to-day operations. Project work provides the benefit of finding common solutions rapidly and effectively.
On the other hand, the success of a project is not guaranteed, since its content is always new, in contrast to business operations, which are repetitive and have long been practiced in the company. Even when several approaches have been developed to achieve the project goal, the result is often hard to predict.
For these reasons, it is essential to have reliable project planning, a concrete commitment of the necessary resources and, most importantly, clear rules for the project work (see: templates, checklists and tips for project management).
All these activities are summarized under the term "Project management":
Project management encompasses organization, planning, control and monitoring of all tasks and resources required to achieve the defined project goals and objectives.
Project management is a managerial task and must be distinguished from the practical tasks of the operational project work.
What is a Project?
Frequently, a task is misleadingly classified as a "project". As a rule, a task should fulfil the following general conditions before it is completed in the form of a project. It:
- is unique, i.e. no routine task to be completed as part of the day-to-day operational business,
- is a novelty,
- is subject to constraints by timescale, funding and staff,
- has a complex technical and organizational structure,
- has clear performance targets with regard to the agreed specification and quality,
- is implemented in teamwork, generally by cross-disciplinary and cross-hierarchy project teams.
A project can be considered "successful" if the output as defined in the project order has been delivered within the scheduled time frame and budgets and with the planned resources.
But careful: Due to the fact that a project is unique and complex, it also bears the risk of not yielding the desired results.
However, the risk of failing can be considerably reduced through rigorous project management ensuring that
- the various people involved act in a co-ordinated way,
- the complexity of the tasks is reduced through structuring,
- the project contents are subdivided into meaningful units to ensure clarity and ease of handling,
- the goals are achieved and
- neither deadlines nor financial limits are exceeded.
Ultimately, very different factors contribute to the success of a project. These factors range from technical issues to organizational agreements and interpersonal aspects.
So as we can see from the above, a project cannot be executed as part of the usual day-to-day business, but has its own rules.
An appropriate project organization helps minimize frictional losses and delays in the project.
It serves to
- assign authority, tasks and responsibilities clearly,
- manage the co-operation, communication and co-ordination of all people involved in the project,
- ensure rapid response to changes in the general conditions or the project goals and objectives.
Organizational Structure (Definition of Project Tasks and Responsibilities)
When it comes to multidisciplinary and multi-department teams working together on individual tasks for a limited period of time, it is essential that the organizational structures in the company are flexible, e.g. to allow direct co-operation between specialists. The inclusion of people from different areas of expertise, organizational structures and hierarchical levels raises the issue of clearly defined project responsibilities. Moreover, a rapid flow of information and close communication is required.
When working together on the successful completion of a project, it is therefore necessary to determine right from the beginning who will participate in the project. It must be defined what their respective functions, responsibilities and competencies will be. Likewise, the information flow between the members of the project team must be determined.
Communication within the project team is often taken for granted; people tend to assume that this will function more or less automatically. However, guidelines may be helpful, here too, and may in the long run help to considerably reduce the amount of time which the members of the project team invest in the project.
It has proved useful to define distinct tasks and responsibilities, i.e. project roles in an organizational structure:
- The executive is the person or group of persons who allocates funding to the project. He is the key decision-maker and is ultimately responsible for the success of the project.
- The project board (often referred to as Steering Committee) represents the executive and, as the highest-level body, is responsible for providing guidance on the overall strategic direction of the project. It makes the most important decisions with regard to the goal and the scope of the project.
- The project team is in charge of the technical project work.
Operational Structure (Planning of the Project Structure)
It is a proven practice to break the project cycle down into different stages. This planning approach provides measurable interim results during project execution, thus lowering project risks considerably. Moreover, it is much easier to discuss and decide on the course to be set for the project if data regarding milestone results and deviations from plans are available.
The aim of stage planning is to make the project progress transparent.
Project Stages: Objectives and Tasks
Project Management Standards
The following three project management standards have been accepted worldwide:
- Project Management Body of Knowledge (PMBOK), Project Management Institute (PMI),
- PRojects IN Controlled Environments (PRINCE2), a standard which, like ITIL, was developed by the Office of Government Commerce (OGC)
- the IPMA Competence Baseline (ICB) of the International Project Management Association (IPMA).
John S Stewart. "Quick Guide to PRINCE2®". -- Blog IBPI (The International Best Practice Institute) www.ibpi.org, February 22, 2013. Retrieved February 27, 2013.
PRINCE2 (PRojects IN Controlled Environments). -- Wikipedia. Retrieved April 09, 2022.
IT Process Wiki: Project Management Templates
→ Proceed to: Templates, checklists and tips for project management | <urn:uuid:a35735b6-b110-4c46-8a95-9802e264c4cc> | CC-MAIN-2022-40 | https://wiki.en.it-processmaps.com/index.php/Project_Management | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00701.warc.gz | en | 0.937028 | 1,420 | 2.703125 | 3 |
To access organizational resources and sensitive information, attackers primarily target privileged user credentials. Privileged user accounts are the accounts of users with managerial rights or root privileges, as well as any accounts with elevated privileges. Efficient privileged user monitoring plays an important role in protecting an organization’s critical assets. In addition, it helps meet compliance requirements and reduces the number of both insider and external threats.
By means of correlation processes and behavior analysis, a user can be tagged as Attacker, Victim, or Suspicious.
The total number of incidents involving privileged user accounts over time is shown on dashboards. This view establishes the normal pattern of privileged account usage and highlights extraordinary or unexpected activities.
The dashboard also shows how many times each privileged account was used to log in within a given period.
Snapshots of user activity are provided on the dashboard, including credential data panels with account names, account categories, departments, and other relevant information.
To obtain more information on the activities of privileged users, correlation definitions can be created to detect critical actions. For instance, if a user tries to authenticate to an application from more than one host at the same time, a correlation search reporting the access can be created.
You can likewise monitor a privileged user uploading a large file to a domain such as “x.xxx”. Correlation searches can be created by combining access and credential information.
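As an illustration of how the multi-host login correlation above works, and not Logsign’s actual query syntax, here is a sketch in Python over normalized authentication events; the event shape and field values are assumptions for the example:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized auth events: (user, host, timestamp)
events = [
    ("admin_j", "host-01", datetime(2022, 3, 1, 9, 0, 5)),
    ("admin_j", "host-07", datetime(2022, 3, 1, 9, 0, 40)),  # second host, 35 s later
    ("svc_backup", "host-02", datetime(2022, 3, 1, 9, 5, 0)),
]

def concurrent_host_logins(events, window=timedelta(minutes=1)):
    """Flag users who authenticate from more than one host within `window`."""
    by_user = defaultdict(list)
    for user, host, ts in events:
        by_user[user].append((ts, host))
    alerts = []
    for user, entries in by_user.items():
        entries.sort()  # chronological order per user
        for (t1, h1), (t2, h2) in zip(entries, entries[1:]):
            if h1 != h2 and t2 - t1 <= window:
                alerts.append((user, h1, h2))
    return alerts

print(concurrent_host_logins(events))  # -> [('admin_j', 'host-01', 'host-07')]
```

In a real SIEM deployment the same logic would be expressed in the platform’s correlation-rule language and fed by its log-normalization pipeline rather than by an in-memory list.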
The results are shared with relevant IT managers, and e-mail & SMS alert mechanisms are formed.
It is well known that signature-based antivirus technologies have lost their effectiveness as the primary weapon in the fight against malware.
The evolution of digital threats obliges you to have qualified analysts on your security team. Threat detection needs human intuition to decrease the possibility of an attack going unnoticed.
Cyber attackers are continuously creating new and sophisticated tactics to carry out cybercrimes against organizations of all sizes. While the tactics themselves are becoming more advanced, cybercriminals continue to use one of the simplest technologies to deploy these attacks: email. Phishing emails remain the #1 threat vector resulting in cyber breaches, accounting for an average loss of $17,700 every minute. The 2020 Verizon Data Breach Investigations Report found that 94% of malware is delivered via email.
What is Phishing?
Let’s briefly go over the basics of “phishing” emails. Phishing is a cybercrime in which a target or targets are contacted by email, telephone, or text message by someone posing as a legitimate institution. Interacting with a phishing email can lead to infecting your computer and/or network with things like malware or ransomware. Other attacks aim to steal login credentials, personal information, or money.
There are different types of phishing, depending on the target of the attack and the tactics used. Spear phishing is a campaign that a cyber attacker purpose-builds to penetrate one organization, researching specific names and roles within the company. While regular phishing campaigns go after large numbers of relatively low-yield targets, spear phishing aims at specific targets using specialty emails crafted for the intended victim.
A whaling attack goes after senior players within an organization. Whaling attack emails are highly customized and personalized, and they often incorporate the target’s name, job title, or other relevant information gleaned from a variety of sources. This level of personalization makes it difficult to detect a whaling attack.
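One simple defensive heuristic against the lookalike sender addresses these attacks often use — shown here purely as an illustrative sketch, not as how any particular email-security product works — is to flag sender domains that sit within a small edit distance of a trusted domain (e.g. "paypa1.com" vs. "paypal.com"); the trusted-domain list below is invented for the example:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

TRUSTED = {"paypal.com", "microsoft.com", "defendify.com"}  # example list

def is_suspicious(sender_domain: str, max_dist: int = 2) -> bool:
    """Exact matches are trusted; near-misses are flagged as lookalikes."""
    if sender_domain in TRUSTED:
        return False
    return any(edit_distance(sender_domain, t) <= max_dist for t in TRUSTED)

print(is_suspicious("paypa1.com"))   # True  (one character swapped)
print(is_suspicious("paypal.com"))   # False (exact trusted match)
print(is_suspicious("example.org"))  # False (not similar to anything trusted)
```

Real mail filters combine many more signals (SPF/DKIM results, display-name mismatches, URL reputation), but the edit-distance check conveys why a single swapped character is easy for software to catch and easy for a hurried reader to miss.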
The Impact of a Phish
Cyber breaches can be disastrous to a company of any size, and it just takes one incident to cause an organization-wide breach. The average cost of a data breach in 2020 was $3.86 million. The costs include numerous indirect expenses such as operational downtime, staff and resource allocation to fix compromised systems and devices, and potential lost opportunities or customer churn caused by a soured reputation.
Phishing is Preventable
With proper training, all employees of an organization, including senior staff, can become aware of how to spot a phishing attempt and how to handle it once identified. Phishing emails are becoming increasingly sophisticated and more challenging to spot, so regularly sending phishing simulation emails, coupled with spot-training when someone clicks on a link or opens an attachment, is an effective defense.
Phishing simulation emails sent to employees are designed to mimic real-life phishing attacks in execution and style. These simulated attacks help guard your business against social-engineering threats by training your employees to identify and report them. Regular, but randomly sent phishing simulation emails help protect employees from falling victim to an actual phishing attack by keeping them alert and knowing what to be on the lookout for.
Want to know more about how to spot a phish? Check out this presentation How to Spot a Phish: Tips to Spoil Advanced Phishing Attempts. Defendify’s award-winning cybersecurity trainer and success manager, Shanna Utgard, will walk you through current phishing trends, their impact on organizations of all sizes, and ways you and your team can detect them.
Blog: A Complete Guide to the CEO Fraud Business Email Compromise Phenomenon
Blog: Social Engineering Training for Employees: The Framework
Blog: Looking Ahead to Social Engineering Trends of 2022
Blog: Fight the Phish: How to Identify and Handle Phishing Attempts
Webinar: How to Spot a Phish: Tips to Spoil Advanced Phishing Attempts
Today’s post on IT security from cyber attacks is shared courtesy of Todd Pouliot of Gateway Financial, LLC:
25% of Americans were cyberhacked between March 2014 and March 2015. The American Institute of CPAs announced that alarming discovery in April, publishing the results of a survey conducted by Harris Poll. Disturbing? Certainly, but the instances of pre-retirees being victimized were even greater – 34% of adults aged 55-64 reported having their data stolen or compromised within that period.
Small businesses are also commonly victimized. While identity theft has eroded consumer and employee trust in Target, Sony, Home Depot, Anthem and Wells Fargo, they will survive; a small business with limited IT resources may not. Symantec says that 30% of all targeted cyberattacks occur against firms employing fewer than 250 workers. The National Cyber Security Alliance says that the average small business that gets hacked has a 60% chance of closing its doors within six months.
Hackers will not put your household out of business, but they can steal the assets within your checking account or your workplace retirement plan in seconds. They can also take your Social Security number, email address, annual income data and more and sell it or retain it to hurt you in the future.
Cyberattacks within the financial world are especially frightening. Bank and brokerage accounts are respectively insured by the FDIC and SIPC, yet that insurance only protects a customer or client in cases of institutional failure. It does not cover cybertheft.
How can you strengthen your online defenses against cyberthieves? One way to do that is through two-factor authentication, or 2FA.
Corporations are starting to realize the vulnerability of a username-password combination. Given that so many usernames are derivations of real names, and given that many passwords are still mentally convenient, a hacker can access such accounts with relative ease.
If a company installs another security factor beyond the username-password combination – such as a voiceprint audio I.D. or a one-time numeric code texted to your phone to permit account access – hacking an account becomes much harder. This two-factor authentication may become the norm in the near future.
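The one-time numeric codes mentioned above are typically generated with the HOTP and TOTP algorithms standardized in RFC 4226 and RFC 6238. As a minimal sketch using only the Python standard library (not a production implementation):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30 s window."""
    return hotp(secret, int(time.time()) // period)

# First test vector from RFC 4226, Appendix D:
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Because the server and the user’s device share the secret and the clock, both can compute the same six-digit code independently, which is what makes the texted or app-generated code a genuine second factor.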
Too many Americans use simple passwords, sometimes at multiple websites. (Did you know that “password” is one of the most commonly used passwords?) Fortunately, free software has emerged to generate random passwords for different accounts. High-net-worth households are discovering Norton Identity Safe, RoboForm, LastPass, Dashlane and other apps capable of creating super-strong passwords.
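Under the hood, such tools do something like the following. This is a simplified sketch using Python’s `secrets` module, not the actual implementation of any product named above, and the site names are placeholders:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Cryptographically random password drawn from letters, digits, symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # secrets.choice uses the OS CSPRNG, unlike the predictable random.choice
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different strong password per site, stored by the password manager:
for site in ("bank.example", "broker.example"):
    print(site, generate_password())
```

The key property is that every site gets a long, unique, unguessable password, so a breach at one site cannot be replayed against another.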
Aside from using stronger passwords, avoid falling prey to the classic mistakes. When you use free Wi-Fi at a coffeeshop or airport or make a bid at an online auction site of questionable origin, you are taking your chances. The same goes for opening mystery email attachments and sharing private data on websites lacking the HTTPS protocol.
Will cybersecurity improve in the coming years? A widely adopted 2FA standard may make online theft much harder to pull off. Other defenses are being touted, some with more merit than others. Using a fingerprint as a password sounds good, but has a crippling drawback: you can change a password, but try changing your fingerprint. Some consumers are getting new EMV-equipped credit and debit cards that rely on microchips rather than magnetic strips; many of these are not the chip-and-PIN cards common to Europe, however. Instead, they are chip-and-signature cards. The second security factor is simply you signing your name. Cybersecurity analysts believe that while the chip-and-signature cards are better than the old technology, they fall short of chip-and-PIN cards.
True cybersecurity may prove elusive, but personal vigilance and password management software are good steps toward building a better defense against cyberattacks.
1 – aicpa.org/Press/PressReleases/2015/Pages/AICPA-Survey-One-in-four-Americans-Victimized-by-Information-Security-Breaches.aspx [4/21/15]
2 – wscpa.org/more/news/article/wscpa-blog/2015/04/23/think-you-are-too-small-to-be-a-target-of-cyber-crime-think-again-?Site=WSCPA#.VVExkpMsDCo [4/23/13]
4 – businessinsider.com/9-things-youre-doing-that-make-you-a-perfect-target-for-hackers-2015-5?op=1 [5/6/15]
Published Tuesday, Mar 22, 2022, by Ahmad El Hajj
Contrary to general belief, cyberattacks are much older than the internet itself. The first incident that can be qualified as a cyberattack took place nearly two centuries ago, when a pair of thieves hacked France’s telegraph communications system and stole important financial information. Since then, the evolution of telecommunication systems has been accompanied by a similar, if not faster, evolutionary trend in cyberattacks.
The two world wars and the long Cold War between the U.S. and the Soviet Union (USSR) led to more diversified attacks aimed at compromising the defense mechanisms of opponents. The advent of the internet in 1994, and the shift from analog to digital communications, gave cybersecurity the definition we currently use, that is, “the prevention of damage to, unauthorized use of, exploitation of, and—if needed—the restoration of electronic information and communications systems, and the information they contain, in order to strengthen the confidentiality, integrity, and availability of these systems,” as defined by the U.S. National Institute of Standards and Technology (NIST).
It is very evident that any tension between countries or groups or even larger scale military operations is accompanied by numerous cyberattacks. Although not as effective as armed interventions, they normally aim at disrupting the process of governmental organizations, media, telecom, and power distribution systems, among others. The recent war between Russia and Ukraine contributed to the resurfacing of cyberwarfare with all its known goals and mechanisms. Silent, easier to hide, less costly, and potentially more harmful, cyberattacks are the weapon of choice of ordinary tech-savvy citizens who cannot help on the battlefield.
A Long History of Mutual Attacks
Cyberattacks between the neighboring countries are not something new. The post-soviet era has witnessed a long history of mutual attacks that will certainly not end during the current war.
Russian attacks have been notably more powerful in disrupting vital sectors in Ukraine. Among them is the 2015 attack on Ukraine’s power grid infrastructure using the BlackEnergy trojan, which resulted in widespread outages. A broader attack was carried out in 2017 using the NotPetya malware. Exploiting vulnerabilities in Microsoft Windows-based systems, the ransomware would seize control of infected devices, relinquishing it only if a specific payment was made in cryptocurrency, namely Bitcoin. The attack affected various entities, including governmental, financial, and commercial institutions. The effect of the malware even crossed borders to hit major companies such as pharmaceutical giant Merck & Co. and global shipping company Maersk, among many others.
During the ongoing war, attacks have persisted, but surprisingly with lower severity and significantly lower magnitude. Most belonged to the family of distributed denial-of-service (DDoS) attacks affecting media and governmental institutions. More recently, severe wiper attacks, which aim at erasing data from the victim's servers, have been observed, mainly using the HermeticWiper and WhisperGate malware.
The low-key Russian cyber-invasion has even raised some eyebrows, especially since Russia hosts some of the most dangerous hacking groups, such as the Conti ransomware operators. It is widely believed that this restraint stems from the fact that the military invasion necessitates the use of Ukrainian infrastructure, such as the telecom network, for logistic operations in the neighboring territory.
That said, as the war drags on, sanctions against Russia are increasing and the Russian economy is notably crumbling. Cyberattacks could spiral out of control and affect vital but typically sidelined sectors such as healthcare.
On the opposite side, backed by a significantly weaker military, Ukrainian cyberattacks against Russia have been on the rise. The well-known Anonymous collective has initiated several attacks, mainly of the DDoS type, and even hijacked TV services for some time to raise awareness of the war's impact on Ukrainians. Ukraine's Vice Prime Minister has called for the establishment of an IT Army to lead the cyberwar against Russia. The newly formed group has targeted Russian government websites, including those of the Kremlin and the Duma.
A Larger Scale to the War?
The ongoing cyberwarfare is nothing but the continuation of a "Game of Thrones" between the planet's superpowers. Over recent years, Ukraine has been trying desperately to leave the Russian sphere of influence and move closer to Western countries, freeing itself from a subordination that lasted for years. The country's ties with other nations have given this local cyberwar a global status. If Russia's allies, namely China and Iran, enter the equation, the virtual altercation will reach an even higher magnitude.
The relatively small-scale Russian attacks send a clear, indirect message to the Western countries supporting the Ukrainian cause, led by the U.S. and the European Union: any intervention in Russia's perceived sphere of control can have severe consequences, and its cyber army would be able to disrupt or even control their infrastructure.
This insight is further corroborated by the recent history of allegedly Russian attacks on U.S. companies SolarWinds Corp., Microsoft, FireEye Inc., and CrowdStrike Holdings. The main vector was, again, Microsoft (in particular, some of its associated resellers); the attacks exploited the Office 365 email system and integration issues with the Azure platform. The resulting security breach affected several federal agencies and institutions.
Another driver of this planetary cyberwar is the backing Ukraine is receiving from Western tech companies to ensure proper and powerful cyber-defense mechanisms. In that light, the Russian attacks can be seen as a statement of authority and superior power in the digital world in general and cybersecurity in particular.
At the scale of Ukraine, these skirmishes could be a warmup for a more global cyberwarfare that hasn't properly started yet. In conjunction with its progress on the ground, Russia is waging a tactical war in the virtual world. Concurrently, Ukraine is leading the resistance on the battlefield and, more recently, in cyberspace. Both sides in the crisis are supported by continuous military and technological help from their respective allies. However, several questions remain wide open: When will the war of attrition extend to cyberspace? And what role would the allies on both sides play in its outcome? Time will undoubtedly tell.
Internet of Things (IoT) - Old Wine in a New Bottle
The idea is that not only your computer and your smartphone can talk to each other, but also all the things around you: from connected homes and cities, to connected cars and machines, to devices that track an individual's behaviour and use the collected data for new kinds of services.
The term Internet of Things is 16 years old. But the actual idea of connected devices had been around longer, at least since the 70s. Back then, the idea was often called “embedded internet” or “pervasive computing”. But the actual term “Internet of Things” was coined by Kevin Ashton in 1999.
While the Internet of Things is by far the most popular term to describe the phenomenon of a connected world, there are similar concepts that deserve some attention. Most of them overlap in meaning, but each has a slightly different definition.
M2M - The term Machine to Machine (M2M) has been in use for more than a decade and is well-known in the telecoms sector. M2M communication was initially a one-to-one connection, linking one machine to another. But today's explosion of mobile connectivity means that data can now be transmitted far more easily, via a system of IP networks, to a much wider range of devices.
Industrial Internet (of Things) -The term industrial internet goes beyond M2M since it not only focuses on connections between machines but also includes human interfaces.
Web of Things - The Web of Things is much narrower in scope than the other concepts, as it solely focuses on software architecture.
Industry 4.0 - It has the largest scope of all the concepts. Industry 4.0 describes a set of concepts to drive the next industrial revolution. It includes all kinds of connectivity concepts but also goes further to include real changes to the physical world around us such as 3D-printing technologies, new augmented reality hardware, robotics, and advanced materials.
To the public, IoT currently appears to be a mixture of smart home applications, wearables and an industrial IoT component. But actually it has the potential to have a much wider reach. When the connected world becomes reality, the Internet of Things will transform nearly all major segments – from homes to hospitals and from cars to cities.
Most of these segments carry the name “smart” like Smart Home or “connected” like Connected Health. Today’s major applications include:
Smart Home or “Home automation” describes the connectivity inside our homes. It includes thermostats, smoke detectors, light bulbs, appliances, entertainment systems, windows, door locks, and much more.
Wearables - Whether it is the Jawbone Up, the Fitbit Flex, or the Apple Smartwatch – wearables make up a large part of the consumer facing Internet of Things applications.
Smart City - The smart city spans a wide variety of use cases, from traffic management to water distribution, waste management, urban security, and environmental monitoring. Smart city solutions promise to alleviate the real pains of people living in cities today, such as traffic congestion, noise and pollution, and urban safety.
Smart grids - A future smart grid promises to use information about the behaviours of electricity suppliers and consumers in an automated fashion to improve the efficiency, reliability, and economics of electricity.
Connected car - Whether it is self-driving or just driver-assisted, connectivity with other cars, mapping services, or traffic control will play a part in next-generation cars. In-car entertainment systems and remote monitoring are also interesting concepts to watch.
Connected Health - (Digital health/telehealth/telemedicine) The concept of a connected healthcare system and smart medical devices bears enormous potential, not just for companies but also for the well-being of people in general: new kinds of real-time health monitoring and improved medical decision-making based on large sets of patient data are some of the envisioned benefits.
The Internet of Things is also expected to change business models in banking, insurance, government, farming, and beyond. These use cases, however, are not yet as advanced as those listed above.
As exciting as the Internet of Things world may be - from the promise of autonomous cars to the robotic butler in every home—there are still some serious technical challenges that organizations experimenting in the IoT must be aware of.
The protocols of interconnectedness are still evolving, and the industry is miles away from a standard. With so many companies working on different products, technologies and platforms, making all these devices communicate with each other is no small feat — seamless overall compatibility likely won’t happen.
Several groups are working to create an open standard that would allow interoperability among the various products. While their end goal is the same, there are some differences to overcome. Some devices require a fast and efficient protocol where reliability isn't important, while others prize reliability over speed. The state of flux in IoT protocols has created new challenges for ensuring the security of the devices that rely on them and for dealing with the massive amounts of data generated by an army of interconnected devices.
The next generation of wireless technology is widely known to be 5G. It is a product that is going to be widely used, and many companies are working hard to get their piece of the pie while they can. Even DISH Network is expected to utilize its massive spectrum holdings to create one of the first 5G networks. However, it's important to remember that new technology always comes with its own unique set of problems.
The last few years have been record-breaking in terms of natural disasters. Climate change scientists predict that this trend will only continue in the future. Knowing that, storm forecasting has become a critical skill. Unfortunately, 5G networks could prove to be a big problem for the storm forecasting industry.
If a 5G network is built in the United States at the currently proposed power levels, experts believe that satellites will have trouble reading the water vapor emissions they use to predict storm severity and movement. This could lead to botched evacuation orders, missed warnings, and reduced warning times, all of which would make natural disasters even more dangerous for the average citizen.
According to Neil Jacobs, head of the National Oceanic and Atmospheric Administration, a 5G network would set forecast accuracy back to what it was in 1980. In a recent testimony that Jacobs gave to Congress he remarked that a 5G network “…would result in the reduction of hurricane track forecast lead time by roughly two to three days.”
What’s even more alarming is that this isn’t just conjecture, these experts have already started running tests. In one test designed to mimic 5G interference, Hurricane Sandy was incorrectly predicted to head back out to sea. If that had actually happened, thousands more people would have been put in danger.
The U.S. government seems to be divided on the issue. The Navy and the Commerce Secretary have both voiced concerns, while the FCC has steadily moved forward with auctioning off spectrum for 5G. DISH Network is one of the largest holders of that spectrum.
Where do we go from here? That remains to be seen. Some lawmakers are already calling for a stay on activating a 5G network. “We write with a straight-forward request: Don’t allow wireless companies to operate in a 24 GHz band until vital weather forecasting operations are protected,” Senator Ron Wyden, of Oregon, and Senator Maria Cantwell, of Washington, said in a letter.
FCC Chairman Ajit Pai has a different view. “The commission’s decisions with respect to spectrum have been and will continue to be based on sound engineering rather than exaggerated and unverified last-minute assertions,” said Pai in a direct reply to the senators’ letter.
The truth is there is a lot of money at stake, as there always is with new technology. Everyone wants a faster wireless connection. The question now is at what cost.
Google has pledged to source renewable energy for its data centers and offices around the clock by 2030.
It currently sources 100 percent renewable energy for its facilities by purchasing renewable power equivalent to its annual energy use. But on an hour-by-hour basis, Google has to rely on fossil fuel sources when the sun isn't shining or the wind isn't blowing. This means that, on average, the company only runs directly on renewable power 65 percent of the time.
Because climate change is getting worse
"With the goal to reach 24/7 carbon-free energy by 2030, we can demonstrate that a carbon-free economy is possible," Urs Hölzle, SVP of technical infrastructure, said.
"Our data centers are large power consumers, and if we can achieve 24/7 carbon-free energy for our data center fleet, economically, we can demonstrate that carbon-free electricity grids are within reach."
Alphabet and Google CEO Sundar Pichai called the move "our biggest sustainability moonshot yet, with enormous practical and technical complexity.
"We are the first major company that's set out to do this, and we aim to be the first to achieve it."
He said that currently 24/7 carbon-free electricity is mostly unachievable, but that trends in technology and the right government policies meant that it will soon be within reach.
"To get there, Google will invest in approaches that make it possible for us to source reliable carbon-free energy in all locations, at all times of day," Pichai said.
"We’ll do things like pairing wind and solar power sources together, and increasing our use of battery storage. And we’re working on ways to apply AI to optimize our electricity demand and forecasting. These efforts will help create 12,000 jobs by 2025."
Additionally, the company said that as of this week it had eliminated, through the purchase of carbon offsets, all the emissions it produced before becoming carbon neutral in 2007. Those eight years of carbon debt are thought to amount to less than one year of Google's current emissions, due to its phenomenal growth.
It also planned to "enable" 5GW of new carbon-free energy across its key manufacturing regions by 2030 through investment, and help 500 cities and local governments globally reduce a total of 1 gigaton of carbon emissions annually by 2030 (Google will provide data aggregation tools and similar products, cities will still have to do the heavy lifting).
In the modern digital world, information and data are money — and organisations are facing a continuous bombardment from criminal groups and/or individuals. Worse, cyberthreats are only growing in type and number.
How then, can an organisation defend against cyber attacks? What can business leaders do to shore up the defences? And how can employees contribute in cyber defence?
Types of Cyber Attacks
From an attacker’s perspective, there are two possible initial steps to gaining access to an organisation. They can try to find a technical vulnerability which is already directly exposed, or they can convince someone who already has access to take some action that will help them.
Many attacks are a combination of these techniques, but the vast majority at least starts with some form of social engineering — the overall term used for convincing someone to take some action.
That action could be as simple as clicking on a link in an email or opening an attachment. This type of cyber attack, referred to as phishing, can lead to fraud or identity theft, business email compromise (also known as BEC), or to various technical attacks.
The Defence Against Cyber Attacks
There are some technical measures that can reduce the likelihood of this type of attack. Anti-malware gateways, proxy systems, anti-spam and anti-phishing solutions can all reduce the likelihood of phishing attempts reaching end users. Endpoint anti-malware software can reduce the likelihood of malware disrupting an organisation.
But no technical control can be perfect — the primary first line of defence is human.
So how do we make this defence as strong as possible? The answer: training. But how can an organisation implement a cyber awareness training program that is both efficient and effective?
KnowBe4 Service: A Cybersecurity Awareness Training Solution
When experteq went looking for a cyber awareness training solution to offer to our clients we quickly found that KnowBe4 was the clear leader in this space, recognised by Gartner, Forrester and many others.
Using KnowBe4 is effective, as it integrates a broad range of training material with active phishing tests. And it’s efficient, as relevant training can be automatically assigned; whether regularly to all staff or selected groups, to new starters, or to users who have clicked on a phishing link.
Dashboards show the overall level of risk in the organisation and the status of cybersecurity training campaigns and phishing tests.
Training material can be selected from general topics or can be specific. The style of material can be chosen from more traditional presentations to games and Netflix-like series, complete with posters. It can all be combined to help your staff become what KnowBe4 refers to as “strong human firewalls”.
experteq offers KnowBe4 as a managed service. Our clients have all the direct hands-on access they want, with complete support from experteq. While managing training campaigns is straightforward (though experteq can help with that as much as clients want), the areas where experteq support matters most are integration with Active Directory (or Azure Active Directory) and Direct Mail Integration for phishing tests, which bypasses most of the ever-changing controls used by Microsoft and other vendors so that test phishing emails arrive safely.
For experteq staff, we use a combination of training styles, with some set as mandatory and others non-mandatory, but promoted. The latter includes some of the video series, like “Restricted Intelligence” and “The Inside Man”, where staff can choose to watch five-minute humorous episodes whenever they have time.
Contact experteq for more information on the KnowBe4 service.
What is the difference between spear phishing and regular phishing?
Find out how spear-phishing works and what you can do to minimize your security risks.
The short answer is that "phishing" is an opportunistic email attack that targets many people with a false premise (or lie) to trick at least some of them into taking an action for the attacker's benefit; the attacker doesn't really know who will fall for it. A "spear-phishing" attack, by contrast, targets key individuals or groups much more precisely, to achieve a higher-value goal, and uses more elaborate deceptions that exploit the victim's trust and emotions based on what the attacker knows about them. This article explains in more detail how phishing and spear-phishing attacks differ.
Why are phishing and spear-phishing such a problem?
“Phishing” and “spear-phishing” are the most common causes of corporate security breaches. A Trend Micro report highlighted the fact that 91% of cyber attacks begin with a spear-phishing email.
You may already know that phishing messages try to trick people into clicking on links or attachments in emails. But you might be surprised at how much of a disconnect there still is between how people are being targeted in phishing attacks and how well they are able to defend against those attacks. According to Beauceron Security’s latest industry survey of over 20,000 individuals, 12% of the respondents did not know what the term “phishing” means.
It’s logical to assume from this finding that an even higher percentage of people probably don’t know what the term “spear phishing” means. So, if at least 10% of individuals aren’t even aware of what phishing is, the business world has a huge vulnerability, because security technologies aren’t able to detect and block every phishing threat.
In fact, data from Click Armor’s own “Can I Be Phished?” online phishing self-assessment indicates that 90% of individuals who have tried the challenge mis-identify at least one in ten potential phishing messages.
Opportunistic versus targeted emails
You can think of “regular phishing” as an opportunistic kind of attack, where the attackers don’t really care who the victims are; only that they can trick enough people to make it worth their while. “Spear-phishing” is more of a targeted attack, where the attacker has a specific goal in mind, and will spend more time setting up the attack to be successful against specific victims.
As an analogy, think of the difference between a hustler that opportunistically cheats people in Three-card Monte card scams, and the elaborate plots and setups in movies like “The Sting”, “Oceans Eleven” or “Inception”. (All three of these are rated in the top 10 “con artist” movies by “screenrant.com”.) Both methods use various kinds of deception techniques. But the higher value the target is, the more effort it takes to succeed, because they usually have stronger defenses in place.
In both regular phishing and spear-phishing messages, the subject lines and bodies almost always involve impersonation of somebody the target trusts. The difference between the two lies mostly in how much effort the attacker puts into their research and into creating situations where the target will not suspect they are being tricked.
How do you know if you’re being scammed in a simple phishing attack, or part of a more evil plot?
Opportunistic attacks are often easy to spot because something just doesn't look right, and the message usually comes unexpectedly. But it's not always easy to tell if a message is a spear-phishing attack, and sometimes it's almost impossible to know at the beginning of an encounter. A basic phishing attack may have a simple request ("Is this really you?") or a notification that plays on an emotion like fear ("This is your final notice!").
The basic phishing attacker doesn’t need to do much research to be successful, and they don’t really know which victims will fall for it. Phishing attackers usually play a numbers game, where some people will react without thinking, going to a website link or triggering a malware download; and eventually, the victim may enter sensitive information like a password or credit card number, or an unprotected computer will become infected with a virus or ransomware.
The attacker may, however, put significant effort into making a message appear to be from a commonly trusted sender, such as major websites like Facebook, Apple, or Google. These aren't really targeted at specific people, so you may see a notification from Apple even though you don't use any Apple products or websites. That would be an obvious clue that it is an opportunistic attack.
The message shown here is a typical, opportunistic phishing message.
This opportunistic phishing message looks pretty convincing.
Most phishing attacks – regular phishing and spear-phishing – have some recognizable characteristics in different elements of the message. The main elements of any phishing message are the “emotional appeal” or “hook”, the “sender information” and usually a “hyperlink” or “attachment” that triggers an exploit to infect the computer or try to gather information from the victim through an online form. Each element may have clues that can reveal something useful to you in analyzing whether a message is suspicious or not.
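One hyperlink clue can even be checked mechanically: visible link text that looks like a URL but points at a different destination. The Python sketch below illustrates the idea; the message body, domains, and URLs are hypothetical, and the parsing is deliberately simplified rather than a production-grade HTML parser.

```python
import re

# Hypothetical phishing message body; all domains and URLs are invented.
HTML_BODY = ('<p>Your account is locked. Verify now: '
             '<a href="http://appleid.example-verify.ru/login">'
             'https://appleid.apple.com</a></p>')

LINK_RE = re.compile(r'<a\s+href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

def domain_of(url):
    """Return the host part of a URL, ignoring scheme, path, and 'www.'."""
    host = re.sub(r"^[a-z]+://", "", url.strip(), flags=re.I).split("/")[0].lower()
    return host[4:] if host.startswith("www.") else host

def suspicious_links(html):
    """Flag links whose visible text looks like a URL but whose real
    destination is on a different host -- a classic phishing clue."""
    flagged = []
    for href, text in LINK_RE.findall(html):
        text = text.strip()
        if text.lower().startswith(("http://", "https://", "www.")):
            if domain_of(text) != domain_of(href):
                flagged.append((href, text))
    return flagged

print(suspicious_links(HTML_BODY))
```

A mismatch like this does not prove a message is malicious, but it is exactly the kind of hyperlink clue worth pausing on before clicking.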
Why are spear-phishing messages harder to detect?
There are usually some people who will fall for a basic phishing attack in most organizations, especially if there isn’t a robust, continuous phishing awareness program. Good security software will stop most basic phishing messages based on commonly known traits, but not all of them, since attackers change methods to avoid being detected. In fact, attackers often test their latest messages against the best security software until they are able to get through undetected in their own tests.
You can usually analyze most phishing attacks to find clues in their sender information, as well as hyperlinks or attachments. Sometimes their message content or subject lines will also provide clues about whether or not the message should be treated as suspicious.
In spear-phishing, the main elements of a given email message are the same, but the attackers have a much better idea of whom they are targeting with more elaborate phishing messages, and a much higher expectation of what they want to gain from a successful attack. The key is that a targeted attacker can leverage the victim's trust and emotions in ways that make them less likely to actually check for the clues. The clues can also be harder to spot than in basic phishing messages, especially if the message resembles a normal communication.
How do spear-phishing attackers make their messages so convincing?
Just like in popular bank heist movies, there are often multiple steps in an attack, and multiple intermediate targets. The first step in a spear-phishing attack rarely even involves the attacker doing anything suspicious. Since the spear-phishing attacker wants to appear believable to you (assuming you are their target), their first step is to do some research.
We all leave a trail of information behind in almost everything we do, and that’s where the spear-phishing attacker will start. The easiest things for them to learn about you might be from Google searches or searches on social media, to see if your profile is public, and if you have made any public posts, shares or likes.
These bits of information that are publicly accessible can tell an attacker some basic information about you, and sometimes, if you “overshare” or are a somewhat public figure, they can learn a lot about you and the people you normally interact with. People don’t expect that anyone would bother harvesting their social media activities. But that’s exactly what attackers do very easily, and very well.
This kind of research is called “profiling”, and the information gathered this way is called “Open Source Intelligence” or OSINT, for short. In decades past, before social media existed, profiling was often done by attackers who literally searched through the trash bins outside of office buildings. This method was called “dumpster diving”, and even today, dumpster diving is still used to gather information that will help spear-phishing attackers in targeting their victims based on real information they have found that was unprotected. This is why shredding of sensitive office documents is so important.
Convincing spear-phishing message complaining of an overpriced quote
I once received a very well-crafted message that clearly targeted my business, accusing me of having tried to rip off the sender by quoting him higher prices than were in a price list I had supposedly sent him. The documents and links used in the message had the name of my business and its proper domain in the "anchor text" – an important term we teach you about in Click Armor's phishing awareness training. The tip-off for me was that I had never had a price list for that business, and therefore could never have sent him one! So the attacker hadn't done quite enough research to convince me to click on the link, but I would still call the message shown here a very good spear-phishing attack.
Softening you up with a pretext
Once an attacker has enough information about the people their target trusts, they may create intermediate phishing attacks on those individuals, to build a more elaborate scheme or “pretext”. For example, if the attacker can learn about a company that supplies the victim, they can identify people in the supplier organization, and try to learn more about the victim, or get an introduction to the victim from somebody they trust.
This is called a “stepping stone” attack, and it works very well because most people don’t believe that they are giving the person any information of value that could be abused, or that an attacker would go to so much trouble, just to get an introduction. By the time the target victim gets the spear-phishing attack in this case, they are likely to be acting mostly on the trust that they have in the people whom they “believe” they are dealing with.
Whaling, ice-phishing and watering-holes
There are also variations on spear-phishing attacks, such as “whaling”, “ice-phishing” or “watering-hole” attacks. Whaling attacks target senior executives who may have higher levels of authority and access than other staff, making it more worth-while for an attacker to do research and create more elaborate pretexts.
Ice-phishing attacks take advantage of corporate webmail accounts that have been hacked, just to get access to an account inside the corporate perimeter. The attacker then uses the compromised account to impersonate people within an organization, allowing them to launch attacks on larger targets within the organization.
Watering-hole attacks are messages posted in forums dedicated to subjects that the target victim is known or likely to visit frequently, in hopes of enticing them to engage in discussions to begin a trusted relationship that can later be exploited with spear-phishing emails.
Spotting any kind of phishing attack requires a disciplined routine
The rules of basic phishing email analysis remain basically the same, regardless of whether you are facing a “phishing” attack, or a “spear-phishing” attack. It’s just that you are less likely to follow those rules with somebody you trust. This is why it is extremely important to be always on the lookout, and instinctively ready to defend against messages that seem a little unusual or inconsistent, or from people you have only recently connected with.
Sometimes it is virtually impossible to spot a well-crafted spear-phishing attack, and you might wonder, “what’s the point in trying, if they are so good, and nothing is going to stop them?” But the important thing to recognize is that when 20% of employees don’t know what phishing is, let alone how to analyze a potentially suspicious message, attackers don’t really need to go to that much trouble to create a convincing attack.
So, as long as you are in the habit of always checking, you can spot many of them, which will reduce your risk a great deal. Understanding how attackers use social engineering techniques like profiling and pretexting is important for catching the more elaborate attacks where you might be a target, or you might just be a stepping-stone to a larger “spear-phishing” attack. | <urn:uuid:078bb7bd-c33b-4331-8818-021661c16f8c> | CC-MAIN-2022-40 | https://clickarmor.ca/spear-phishing-vs-regular-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00101.warc.gz | en | 0.959371 | 2,708 | 2.6875 | 3 |
Cybersecurity and issues related to hackers, malware, and ransomware are becoming so commonplace across so many sectors that it would be impossible for the media to report on every incident. The recent high profile hacks of utility providers, the Colonial Pipeline, and Solarwinds have garnered national attention, and the need for robust cybersecurity reform has become less an issue of personal identity theft and inconvenience and one of grave national security.
Who are these people who work to penetrate networks, steal data, or otherwise cause disruption? What is it that motivates someone to bring a network to its knees? Here are the top five reasons that hackers hack.
1. Social justice and “Hacktivism”
People that use their skills as hackers to disrupt for ideological or social justice purposes are referred to as “hacktivists.” These hackers are motivated by political issues that they feel are unjust or require greater scrutiny. While most hacktivists operate anonymously, choosing to draw attention to their causes as opposed to themselves, others enjoy notoriety and brand name recognition.
The most well-known hacktivist group calls itself “Anonymous.” The group has claimed responsibility for well-known attacks against the Church of Scientology and the Westboro Baptist Church for their views and actions regarding social issues. Anonymous has also engaged in many smaller attacks such as one against Florida’s Orlando Chamber of Commerce in retaliation for the city arresting individuals providing food to the homeless as well as widespread hacks and disruptions of police departments and other authoritative entities that the group has found to be counter to their sensibilities.
2. Hackers can make big money
It’s no secret that there is money to be made in successfully hacking a company or entity known to store valuable information. Some hackers collect this information and sell it online to those wishing to commit fraud by filing fake claims or charging stolen credit cards. Some may purchase the data in order to break into other networks or accounts, thereby accessing even more information.
Currently, ransomware is the most popular choice for cybercriminals who wish to cash in on stolen data. In a ransomware attack, a hacker will paralyze a company’s or entity’s network. They will then demand a ransom in exchange for restoring access to the system, threatening to release any sensitive data within if their terms are not met.
In the first quarter of 2021 alone, more than 290 enterprises were attacked by ransomware gangs, bringing in over $45 million in payments. Recent notable ransomware attacks have been carried out against the Colonial Pipeline and CNA Financial Corp.
3. State sponsored cyber warfare/espionage
Cyberspace, it used to be said, is the “battlefield of the future.” Current events pertaining to state-sponsored hacks and attacks by groups associated with government entities imply that the future is now.
With governments relying so heavily on information technology for both operation and communication, espionage has gone online. State-sponsored hackers from both Russia and China have been implicated in some of the biggest hacks on US-based companies. The ability to carry out crippling attacks against power grids, utility suppliers, media organizations, and food supply chains is now at the top of the list with regard to acts of war. Not only can these types of attacks have serious, sometimes deadly, consequences to civilians and government officials alike, but they demoralize the opponent by revealing weaknesses in their infrastructure.
Last year’s hack of SolarWinds is believed to have been orchestrated by hackers working on behalf of the Russian intelligence service. The intrusion allowed Russian spies to peruse through top level American government computer systems for months before the attack was discovered. Many experts feel that this attack, while effective, could quite possibly be a precursor to future attacks that are less focused on espionage and more so on disruption and damage.
4. Challenge and notoriety among other hackers
Another factor motivating hackers is ego.
The competitive spirit appears across all activities, and hacking is no different. Some hackers break into or disrupt networks that are known to be well-fortified for the sake of bragging rights or fame. Some hackers look to make a name for themselves by hacking the unhackable, or simply defacing websites or networks in order to gain notoriety.
While this motivation is not as malicious as intentionally looking to steal information, it is nonetheless destructive. Victims of this type of “sport” hacking are left to pick up the pieces in the same way as those hacked for their data.
5. Hackers out for revenge
Some hacks are carried out at the hands or behest of employees or disgruntled former employees. While these attacks are not typical of hacks where security is overcome (because the employees have login credentials and administrative rights), they can be destructive.
Employees or contractors who feel that they have reason to cause harm to their employer may commit these attacks on their own, or they may give or sell login access to a third party.
What can be done about hackers?
Hackers come from many backgrounds, attack using different methods, and with a variety of motivations. They can all cause significant damage to their targets, are rarely caught, and are growing in both boldness and frequency.
Top levels of government and international banking institutions aren’t the only targets. In fact, one in five small businesses will fall victim to a cyberattack. Of those compromised, more than half are never able to recover.
Here are some cybersecurity steps you can take to help ensure your company is not an easy target for hackers.
Keep everything up to date

Outdated software is one of the easiest ways in for attackers, so apply security patches and updates promptly.
Use a VPN
Using a virtual private network, or VPN, is a great way to keep your internet activity away from prying eyes.
Educate your staff on cybersecurity best practices
A surprising percentage of cybercrime only requires someone opening an attachment in an email. Be sure your staff keeps strong passwords and never reveals their login credentials.
Invest in a cybersecurity audit
Cybersecurity audits allow organizations to evaluate the practices they have in place and make sure that they are being proactive with regard to their data security and compliance. Consider it a checkup for the health of your security!
Don’t keep old login credentials active
Even if an employee leaves the company on good terms, be sure to be proactive and change their login credentials to prevent them from continuing to access data that they are no longer authorized for.
- How To Make $1 Million From Hacking: Meet Six Hacker Millionaires
- What is hacktivism?
- 134 Cybersecurity Statistics and Trends for 2021
- The motivations of a hacker
- More than 290 enterprises hit by 6 ransomware groups in 2021
- Colonial Pipeline cyber attack
- A simple explanation of how the SolarWinds hack happened and why it’s such a big deal
- A Hacker Tried to Poison a Florida City’s Water Supply, Officials Say
- Hackers behind JBS ransomware have new extortion tactic
- SolarWinds: How Russian spies hacked the justice, state, treasury, energy and commerce departments
- Why every small business should care about cyberattacks, in 5 charts | <urn:uuid:5e0163da-8bee-44f1-8d2e-c6b9b9f09b8e> | CC-MAIN-2022-40 | https://news.networktigers.com/featured/why-do-hackers-hack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00101.warc.gz | en | 0.949068 | 1,449 | 2.8125 | 3 |
Businesses could be forgiven for pleading tech fatigue as they try to keep up with the fast-developing worlds of artificial intelligence, internet of things (IoT) and cyber. But there’s another breakthrough on the horizon that is demanding attention – quantum technology. Quantum promises improvements to a huge range of technology, including:
- More reliable, tamper-proof navigation and timing systems.
- More secure communications that signal when data is intercepted.
- More accurate imaging, from brain scanners to autonomous car sensors that see around corners.
- More powerful computing to handle more data, faster.
Some of these applications, like quantum computing, are a long way off. But others could be reality within a year or two. The use of these quantum technologies could simply be an evolution, or it could be truly disruptive.
That’s why you need to understand what quantum technology is, what impact it could have and what opportunities it could offer. You might decide to exploit the opportunities right away, you might decide to watch and wait, or you could investigate and decide that quantum won’t affect your industry. But ignoring it isn’t an option.
The word “quantum” will probably summon up images of Stephen Hawking and Albert Einstein. And yes, it is about sub-atomic physics. But you don’t need to grapple too much with the actual science, in the same way that you don’t need to grapple with how a semiconductor works to understand the potential of a smartphone. It’s enough to know that it’s about harnessing what is happening inside atoms.
Technology based on quantum physics has been in our lives for 50 years, whether it’s nuclear power or those semiconductors in our computers and phones. The difference now is that scientists can understand and control the inner workings of atoms more closely. And they can do it in ways that have commercial applications.
The two biggest areas they are working on are (here comes the science) entanglement and superposition.
Entanglement

This is the principle that two atoms can still be connected, or entangled, after they have been separated. And if you change one of them, you affect the properties of both.
One possibility this opens up is a new kind of encryption key for more secure communications. A little like the seal on a jam jar, the key would clearly reveal if data is interfered with after sending.
Superposition

This is the idea that particles or objects can be in two states at once. Scientists have long understood that electrons can exist as particles and waves, but now they can slow the movement of atoms almost to “absolute zero”.
These “cold atoms” are highly sensitive to motion and magnetic fields. And that, in turn, opens the way to much more accurate sensors and navigation systems.
Investing in quantum technology
The UK government has invested about £400m in its quantum initiative, with a large proportion going to four university “hubs”. It also recently launched an inquiry into the opportunities of quantum technology. The aim is to make the UK a leader in the field through research and quickly creating practical applications for new discoveries.
The businesses with the keenest eye for the opportunity will look to team up with these universities to commercialise their research. To do that, they need to assess what quantum technology means for them and their sector, and understand its applications to build an investment case. | <urn:uuid:f8c837b3-3ef7-4f67-8c5e-eb6d4f816fd6> | CC-MAIN-2022-40 | https://www.computerweekly.com/opinion/Taking-the-quantum-leap-What-is-quantum-technology-for-business | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00101.warc.gz | en | 0.939309 | 732 | 2.65625 | 3 |
A new study finds a vast majority of Americans trust neither the government nor tech companies with their personal data.
What does it look like when a society loses its sense of privacy?
In the almost 18 months since the Snowden files first received coverage, writers and critics have had to guess at the answer. Does a certain trend, consumer complaint, or popular product epitomize some larger shift? Is trust in tech companies eroding—or is a subset just especially vocal about it?
Polling would make those answers clear, but polling so far has been…confused. A new study, conducted by the Pew Internet Project last January and released last week, helps make the average American’s view of his or her privacy a little clearer.
And their confidence in their own privacy is ... low.
The study's findings—and the statistics it reports—stagger. Vast majorities of Americans are uncomfortable with how the government uses their data, how private companies use and distribute their data, and what the government does to regulate those companies.
No summary can equal a recounting of the findings. Americans are displeased with government surveillance en masse:
- According to the study, 70 percent of Americans are “at least somewhat concerned” with the government secretly obtaining information they post to social networking sites.
- Eighty percent of respondents agreed that “Americans should be concerned” with government surveillance of telephones and the web.
They are also uncomfortable with how private corporations use their data:
- Ninety-one percent of Americans believe that “consumers have lost control over how personal information is collected and used by companies,” according to the study.
- Eighty percent of Americans who use social networks “say they are concerned about third parties like advertisers or businesses accessing the data they share on these sites.”
And even though they’re squeamish about the government’s use of data, they want it to regulate tech companies and data brokers more strictly: 64 percent wanted the government to do more to regulate private data collection.
Since June 2013, American politicians and corporate leaders have fretted over how much the leaks would cost U.S. businesses abroad.
“It’s clear the global community of Internet users doesn’t like to be caught up in the American surveillance dragnet,” Senator Ron Wyden said last month.
At the same event, Google chairman Eric Schmidt agreed with him. “What occurred was a loss of trust between America and other countries,” he said, according to the Los Angeles Times. “It's making it very difficult for American firms to do business.”
But never mind the world. Americans don’t trust American social networks. More than half of the poll’s respondents said that social networks were “not at all secure.” Only 40 percent of Americans believe email or texting is at least “somewhat” secure.
Indeed, Americans placed the most trust in communication technologies where some protections have been enshrined in law (though the report didn’t ask about snail mail). That is: talking on the telephone, whether on a landline or cell phone, is the only kind of communication that a majority of adults believe to be “very secure” or “somewhat secure.”
(That may seem a bit incongruous, because making a telephone call is one area where you can be almost sure you are being surveilled: The government has requisitioned mass call records from phone companies since 2001. But Americans appear, when discussing security, to differentiate between the contents of the call and data about it.)
Last month, Ramsey Homsany, the general counsel of Dropbox, said that one big thing could take down the California tech scene.
“We have built this incredible economic engine in this region of the country,” said Homsany in the Los Angeles Times, “and [mistrust] is the one thing that starts to rot it from the inside out.”
According to this poll, the mistrust has already begun corroding—and is already, in fact, well advanced. We’ve always assumed that the great hurt to American business will come globally—that citizens of other nations will stop using tech companies’s services. But the new Pew data shows that Americans suspect American businesses just as much. And while, unlike citizens of other nations, they may not have other places to turn, they may stop putting sensitive or delicate information online. | <urn:uuid:e3306d61-45b1-4ec1-a9e5-cdead3ee2ce1> | CC-MAIN-2022-40 | https://www.nextgov.com/cybersecurity/2014/11/why-nsa-surveillance-threatens-silicon-valley/99236/?oref=ng-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00101.warc.gz | en | 0.953398 | 936 | 2.515625 | 3 |
Created in 2009 by an unknown source (using the name “Satoshi Nakamoto”) as the first type of cryptocurrency or digital cash, Bitcoin does not have a single bank or administrator to act as a third party in digital transactions. This makes for an easy and secure way to transfer money online from person to person.
Just like cash, Bitcoin can be traded for goods and services with vendors who accept it. Bitcoin exchanges are tracked and stored on a public ledger referred to as blockchain. This system links all peer-to-peer transactions and ensures that they are legitimate.
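The “linking” on the ledger can be pictured as a hash chain: each block’s identifier includes the hash of the block before it, so altering any past transaction changes every hash after it. The sketch below is a deliberately simplified Python illustration of that idea, not Bitcoin’s actual block format.

```python
import hashlib

def block_hash(prev_hash: str, transactions: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + transactions).encode("utf-8")).hexdigest()

# Build a tiny three-block chain (transaction strings are made up).
genesis = block_hash("0" * 64, "alice->bob:1.0")
block2 = block_hash(genesis, "bob->carol:0.5")
block3 = block_hash(block2, "carol->dave:0.2")

# Tampering with an early transaction changes every hash after it,
# so the fork is detectable by anyone holding the original chain.
tampered_genesis = block_hash("0" * 64, "alice->bob:9.0")
tampered_block2 = block_hash(tampered_genesis, "bob->carol:0.5")
assert tampered_block2 != block2
```

Real blockchains add proof-of-work, signatures, and a peer-to-peer consensus protocol on top, but the tamper-evidence comes from this chaining.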
The popularity of Bitcoin has inspired the creation of other virtual currencies, but Bitcoin remains the most well-known.
It should be noted that the Securities and Exchange Commission (SEC) has not yet approved a bitcoin exchange-traded fund (ETF) as of June 19th, 2019. | <urn:uuid:e5b35618-2feb-45d1-9ece-8707f7b65767> | CC-MAIN-2022-40 | https://aragonresearch.com/glossary-bitcoin/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00101.warc.gz | en | 0.959203 | 179 | 2.8125 | 3 |
A hash code is a fixed-length code that can be used to verify integrity. An algorithm known as a hash function can be used to create hash codes for messages/files. Ideally, these algorithms have several unique properties.
It is impossible to generate a message from a hash code. Hash functions are known as trap doors because they only go one way: message->hashcode
Small changes in a message lead to a big change in the hash code.
It is extremely unlikely that two messages will generate the same hash code. Should this happen, it is known as a hash collision.
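These properties are easy to see with any modern cryptographic hash function. The sketch below uses Python’s standard hashlib with SHA-256 purely as an illustration (it does not model any particular bank-statement format): the digest has a fixed length, and a one-character change to the message flips most of the output.

```python
import hashlib

def sha256_hex(message: str) -> str:
    """Return the fixed-length SHA-256 digest of a message as hex."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

h1 = sha256_hex("Pay supplier 100.00 EUR")
h2 = sha256_hex("Pay supplier 100.01 EUR")  # one character changed

assert len(h1) == 64 and len(h2) == 64  # fixed length, regardless of input
assert h1 != h2                         # a tiny change gives a new hash

# The avalanche effect: most of the output characters differ.
differing = sum(a != b for a, b in zip(h1, h2))
print(f"{differing} of 64 hex characters differ")
```

A bank can publish the hash of an exported statement file; the receiver recomputes it and compares, which detects any modification in transit.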
Now my questions:
- Customers/Partners: Has anyone implemented this hash coding (e.g. for exporting bank statements)?
- IFS R&D: Is there or will there be functionality in IFS to support hash coding? | <urn:uuid:c20186c8-7f78-461a-86fd-8157b611aaa5> | CC-MAIN-2022-40 | https://community.ifs.com/finance-financials-42/hash-code-in-banking-files-to-prevent-fraude-11565?postid=42261 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00101.warc.gz | en | 0.887003 | 178 | 2.546875 | 3 |
In the era we live in today, it is customary to see the emergence of different technologies that increasingly aid our work and routine. Two examples of these technologies are cloud computing and edge computing.
This post will explain what these technologies are, how you can use them, and ways they can improve your business’s performance to help you conquer new markets in Latin America.
Cloud computing vs. edge computing
It is crucial to know their differences to understand which of these technologies is the most suitable for your company. Let’s start by learning the characteristics of cloud computing.
Through a set of technologies, cloud computing allows us to work with and use information stored on a remote server.
Thanks to the Internet, it is possible to have remote access to documents, software, and data processing on a server without installing them on the local machine.
To use a cloud computing service, we must first connect to the server with a username and password, quite similar to Outlook or Gmail, where your email information is in the cloud and, to access it, you just have to enter your username and password.
One of the most important and striking features of cloud computing is its flexibility. Thanks to this feature, it is possible to classify the cloud into three types: public cloud, private cloud, and hybrid cloud.
Learn more about the types of cloud computing in this article.
Benefits of cloud computing
- Cost savings without your own infrastructure
- Pay-as-you-go flexibility
- Greater scalability
- Ease of remote access
Now that you know a little about cloud computing, let’s get to know what edge computing is all about.
Edge computing is recognized by many as the technology of the future, since it consists of developing computing systems that can provide instantaneous, real-time responses.
Edge computing is a distributed architecture that reduces latency by being geographically close to the end-user. This is where its name comes from since it takes place at the “edge” of the network, which, at the same time, is the place where end-user devices access the network, such as smartphones, computers, robots, sensors, etc.
In the context of connectivity, it is crucial to remember that the term “edge” is not the same as “edge computing”.
The edge is where devices connect to deliver data and receive instructions from the cloud or a data center. Thanks to the rise of IoT, there was a need for a more robust and faster connection with an immediate response, leading to edge computing.
With edge computing, the device collecting the data can process and store it in real-time, without transferring it to another location.
The primary purpose of edge computing architecture is to reduce latency and decrease the bandwidth consumed. In this way, the connection experience is much more powerful and, at the same time, reduces costs.
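One way to picture the bandwidth saving is a device that aggregates raw sensor samples locally and uploads only a compact summary. This is an illustrative Python sketch, not the API of any particular edge platform; the sample values and field names are made up.

```python
import statistics

def summarize_readings(readings):
    """Aggregate raw sensor samples at the edge; ship only the summary."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }

# 1,000 raw temperature samples stay on the device...
raw = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = summarize_readings(raw)

# ...and only a three-field summary crosses the network.
assert summary["count"] == 1000
print(summary)
```

The raw stream never leaves the device unless an anomaly is detected, which is exactly the latency-and-bandwidth trade the edge architecture is designed for.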
Edge computing is one of the technologies required to support the growth of IoT, where the vast majority of devices are connected and require real-time response.
Edge computing has different applications such as facial recognition, virtual assistants, security cameras, and even fun filters for social networks.
Benefits of edge computing
- Reduction of bandwidth costs, since it is not necessary to send all data to the cloud continuously
- Reduction of latency issues
- Agility and speed in decision making and response times for the user
- Increased privacy control of information as it is stored locally
- Faster and more uniform user experience
Which is better for my business?
Knowing whether cloud computing or edge computing is better for your business model depends solely on your company’s objective. We know that not all companies are the same and therefore the needs are completely different.
Cloud computing is preferable for companies that use large amounts of information storage for software or applications that do not require real-time processing.
An example of a use of cloud computing is agribusiness, which uses temperature and humidity sensors on crops. The data collected by these sensors is valuable; however, it does not require immediate action in an emergency and can be stored in the cloud.
On the other hand, if your business requires robots or manufacturing process machines where it is necessary to have a real-time response, it is essential that you use edge computing technology.
For edge computing, we can use the oil industry as an example: refineries use sensors in their valves. If these sensors detect a pressure increase, it is necessary to activate the shutoffs and take the necessary measures. If latency is high and the system does not process the information in time, an accident can occur.
Edge computing for telecommunications
For telecommunications service providers, it is essential to adopt edge computing to move services outside the core network to POPs closer to the end-user.
This alternative drastically reduces latency problems and improves the application experience. It should be noted that it is very important to have a good ecosystem and a robust infrastructure that allows a high quality connection.
If you want to know more about cloud computing and edge computing do not hesitate to contact us. At EdgeUno we can deliver highly customized solutions to help you grow your business. | <urn:uuid:df481a2e-4774-4a8b-9bff-128572a1ee0f> | CC-MAIN-2022-40 | https://edgeuno.com/blog/cloud-or-edge-computing-which-is-better-for-my-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00301.warc.gz | en | 0.931302 | 1,064 | 2.65625 | 3 |
Coroutines have been known as a concept and used in various niches for ages, since Melvin Conway coined the term in 1958. Coroutines are lightweight, independent instances of code that can be launched to do certain tasks, get suspended, usually to wait for asynchronous events, and be resumed to continue their jobs. Coroutines make it easier to build highly-concurrent software that performs many tasks at the same time or keeps track of many independent event streams.
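Python’s asyncio coroutines are one mainstream embodiment of the idea. In this minimal sketch, each coroutine is launched as an independent lightweight instance, suspends at an await point without occupying a thread, and is resumed by the event loop when its event (here, a timer) fires:

```python
import asyncio

async def worker(name: str, delay: float) -> str:
    # The coroutine suspends here; the event loop runs others meanwhile.
    await asyncio.sleep(delay)
    return f"{name} finished"

async def main():
    # Launch two lightweight, independent instances of the same code.
    t1 = asyncio.create_task(worker("first", 0.02))
    t2 = asyncio.create_task(worker("second", 0.01))
    return [await t1, await t2]

print(asyncio.run(main()))  # ['first finished', 'second finished']
```

Thousands of such coroutines can wait concurrently on one thread, which is what makes them suitable for tracking many independent event streams.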
A short history
Coroutines used to be a popular concept in the programming languages of the 1960s-1980s era, but they were largely forgotten and fell out of curricula as multithreading became widespread. Traditionally, mainstream programming languages of the 1990s-2010s provided two chief ways to do things concurrently. One is to start an OS-hosted thread for each parallel activity, which works fine as long as you don’t need thousands of them. The other approach is to do some form of event-loop based programming with callbacks, extremely popular for UI programming and sometimes also used as a basis for highly-scalable input/output libraries on the backend. However, callbacks make the code quite complicated, hard to reason about and to debug, especially when different concurrent activities have to keep and update their own state.
An advance of asynchronous programming
What used to be a niche problem is now becoming mainstream one both in backend and frontend development. Most software used to be CPU-bound, locally solving its own problems. Asynchrony in software used to be centered around interaction with a person sitting at the keyboard and callback-based approach to coding these interactions served well. Nowadays, everything is networked. Mobile and web applications use dozens of services, monolith architectures on the backend are being replaced with hundreds of interacting services. A software system that used to spend time computing something on a local machine, now often waits for some other service to return the result of the computation.
A code that is waiting for asynchronous events becomes a programming norm, not an exception; concurrent communication is standard — the modern software does not tend to show us a blocking “please wait” message as it used to do in the past. With threads being too expensive and callbacks too cumbersome for this problem, there is a fresh rising interest in the concept of coroutines.
For many developers coroutines are a new concept. Developers are either not being taught any kind of programming practices for concurrency at all, or are being taught classic thread-based and event-based approaches to concurrency. So, there are two main directions from which modern programming languages approach this emerging problem of light-weight concurrency.
One approach is to give programmers a very thread-like and familiar programming model, but make threads lightweight. Most notably this approach is taken by the Go programming language (2009), which does not have a concept of a thread in the language, but a goroutine, which is essentially a coroutine that is dressed into a very thread-like form. A somewhat similar approach is being worked on by the Java team under the codename of Project Loom. At the time of the writing, the plan is to leave threads directly available to developers but introduce an additional concept of a virtual thread that is lightweight but is programmed very much like a thread from a developer standpoint. That is the main advantage of this approach, making it easy to learn and easy to port legacy thread-based software to, but also its main weakness, because programming reliable software in a world that expects ubiquitous concurrency requires different engineering practices from the world where threads were few.
In particular, in a modern world of networked software it becomes quite useful to distinguish between local CPU-bound computations, that are usually fast or, at least, take predictable time, and between asynchronous requests that are orders of magnitude slower and may take an unpredictably long time due to network congestions and 3rd party service slowness. Ironically, “A Note on Distributed Computing” by Sun Microsystems, the now-classic paper, argued that the two shall never be conflated back in 1994, yet this insight was largely ignored in the design of systems for almost two decades, during an era of failed attempts to build distributed communication architectures that make remote operations indistinguishable from local ones for developers.
Rebirth of coroutines
An opposite to thread-like approach is the introduction of coroutines into a programming language as a separate concept, specifically tailored and distinctly shaped for the world of massively asynchronous software. Initially, this approach used to be popular chiefly among single-threaded scripting languages that do not provide an option to use threads to their full extent.
A color of your function
The main disadvantage of async/await concurrency, that the thread-like concurrency does not have, is now known as a problem of red/blue code, as explained by Bob Nystrom in his 2015 blog post “What Color is Your Function”. When using async/await you have to write asynchronous code in a visually quite different manner from the regular, computational code.
This concern leads to a variation of async/await approach to coroutines that takes a different syntactic form to mitigate the problem, so that asynchronous code syntactically looks the same in the source, yet retains the advantage of marking the parts of the code that could end up indefinitely waiting for external events. This path was taken by Kotlin in 2017. Kotlin coroutines are implemented using a suspend keyword to mark functions that can suspend the execution of coroutines, without mandating any kind of distinct await expression in the logic of the program itself. In essence, it is an async/await-inspired implementation — there is a marker for async functions in the code, but without having to mark their calls with await.
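Kotlin’s suspend syntax is not shown here, but the function-coloring problem itself is visible in any async/await language. In Python, for example, calling an “red” async function from “blue” ordinary code does not run it: it merely produces a coroutine object that must be awaited or driven by an event loop.

```python
import asyncio

def add(a: int, b: int) -> int:          # a "blue" (ordinary) function
    return a + b

async def add_async(a: int, b: int) -> int:  # a "red" (async) function
    return a + b

assert add(1, 2) == 3            # callable from anywhere
coro = add_async(1, 2)           # calling it does NOT run the body...
assert asyncio.iscoroutine(coro)
assert asyncio.run(coro) == 3    # ...it must be awaited/driven to a result
```

Kotlin keeps the marker on the declaration (suspend) but drops the per-call await, so the call sites of both colors look identical in source code.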
A road to structured concurrency
Coroutines are useful, are gaining popularity, and are here to stay, which means that developers will need to learn best practices of using coroutines. One particular trend, that is gaining traction because of coroutines, is the structured concurrency paradigm.
Coroutines enable writing highly concurrent and asynchronous software seemingly at ease, yet every coroutine the code launches risks being accidentally suspended for a long time, waiting for events or responses that might even never happen. That creates a new way to leak resources in a software that developers come unequipped and unprepared to deal with. Structured concurrency is a discipline of encapsulating concurrent pieces of code in such a way as to prevent those kinds of leaks from happening and to make concurrent software easier for humans to reason about.
This paradigm shift, which is happening right now, is akin to the ascent of the structured programming paradigm that was sparked by Dijkstra's famous "Go To Statement Considered Harmful" in 1968 and culminated in the universal adoption of structured programming in all the languages we program in today.
We are still living in a world where most concurrent software is written in an unstructured way, an analogy with the old days of code written with GOTO statements that was aptly captured by Nathaniel Smith in his "Notes on structured concurrency, or: Go statement considered harmful". Yet all the languages that are introducing light-weight concurrency paradigms are also adding library abstractions for structured concurrency. Just as happened with structured programming in the past, we can foresee that in the future a structured approach to concurrency will become a default that is enforced by programming languages and their concurrency libraries.
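The discipline can be sketched in a few lines. The toy "scope" below is modeled loosely on the nursery concept from Trio and on asyncio.TaskGroup, but it is not a real library API: it simply guarantees that every coroutine started inside the scope completes, or is cancelled, before the scope exits, so nothing leaks.

```python
import asyncio

class Scope:
    """Toy structured-concurrency scope (illustrative sketch, not a real library).
    Every task started inside the scope is awaited before the scope exits,
    so no coroutine can silently outlive its parent."""

    async def __aenter__(self):
        self._tasks = []
        return self

    def start(self, coro):
        task = asyncio.ensure_future(coro)
        self._tasks.append(task)
        return task

    async def __aexit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # The scope body failed: cancel the children instead of leaking them.
            for task in self._tasks:
                task.cancel()
        await asyncio.gather(*self._tasks, return_exceptions=True)
        return False

async def worker(name, delay):
    await asyncio.sleep(delay)
    return name

async def main():
    async with Scope() as scope:
        t1 = scope.start(worker("a", 0.01))
        t2 = scope.start(worker("b", 0.02))
    # By the time we get here, both children are guaranteed to be done.
    return [t1.result(), t2.result()]

print(asyncio.run(main()))  # ['a', 'b']
```

Real structured-concurrency libraries add error propagation and cancellation semantics far beyond this sketch, but the core guarantee is the same: children never outlive their scope.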
Roman Elizarov, Kotlin Libraries Team Lead, Kotlin
In the previous posts, we have examined the insider threat from various angles and we have seen that insider threat prevention involves the information security, legal and human resources (HR) departments of an organization. In this post, we want to examine what information security departments can actually do to detect ongoing insider threats, and even prevent them before they happen.
Overall, insider threats represent only a small proportion of employee behavior. And while only the 'black swan' incidents become public knowledge, minor incidents such as theft of IP or customer contact lists add up to major costs for organizations.
In addition, insiders are by default authorized to be inside the network and are both granted access to and make use of key resources of an organization. Given the large volume of access patterns visible in an organization's network, how is one to know which ones represent negligent, harmful or malicious behavior?
IT departments typically respond to the insider threat, if at all, with extensive monitoring and logging. The aim is to at least be able to perform forensic analysis while a threat is happening and doing damage, and to support the legal department with any investigations.
Obviously, such an approach will not help prevent the threat in any way. Recent updates to monitoring solutions such as SureView and research programs of the US government have started taking a more proactive approach: detecting a threat while it is happening, and even before it happens. We have seen that the psychology of the insider is very complex and that the insider typically takes precautions to evade detection, so how could a software solution reliably identify what is a threat and what is not?
The problem of detecting the insider threat before it actually happens is as difficult and complex to solve as the prediction of human behavior itself. What is the next action of a person? Which action will be inside the scope of assigned work for that person? Which action will indicate the preparation for an attack by that person?
Recent technological advances have shown significant improvements in predicting what was previously considered unpredictable – human behavior. Despite some initial setbacks, systems such as Google Now, Siri, or Cortana aim to predict users’ needs before they even know them.
This is becoming possible because the vast amounts of behavioral data that have been collected and indexed, and the computational resources available for analysis, have reached a critical mass for the deployment of large-scale artificial intelligence methods such as voice recognition, image analysis and machine learning. The term for this new predictive analysis of large amounts of behavior data is data science.
Data science is nowadays applied to various problems and areas, and could similarly be applied to the insider threat problem. As described above, an insider's behavior is by definition authorized inside an organization's network, and there is typically not enough information available to derive an insider's intention or psychology in real time. However, as the amount of collected behavior data increases, more and more cues can be revealed.
An initial data science approach is to learn commonly known indicators of insider threat behavior. These might be authorized behaviors, but they are typically associated with an insider who has veered off course. Examples are exfiltration behaviors such as uploading data to a Dropbox account, extensive use of USB sticks or a high volume of downloads from internal servers. These indicators are specific enough to catch an ongoing attack, but only a limited set of attack types can be detected in this way (those for which the indicators are known).
In order to catch future, unknown attacks, a second approach is to focus on anomalies in observed behavior. An anomaly is something that deviates from what is standard, normal, or expected.
In the realm of behavior, a data science solution will analyze behavioral data and learn what is normal. ‘Normal’ behavior can refer to normalcy with regard to all observed behavior variations, an individual’s behavior over time or even social behaviors. Once a baseline of normalcy is established, outliers can be identified.
Knowing that insider threats are paired with changes in the behavior of the individual in question, anomaly detection will reveal these changes, even in the early stages of a threat. However, this improved detection comes at a price: a higher number of false positives. Benign changes in behavior (due to job function or team changes, or coming back to work after the holidays, for example) will trigger detections, and the number of these detections can become overwhelming.
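A minimal sketch of the anomaly-detection idea is below. The feature (daily upload volume) and the threshold are illustrative choices, not part of any real product; a production system would model many more behaviors and tune thresholds to manage the false positives discussed above.

```python
import statistics

def build_baseline(daily_upload_mb):
    """Learn 'normal' from historical per-day upload volumes for one user."""
    return statistics.mean(daily_upload_mb), statistics.stdev(daily_upload_mb)

def is_anomalous(observed_mb, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations above normal.
    The threshold is an illustrative choice; real systems tune it to balance
    detection rate against false positives."""
    mean, stdev = baseline
    if stdev == 0:
        return observed_mb != mean
    return (observed_mb - mean) / stdev > threshold

history = [10, 12, 9, 11, 10, 13, 11, 10]   # typical days, in MB
baseline = build_baseline(history)

print(is_anomalous(12, baseline))    # False: within normal variation
print(is_anomalous(500, baseline))   # True: looks like bulk exfiltration
```

Note how a benign but unusual day (say, a one-off large backup) would also trip this check, which is exactly the false-positive problem described above.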
A third (and most advanced) data science approach is to generate narratives from the output of the first two approaches, i.e. combine indicators and anomalies to generate an understandable interpretation of the behavior going on inside an organization. The latter is obviously a hard nut to crack because, ultimately, it involves creating a true artificial intelligence. But we are getting there…
This section describes how to configure an IP address and default gateway to your DNS/DHCP Server.
The procedure for configuring a DNS/DHCP Server and adding it to Address Manager will vary according to the number of interfaces on your DNS/DHCP Server appliance, and the number of interfaces that you want to use. Each DNS/DHCP Server interface (including eth1) should be on a separate network to avoid any potential issues.
- 2-port DNS/DHCP Server
- 3-port DNS/DHCP Server
- 3-port DNS/DHCP Server VM
- 4-port DNS/DHCP Server
| Number of ports | eth0 | eth1 | eth2 | eth3 |
|---|---|---|---|---|
| 2-port DNS/DHCP Server | Services / Management | xHA | N/A | N/A |
| 3-port DNS/DHCP Server | Services | xHA | Management | N/A |
| 4-port DNS/DHCP Server | Services | xHA | Management | Redundancy |
- DNS/DHCP Server features support for Dedicated Management on 3 and 4-port DNS/DHCP Server appliances and 3-port DNS/DHCP Server virtual machines, isolating all management traffic onto the eth2 interface and all services traffic onto the eth0 interface.
- DNS/DHCP Server appliances with four network interfaces can be configured for Services (eth0), xHA (eth1), Management (eth2), and Redundancy (eth3) through port bonding (bond0=eth0 + eth3). DNS/DHCP Server VMs can be configured with three network interfaces to support Services, xHA, and Management.
- DNS/DHCP Servers can be configured with multiple VLAN interfaces. For details, refer to VLAN Tagging. | <urn:uuid:7102e6b3-3de3-4a62-83cf-5cd789d0aac4> | CC-MAIN-2022-40 | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Getting-started-with-DNS/DHCP-Servers/9.1.0 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00301.warc.gz | en | 0.705513 | 381 | 2.640625 | 3 |
Principle of Least Privilege (PoLP)
The Principle of Least Privilege (PoLP) is an approach to information technology or information security that states that every part of a system — user, device, application, etc. — be granted no more than the minimum degree of authority required to function.
The PoLP helps admins optimize their infrastructure in three main ways. It enhances system availability, because far fewer accounts carry admin, write (vs. read-only), and other elevated levels of control. It also increases security: beyond that efficiency, it limits the threat vector and the ripple effect of security incidents. Finally, with regard to applications especially, the PoLP makes deployment easier when most users do not have complex roles and responsibilities.
The PoLP is also known as the principle of least authority or the principle of minimal privilege.
"I know you're running into some access issues that are hindering your work. Right now your permissions are our default least privilege ones. As soon as we get through some red tape, we'll get your permissions sorted and you'll have visibility into and editing power for most everything you've requested. Within reason, of course." | <urn:uuid:8768ccd3-4f08-41cb-9ed6-899968f8be07> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/privilege | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00301.warc.gz | en | 0.94594 | 252 | 2.671875 | 3 |
One of the most important commands in Linux contained a rather nasty security flaw that could have let malicious users gain root access to the operating system. The bug, which has since been fixed by the developers, was found in the sudo command, which is used to carry out tasks and run commands with elevated privileges. Sudo only allows this if the user running the command has the right permissions to do so on a Linux machine or knows the root user's password. But the command appears to have been a little too permissive: it could have allowed attackers with enough access to run sudo on a Linux machine to gain root access even if the configuration of that machine would not normally have allowed it.
For the vast majority of Linux users, this bug will not affect you, as it requires a configuration that specifically grants a user access to run a particular command via sudo as another (non-root) user. Even then, that command must be able to perform privileged security tasks or to execute other commands.
— BleepingComputer (@BleepinComputer) October 14, 2019 | <urn:uuid:eb46d9f2-d234-4dcd-8f77-10b3dd4a4431> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/expert-comments/linux-sudo-command-bug-enabled-hackers-to-gain-root-access/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00301.warc.gz | en | 0.960417 | 208 | 2.953125 | 3 |
802.11k and 802.11r are both standards designed to create a more seamless roaming experience for wireless clients. This is particularly useful for VoIP or other applications where long roaming times can result in a very noticeable impact on performance.
Why does roaming occur?
A wireless client will decide to roam to a new access point (AP) when it detects a better signal from that AP than the one it is currently associated with. This behavior is normal, particularly when devices are moving around within an environment, such as laptops, tablets, and mobile phones.
Why do clients sometimes experience service interruptions when roaming?
When a client roams to a new AP it needs to establish an association/authentication relationship with that AP. In situations where the APs are acting independently of each other, this whole process must occur each time the client moves to a new AP. Without the inclusion of standards like 802.11k and 802.11r, the client will disconnect from its existing AP before connecting to the new one. This results in a period of time where the client has no network access, which can manifest as packet loss, dropped calls, or other degraded performance.
How do 802.11k and 802.11r help?
Both standards take different measures to reduce the time required for a client to roam between APs in the same network, and thus reduce the impact of roaming on performance.
- 802.11k reduces the time required to roam by allowing the client to more quickly determine which AP it should roam to next and how. The Cisco Meraki AP the client is currently connected to will provide it with information regarding neighboring APs and their channels. This way when the client is ready to roam, it has a better idea of where it will be roaming to.
- 802.11r uses Fast Basic Service Set Transition (FT) to allow encryption keys to be stored on all of the APs in a network. This way, a client doesn't need to perform the complete authentication process to a backend server every time it roams to a new AP within the network. Thus avoiding a significant amount of latency that would have previously delayed network connectivity.
Configuring 802.11r in Dashboard
This feature can be enabled from the Configure > Access control page under Network access > 802.11r. If this option does not appear, a firmware update may be required. 802.11r is also not available while using NAT mode or Layer 3 roaming.
Note: 802.11r is intended for use on SSIDs that use enterprise authentication methods.
Fast Transition (FT) 802.11r roaming is not supported between Meraki MR55/MR45 and any other MR Access Point (AP) running version 25.x or lower. If you have a mixed deployment with MR55/MR45 and any other model of Meraki APs, and 802.11r is set to either enabled or adaptive on any of the SSID configurations, ensure all your APs are running version 26.4 or higher.
For more information on this and related topics: | <urn:uuid:0ede7ac9-effe-4218-b526-56c62b94f9fe> | CC-MAIN-2022-40 | https://documentation.meraki.com/MR/WiFi_Basics_and_Best_Practices/802.11k_and_802.11r_Overview | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00301.warc.gz | en | 0.9194 | 636 | 2.9375 | 3 |
For a long time, Android applications have been a more popular choice for users than desktop applications. The reason is simple: they are easy to use and user-friendly. Moreover, there is a wide variety of applications available for Android devices. Purpose-built Android application development is one of the most prominent choices for business owners and mobile app developers, since Android is the world's most popular mobile operating system.

Insecure Android applications pose a threat to users' privacy and security, and can also result in financial losses. This is mainly because of the openness of the Android ecosystem. Mobile applications are more vulnerable to cyberattacks than ever before. One of the best ways to improve the security of an Android app is to perform Android penetration testing.
This blog post will explain how vital android penetration is, how it helps companies be secure from hackers and cyber-criminals, and much more about android penetration testing in detail.
What is Android Penetration Testing?
Android penetration testing is the process of finding security vulnerabilities in an Android application. It is a systematic approach to searching for weaknesses in an Android app, verifying the app's security, and making sure it abides by security policies. It includes attempting to attack the Android application using various methods and tools.

The primary aim of Android penetration testing is to find vulnerabilities in the application and fix them before cybercriminals exploit them. The security issues found are mainly related to data theft, information leakage, and similar risks. The testing is generally performed by specialized Android penetration testers.
Understanding Architecture of an Android App
An APK file is an archive file whose primary use is to distribute the application's binary files to the end user. The APK file is separate from the Android operating system itself; applications are installed on Android devices through the APK file, whose contents are unpacked onto the device's storage during installation.
A decompiled APK typically contains the following components:
- AndroidManifest.xml: declares the app's components, permissions and entry points
- classes.dex: the compiled application code in Dalvik bytecode
- resources.arsc: compiled resources
- res/ and assets/: application resources and raw asset files
- lib/: native libraries, per CPU architecture
- META-INF/: the signature and certificate files
Why is Android Penetration Testing essential?
In today's world, Android apps are used for many purposes, such as mobile banking, shopping, sharing personal details, social networking, and entertainment. Android devices are exposed to various hacking techniques, such as buffer overflows, code injection, reverse engineering, and malware.

Android penetration testing is the practice of identifying and exploiting vulnerabilities in Android applications in order to find and resolve their weaknesses before attackers do.
Some of the benefits of the android penetration testing are:
- Uncover security risks in Android applications.
- Improve the efficiency of the application.
- Gain customer trust.
- Decrease the cost of a data breach.
What is OWASP Mobile Application Security Project?
The Open Web Application Security Project (OWASP) is a global nonprofit organization working to make the web a safer place.
The OWASP Mobile Security Project includes a list of the top ten security risks that mobile applications face today. Each of the top ten mobile security risks is ranked by its threat level and further investigated. Let’s understand each one of these in detail:
M1: Improper Platform Usage
Improper Platform Usage is an important risk to identify, because it can have a significant impact on your data or devices. This risk involves the misuse of an operating system feature or a failure to use platform security controls properly.
This may include Android intents, platform permissions, the Keychain, or other security controls that are part of the platform.
M2: Insecure Data Storage
Data security can be defined as the protection of any data that is stored or transmitted. Data belonging to Android applications is stored in various locations, such as servers, mobile devices, and cloud storage. All of these locations are susceptible to attacks by hackers, so the data needs to be stored securely.
M3: Insecure Communication
Insecure communication means sending sensitive data over non-secure channels, where it can be intercepted by anyone with access to the channel, which on a shared network is everyone on that network.

This means that if you are sending sensitive data this way, it can easily be copied. This is very common on public WiFi access points; when using them, you should always assume that your data is being intercepted.
M4: Insecure Authentication
Authentication is a mechanism to prove a user’s identity to a system. It is also a process of initializing and maintaining a “state” on the system (e.g. a session or a login state), which can be used to determine the user’s identity.
Weak authentication is one of the root causes of many security risks. Attack vectors such as authentication bypass, information disclosure via debug messages, and improper session invalidation are typical examples of insecure authentication.
M5: Insufficient Cryptography
While cryptography is a fundamental part of any app that stores user data, there is a common misconception that cryptography can solve all security problems. Cryptography is just a tool that helps to protect data from attackers; if any weak point exists in the cryptographic implementation, an adversary can still access sensitive information.
M6: Insecure Authorization
Authorization is a process that ensures that only authorized individuals are performing a given access operation. It is crucial to preserving the confidentiality and integrity of data. Many mobile applications have improper authorization implemented, allowing low-privileged users to access the information of highly privileged users.
M7: Client Code Quality
Application code quality is an essential factor in the quality and security of the final product. Many security flaws can occur in a mobile application, but the most common ones, such as SQL injection, cross-site scripting, and buffer overflows, often stem from poor-quality client code.
M8: Code Tampering
Code tampering is a process in which hackers or attackers exploit the existing source code of an application by modifying it with malicious payloads, which can lead to business disruption, financial loss, and loss of intellectual property.
The issue is usually found in the mobile apps that are downloaded from third-party app stores. These app stores are not associated with the official mobile application developers and usually distribute pirated apps.
M9: Reverse Engineering
Reverse engineering is the process of decompiling a mobile application to understand its logic. Code obfuscation is done to prevent attackers from reading the application code and understanding that logic.
M10: Extraneous Functionality
Bad actors such as cyber-criminals or hackers try to understand the mobile application’s extraneous functionality. The main goal is to understand and explore hidden functionalities of the backend framework.
SSL Pinning: What and Why?
SSL pinning is a process of ensuring that communication between the application and the server is encrypted using robust cryptographic algorithms, and that communication is only possible if the server presents the correct certificate or public key.

SSL pinning is used to prevent man-in-the-middle (MITM) attacks, in which an attacker positions themselves between the end user and the server and intercepts or records the communication between them.
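The core of pinning is a comparison between the certificate the server actually presents and a value baked into the app. On Android this is typically configured through OkHttp's CertificatePinner or the network security configuration; the Python sketch below just illustrates the comparison logic, and the pin value is a placeholder, not a real certificate's digest.

```python
import hashlib
import hmac

# Pin for the expected server certificate (hex SHA-256 of its DER bytes).
# The value below is derived from a placeholder, not a real certificate.
PINNED_SHA256 = hashlib.sha256(b"example-server-certificate-der").hexdigest()

def certificate_matches_pin(cert_der: bytes, pinned_hex: str) -> bool:
    """Return True only if the presented certificate hashes to the pinned value.
    compare_digest avoids leaking information through timing differences."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return hmac.compare_digest(fingerprint, pinned_hex)

print(certificate_matches_pin(b"example-server-certificate-der", PINNED_SHA256))  # True
print(certificate_matches_pin(b"some-other-certificate", PINNED_SHA256))          # False
```

Because the expected fingerprint ships inside the app, a MITM proxy presenting its own certificate fails the check even if that certificate chains to a trusted CA.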
Focus Areas for android penetration testing
1. Data Storage
Testing for storage of data in an android application is an integral part of android penetration testing. These tests should include:
- Checking for Hardcoded credentials
- Sensitive data exposure such as API keys or tokens
- Encryption and Weak cryptography
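A tester's first pass over decompiled sources often amounts to pattern matching. The sketch below shows the idea with a few illustrative regexes (real scanners ship far larger rule sets), run against a made-up source fragment; none of the strings are real secrets.

```python
import re

# Illustrative secret patterns; real scanners ship far larger rule sets.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]+['\"]"),
}

def scan_source(text):
    """Return the names of the secret patterns found in one decompiled file."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

# Made-up fragment standing in for a decompiled source file.
sample = """
String key = "AKIAABCDEFGHIJKLMNOP";
String cfg = "api_key = 'Zk9qP2xR7TnW4Vu8Bd1C'";
String pwd = "password = 'hunter2'";
"""

print(scan_source(sample))
# ['aws_access_key', 'generic_api_key', 'password_assignment']
```

In practice a tester would run such checks over every file produced by decompilation, then manually triage the hits.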
2. Application-level communication
An application's communication with other applications and with its servers can lead to critical security issues if it is not done over a secure channel. Hackers use man-in-the-middle attacks to intercept the communication between mobile applications and servers.
3. Debug and Error messages
While developing an Android application, developers use different kinds of error or debug messages to understand application-level errors. These messages are often left in place even in production builds.
Hackers use these error messages to understand the flow of the application and hidden functionalities of the application.
4. Authentication & Authorization
Authentication and authorization are key areas to test while performing android penetration testing. These tests should include:
- Session related security issues
- Storage of session token
- Authentication checks on sensitive endpoints
- Improper access controls
5. Code Obfuscation
The process of obscuring code to conceal its purpose is known as code obfuscation. Obfuscation leads to a code that is difficult to reverse engineer. Obfuscation is used as a method of protecting intellectual property as well as for anti-tampering.
Obfuscation is done by renaming identifiers to meaningless symbols (such as variable names like $i), changing the order of operations (for example, reordering mathematical operations), or using alternative representations (for example, hexadecimal or other encodings).
Related Blog – A Deep Dive into Mobile Application Penetration Testing
5 Secure Coding Practices for Android Developers
1. Communication over HTTPs
Communicating over HTTPS is not a new concept for the web; it should be standard practice for any business or company. The main friction with adopting HTTPS is that it requires modifications to your current infrastructure, as well as enrolling for and maintaining an SSL certificate.

Even though the benefits of using HTTPS are apparent, plenty of companies still don't use it. The arguments against it are usually the same: it's not worth the cost, or it's not an option. However, the question shouldn't be whether it's worth the cost, but whether using HTTPS will improve your business, which it will.
2. Encrypting sensitive data
Data encryption is the process of changing information to make it unreadable without secret information or a key known only to authorized parties. Encryption is used to protect data so that unauthorized parties cannot read it.
Data encryption can be used to protect data travelling between two computers over the Internet, or it can be used to protect data stored on a hard drive. Data encryption can be used to protect data from being read or changed by malicious programs. Encrypted data is locked up in a way that only authorized parties can access it.
3. Ask for credentials before showing sensitive information
Secure Android applications use data masking and password- or biometric-based authentication before displaying sensitive data such as API keys.
4. Use common error messages
As discussed earlier, error messages can lead to the discovery of hidden functionalities of the application. To avoid these security risks, developers should use common error messages and remove the debug errors or logs once the app is live.
5. Check the validity of external data sources
External storage can be used to store data that are used by your application. This can include data about your application, such as a list of the most recent documents opened by the user or data that your application uses to do its work, such as a database containing a list of customers.
The issue here is that you have to make sure that the data stored in external storage hasn’t been corrupted or modified by anyone else.
Top 3 open source tools for android penetration testing
Android penetration testing can be done with a wide variety of tools, but let's look at the top three that are most commonly used:
- MobSF: MobSF is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis.
- Frida: Frida is a dynamic instrumentation toolkit used by developers, reverse engineers, and security researchers.
- Apktool: Apktool is used for reverse engineering (decompiling) APK files. Combined with standard Linux commands, Android penetration testers search the decompiled output for sensitive data.
What is Astra’s Android Pentest Suite?
Astra’s pentest suite is a complete solution to all your security needs. Astra makes it easy to perform controlled attacks on android devices with an easy-to-use interface and a streamlined workflow.
At Astra, we understand your needs and keep them as our top priority while performing any penetration test. With new features like login recorder, and the GitLab integration, Astra is pretty unparalleled on the feature front.
In a nutshell, there are many reasons why you should be thinking about penetration testing your Android apps. Whether you're a startup that's just getting off the ground or a large corporation, the need for penetration testing of Android applications is real, and it's here to stay.
1. What is the timeline for Android pentesting?
It takes no more than 7-10 days to complete android penetration testing. The vulnerabilities start showing up in Astra’s pentest dashboard from the 3rd day so that you can get a head start with the remediation. The timeline may vary with the pentest scope.
2. How much does android penetration testing cost?
The cost of Android penetration testing with Astra’s Pentest suite ranges between $349 and $1499 per scan depending on the plan and the number of scans you opt for.
3. What makes Astra your best choice for Android pentesting?
1250+ tests, adherence to global security standards, intuitive dashboard with dynamic visualization of vulnerabilities and their severity, security audit with simultaneous remediation assistance, multiple rescans, these are the features that give Astra an edge over all competitors.
4. Do I also get rescans after a vulnerability is fixed?
Yes, you get 2-3 rescans depending on the plan you are on. You can use the rescans within a period of 30 days from initial scan completion even after a vulnerability is fixed. | <urn:uuid:2ceb2c5a-daf5-4db9-a87a-28676f3bd821> | CC-MAIN-2022-40 | https://www.getastra.com/blog/security-audit/android-penetration-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00301.warc.gz | en | 0.909258 | 2,945 | 2.828125 | 3 |
Choosing a unique, complex and long enough password that will still be easy to remember is a big challenge for most users, and most of them would happily opt for biometric authentication in a heartbeat.
But the problem with physical biometrics – fingerprints, palm prints, iris shape, etc. – is that you can’t change them if they get compromised. A good solution to that problem might be in the combination of physical and behavioral biometrics and a password.
Lip movement + password
An elegant and relatively easy to use option is the “lip motion password” – a technology invented by Hong Kong Baptist University computer science professor Cheung Yiu-ming, and patented in the US in 2015.
The technology uses a person’s lip motions to create a password, and the system verifies a person’s identity by simultaneously checking whether the spoken password and the behavioural characteristics of lip movement match.
The system takes into consideration the lip shape and texture as the user voices (or simply silently mouths) the password, and is able to detect and reject a wrong password uttered by the user or the correct password spoken by an imposter.
“The same password spoken by two persons is different and a learning system can distinguish them,” the professor noted. So, even if an attacker knows the password, it’s impossible for him or her to use it to successfully impersonate the target.
And if, by any chance, the attacker has managed to record a video of a user’s lip while he or she was pronouncing the password, a simple change of the actual content of the password is enough to prevent future impersonation.
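The verification step can be pictured as comparing a feature vector extracted from the observed lip motion against an enrolled template. The sketch below is purely illustrative: the feature values are invented, the threshold is arbitrary, and the hard part, extracting robust features from lip shape and texture over time, is only stubbed out.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify(enrolled_template, candidate_features, threshold=0.95):
    """Accept only if the candidate's lip-motion features closely match the
    enrolled template. Real feature extraction (lip shape and texture over
    time) is assumed to have happened upstream."""
    return cosine_similarity(enrolled_template, candidate_features) >= threshold

# Hypothetical feature vectors summarizing a lip-motion sequence.
owner = [0.9, 0.1, 0.4, 0.7]
owner_again = [0.88, 0.12, 0.41, 0.69]   # same person, same password
impostor = [0.2, 0.9, 0.8, 0.1]          # different speaker, same password

print(verify(owner, owner_again))  # True
print(verify(owner, impostor))     # False
```

The key property the text describes falls out of this picture: even with the correct password, a different speaker produces a different feature vector and fails the match.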
The technology has some more advantages: it is less susceptible to background noise and distance than traditional voice-based authentication, it’s language-independent, and can also be used by speech-impaired users. It can also be used in combination with other biometric authentication systems to improve security levels.
“Lip reading” biometrics is expected to be used – either alone or in combination with other authentication measures – in financial transaction authentication (e.g. at ATMs, electronic payment using mobile devices, etc), as well as in physical access control systems (e.g. to open doors to private or business properties). | <urn:uuid:6739734d-811a-4238-abe8-d9d8a3e1dcab> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2017/03/16/lip-movement-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00501.warc.gz | en | 0.925172 | 479 | 2.6875 | 3 |
The population of our planet is nearly 7.5 billion, and more than half of it, approximately 4.1 billion people, are active online. With the ubiquity of digital media, the menace of fake news has grown so much that this could be called the era of fake news. Media houses worldwide have realised the importance of deploying technologies such as AI and ML to combat the spread of fake news. Even governments are urging the likes of Facebook and WhatsApp, the primary mediums through which false news goes viral, to trace the source of the data in cases of fake news.
Increasingly, the masses have started sensing that fake news is a weapon being used to destabilise society, and it is becoming more and more challenging to separate fake news from real news. Especially in the run-up to elections and other major political events, fake news and memes become the tools perpetrators deploy to influence the outcome.
Because of its viral nature, a lie spreads faster than the truth on social media, and organisations like WhatsApp and Facebook have committed to developing methodologies for ranking stories based on the trustworthiness of the sources that generate them.
Stress on the fourth V: Veracity
We are living in the age of data explosion. To put things in perspective, in one minute 160 million emails are sent, around 100,000 tweets are shared on Twitter, and more than 500 videos are uploaded to YouTube. We are talking about Big Data here: data described using the 5 Vs of Velocity, Volume, Variety, Veracity, and Value. In the case of fake news, the fourth V, the veracity of the data, becomes critical.
For such a humongous volume of data, the procedures that human fact-checkers apply to determine whether a story is real simply cannot be used; it is impossible for humans to check each story manually. Only machines can keep up with this volume and do the checking quickly.
It is no surprise that leading content publishing and networking platforms such as Facebook have invested heavily in research on combating fake news and are leveraging Artificial Intelligence to separate the good news from the bad. Artificial Intelligence and Machine Learning have become the cornerstone of identifying falsehoods by recognising patterns and behaviours, and by identifying articles that were flagged as inaccurate by people in the past. AI and ML will become more critical for identification as growing volumes make combating this phenomenon more challenging. Their key benefit is that as data grows, these systems become better trained and, as a result, more robust at pointing out false news.
Fake News Detection
There are various parameters based on which fake news is detected. Some of these include:
Fact Weighing: The facts in an article need to be weighed contextually. An NLP engine can scan the content as well as the geolocation for context. The facts can also be assessed based on where else the same facts are being reported. For example, it is easier to understand the motive behind spreading a falsehood once the geolocation or the context is known.
Source Checking: Generally, every article states a source, and the reputation of that source needs to be validated to establish the authenticity of the news item. For example, an article citing an unknown source will be scrutinised more closely than one citing legitimate, well-known sources. Machine learning algorithms can help in quickly checking the reputation of a source.
Word Scan: Fake news generally contains words that are sensational in nature; the aim of peppering an article with such words is to make it go viral. Using tools like keyword analytics and artificial intelligence, such content can be quickly identified, making it easy to weed out articles carrying false information.
Publisher Ranking: Typically, parameters such as the publisher's other articles, social handles, Wikipedia page, and site traffic can be reviewed to rank the publisher. Reputable sources are likely to fare well on all of these parameters. AI and ML also use methods like Support Vector Machine algorithms to detect the political bias of a publication.
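As a deliberately tiny illustration of the Word Scan idea above, the sketch below scores an article by the density of sensational vocabulary. The word list and threshold are made-up placeholders; a production system would rely on trained models rather than a fixed list.

```python
# Illustrative word list -- a real system would learn these signals.
SENSATIONAL = {"shocking", "miracle", "exposed", "secret", "outrageous",
               "unbelievable", "banned"}

def sensationalism_score(text: str) -> float:
    # Fraction of words that come from the sensational vocabulary.
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SENSATIONAL)
    return hits / len(words)

def flag_for_review(text: str, threshold: float = 0.1) -> bool:
    # The threshold is an arbitrary illustrative cutoff.
    return sensationalism_score(text) >= threshold
```

A headline stuffed with clickbait vocabulary scores far above the cutoff, while ordinary reporting scores near zero.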
Machine learning algorithms have long been used to fight spam email based on analysis of the message text; these algorithms also detect whether a message is one-to-one communication or a mass mailing. Building on the principles of fighting spam, it is becoming easier to curb the spread of fake news. Companies are also collaborating to conquer fake news; for example, AdVerif.ai has tied up with Cisco to enhance its fake-news-spotting offering.
Media houses and well-known platforms are taking the help of big data, artificial intelligence, and machine learning to detect fake news, disinformation, and misinformation, and to build trust and transparency with their readers and users. AI and ML as technologies for combating fake news are still in their infancy, with detection accuracy of 60-70 percent, but the future certainly looks bright. With emerging technologies such as Blockchain, we are confident that these systems will mature and become sophisticated enough to be trusted entirely.
Security Assertion Markup Language (SAML) is an open standard that enables software systems to share security credentials across networks. It allows one system to perform certain security functions, typically authentication and authorization, on behalf of one or multiple other systems. The authentication process determines the user’s identity, while the authorization process determines whether the user has access rights.
What Is SAML 2.0?
SAML 2.0 is the current version of the SAML standard, designed to facilitate the exchange of authentication and authorization identities between security domains. It enables web-based, cross-domain single sign-on (SSO), which helps reduce the overhead of distributing multiple authentication tokens to each user. This XML-based protocol uses security tokens containing assertions to pass information about a principal (typically an end-user) between a SAML authority (the Identity Provider) and a SAML consumer (the Service Provider).
SAML 2.0 builds on several established standards:
- Extensible Markup Language (XML)—SAML typically expresses exchanges in a standardized XML form. It is the basis for the name, Security Assertion Markup Language (SAML).
- XML Schema (XSD)—SAML assertions and protocols are specified (in part) using XML Schema.
- XML Encryption—SAML 2.0 uses XML Encryption to provide elements for encrypted attributes, name identifiers, and assertions. SAML 1.1 does not offer encryption. XML Encryption has significant security issues.
- XML Signature—SAML (1.1 and 2.0) uses XML Signature-based digital signatures to authenticate and maintain message integrity.
- Hypertext Transfer Protocol (HTTP)—SAML uses HTTP as the primary communications protocol.
- Simple Object Access Protocol (SOAP)—SAML uses SOAP 1.1.
SAML 2.0 was ratified as an OASIS Standard in 2005, replacing SAML 1.1. Many collaborators helped create it, including Liberty Alliance, which donated its Identity Federation Framework (ID-FF) specification to OASIS.
What Is SAML SSO?
SAML single sign-on (SSO) is a mechanism that allows users to log into multiple web applications after initially logging in to an identity provider. Users only need to log in once, providing a faster and smoother user experience.
From the user’s point of view, SAML SSO is simpler and more secure. Also, some applications will not require credentials at all (provided the user signed in to the identity provider), enabling easier access. Another benefit of SAML SSO is that the IT admin team only needs to manage one password per user, which reduces the need to handle password reset and other account-related requests.
The following diagram illustrates how SAML SSO works. When users attempt to access a website or application that requires authentication, they are redirected to the SSO service, which integrates with an identity provider. Each user provides one set of credentials for any app, and the SSO authenticates them with the central identity provider. Typically, the user receives an authentication token that allows them to continue accessing the application without logging in again, until the token expires.
How SAML Authentication Works: A Sample Workflow
SAML SSO transfers a user’s identity from the identity provider to the service provider by exchanging digitally-signed XML documents. Here is an example of how this process works:
- An end-user is logged into a system that serves as an identity provider. The user wants to log in to a remote application (the service provider).
- The end-user attempts to access the remote application via an intranet link or a bookmark, and the application loads.
- The application attempts to identify the end user’s origin via application subdomain, user IP address, or other methods.
- After the application identifies the origin, it redirects the end-user to the identity provider and asks for authentication.
- The end-user can use an active browser session or establish a new one by logging in to the identity provider.
- The identity provider builds an authentication response—an XML document that contains the end user’s email address or username. Next, the identity provider uses an X.509 certificate to sign the document and posts this information to the service provider.
- The service provider retrieves the authentication response and uses the certificate fingerprint to validate the response.
- The end user’s identity is now established. The user is granted access to the application.
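The control flow of steps 6-8 can be sketched in a few lines of Python. A shared-secret HMAC stands in here for the X.509 signature used in real SAML, and the key, issuer, and email values are all illustrative:

```python
import hashlib
import hmac
import json

IDP_KEY = b"demo-idp-signing-key"  # illustrative shared secret

def idp_build_response(user_email: str) -> dict:
    # Step 6: the identity provider signs a document naming the user.
    body = json.dumps({"email": user_email, "issuer": "idp.example.com"})
    sig = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def sp_validate(response: dict) -> str:
    # Steps 7-8: the service provider validates the signature, then
    # trusts the identity contained in the body.
    expected = hmac.new(IDP_KEY, response["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, response["signature"]):
        raise ValueError("signature check failed -- reject the login")
    return json.loads(response["body"])["email"]
```

Tampering with the body invalidates the signature, which is exactly the property the X.509 signature provides in a real deployment.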
SAML In Depth: Concepts and Components
SAML defines XML-based protocols, profiles, bindings, and assertions. SAML Core is the general SAML assertion semantics and syntax. It includes the protocol used for requesting and transmitting assertions between system entities. It defines bare assertions and elements of SAML requests and responses. SAML is the transmission content (“the what” rather than “the how”).
The binding, by contrast, determines the mechanism of transmission ("the how").
SAML providers are the systems that enable users to access the services they need. The two main types of SAML providers are identity providers (IdPs) and service providers (SPs). Identity providers authenticate end-users to verify their identity and forward the user identity data and access permissions to service providers. Service providers require authentication from an identity provider to authorize a user and grant access to a requested service.
SAML flows are triggered when users initiate SSO processes in their browser. Two types of flow are supported: IdP-initiated and SP-initiated. In an IdP-initiated flow, the identity provider authenticates the user and redirects them, along with the SAML assertion, to the service provider. In an SP-initiated flow, the service provider redirects the user to the identity provider for authentication, after which the IdP redirects the user back to the SP.
A SAML assertion is a message telling the service provider that a user has signed in. It contains all the necessary information for the SP to confirm the user’s identity, including the assertion’s source, the time of issue, and the conditions for the assertion to be valid.
SAML assertions are akin to a job reference, which includes details such as when a candidate worked with the referee, in what capacity, and for how long. Companies evaluate job candidates based on such references, allowing them to hire confidently. Likewise, SaaS applications and cloud services refer to SAML assertion to grant or deny access to a user.
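To make the "job reference" analogy concrete, the sketch below builds and parses a heavily stripped-down assertion using Python's standard library. Real assertions also carry an XML Signature, audience restrictions, and authentication statements; the issuer and subject values here are invented:

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"

# Build a minimal assertion: who issued it, about whom, valid when.
assertion = ET.Element(f"{{{NS}}}Assertion", ID="_example", Version="2.0",
                       IssueInstant="2024-01-01T00:00:00Z")
ET.SubElement(assertion, f"{{{NS}}}Issuer").text = "https://idp.example.com"
subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameID").text = "alice@example.com"
ET.SubElement(assertion, f"{{{NS}}}Conditions",
              NotBefore="2024-01-01T00:00:00Z",
              NotOnOrAfter="2024-01-01T00:05:00Z")

xml_bytes = ET.tostring(assertion)

# A service provider parses out the subject before checking conditions.
parsed = ET.fromstring(xml_bytes)
name_id = parsed.find(f"{{{NS}}}Subject/{{{NS}}}NameID").text
```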
SAML protocols describe how SAML elements such as assertions are packaged in SAML requests and responses. They provide the processing rules for SAML entities to follow when consuming or producing the specified elements. SAML protocols mostly act as simple request and response protocols.
Queries are the most important SAML protocol requests—service providers make queries directly to an identity provider via secure back channels. Query messages are usually SOAP-bound. The three query types corresponding to the three SAML statement types are authentication queries, attribute queries, and authorization decision queries. Attribute queries result in SAML responses containing an assertion, which contains an attribute statement.
SAML 2.0 significantly expands the protocol concept. Its core describes several additional protocols, including the assertion query and request, authentication request, artifact resolution, name identifier management, single logout, and name identifier mapping protocols.
SAML bindings map SAML protocol messages onto standard communications protocols or messaging formats. For instance, the SAML SOAP binding defines how SOAP envelopes, bound to HTTP messages, encapsulate SAML messages.
SAML SOAP is the only binding specified in SAML 1.1. However, there are implicit precursors to other bindings in Web Browser SSO, including the HTTP POST, HTTP redirect, and HTTP artifact bindings. While not explicitly specified, these bindings are available when used with SAML 1.1 Web Browser SSO.
The binding concept is more advanced in SAML 2.0, with bindings separated from the underlying profile. SAML 2.0 offers a new binding specification, defining several standalone binding options, such as the SAML SOAP (similar to 1.1), Reverse SOAP (PAOS), HTTP redirect (GET), HTTP POST, HTTP artifact, and SAML URI bindings.
SAML 2.0 thus offers greater flexibility. For example, with SAML 2.0 Web Browser SSO, service providers have four binding options (HTTP POST, HTTP redirect, and two types of HTTP artifact bindings). Identity providers have three options (HTTP POST and two types of HTTP artifact bindings). In total, the Web Browser SSO profile has twelve deployment options.
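To illustrate how a binding shapes the wire format: the HTTP redirect binding carries the SAML message in a URL query parameter by DEFLATE-compressing the XML, base64-encoding it, and then URL-encoding the result. A minimal round-trip sketch (the XML string is a placeholder, not a complete message):

```python
import base64
import urllib.parse
import zlib

def redirect_encode(saml_xml: str) -> str:
    # Raw DEFLATE (no zlib header), then base64, then URL-encode:
    # the order used by the HTTP redirect binding.
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = comp.compress(saml_xml.encode()) + comp.flush()
    return urllib.parse.quote(base64.b64encode(deflated).decode())

def redirect_decode(query_value: str) -> str:
    deflated = base64.b64decode(urllib.parse.unquote(query_value))
    return zlib.decompress(deflated, -15).decode()

request = '<samlp:AuthnRequest ID="_example"/>'  # placeholder message
param = redirect_encode(request)
```

Compression matters here because the encoded message must fit within practical URL length limits.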
SAML profiles provide detailed descriptions of how SAML protocols, bindings, and assertions come together to support specific use cases. The Web Browser SSO profile is the most significant example.
In SAML 1.1, there are two forms of Web Browser SSO: the browser/POST profile and the browser/artifact profile. The first passes assertions by value, while the second passes assertions by reference (which requires back-channel SAML exchanges over SOAP). Each flow begins at the identity provider, and there are proposals for proprietary extensions to the standard IdP-initiated flows.
In SAML 2.0, there is a fully refactored Web Browser SSO profile. SAML 2.0 profiles use a plug-and-play binding design, making them more flexible than their SAML 1.1 equivalents. Each SAML 2.0 browser flow begins with an SP request—this increases flexibility but creates an IdP discovery issue.
New profiles introduced in SAML 2.0 include:
- SSO profiles (Web Browser SSO, Enhanced Client or Proxy (ECP), Identity Provider Discovery, Single Logout, Name Identifier Management).
- Artifact resolution.
- Assertion query/request.
- Name identifier mapping.
- SAML attribute profiles.
SAML vs OAuth
Both SAML and OAuth are federated identity management protocols whose development was driven by the growth of software-as-a-service (SaaS) applications and the need to integrate authentication platforms for improved management and security. The key difference is that SAML handles the authentication process while OAuth handles authorization. In other words, SAML verifies the user's identity and OAuth verifies the user's access rights.
Both SAML and OAuth serve the following use cases:
- Improving user experience—both SAML and OAuth allow users to access multiple applications with a single sign-in.
- Enhanced security—both SAML and OAuth enable IT admins to enforce SSO, strong passwords, and multi-factor authentication (MFA).
- Centralized management of identities—both SAML and OAuth allow IT administrators to integrate and centralize authentication and authorization processes. This simplifies the onboarding process for new users and reduces maintenance overhead for identity management.
Related: OAuth vs SAML
SAML vs LDAP
The Lightweight Directory Access Protocol (LDAP) is a lightweight software protocol that enables anyone on a network (whether public or private) to find data about organizations, individuals, and resources such as files and devices. LDAP and SAML SSO serve a similar purpose—helping users connect to IT resources. Both protocols are very widely used in the identity management industry.
Here are some of the key differences between LDAP and SAML SSO:
- LDAP is used for core directory services, while SAML-based SSO enables centralized authentication for web applications.
- LDAP is mainly focused on local authentication, while SAML extends user credentials to the cloud and other web applications.
- LDAP server implementations typically act as trusted identity providers or identity sources. In SAML implementations, the SAML service is not always the authoritative source of identity information; it often acts as a proxy to a directory service and translates its authentication process into a SAML-based process.
SAML vs OpenID Connect (OIDC)
OIDC is an authentication protocol designed with web and mobile apps in mind. It is built as an extension of OAuth 2, is easy to adopt and use, and relies on JSON-formatted data structures (JWTs) and a simple HTTPS transport flow.
It uses authentication tokens that are digitally signed and can be encrypted if needed. Traditionally SAML was the primary option for large enterprise and government identity verification. However, many large organizations are starting to adopt authentication systems based on OIDC.
Here are some of the key differences between OIDC and SAML:
- Because OIDC is a relatively new protocol, it is lagging behind SAML in functionality. For example, it does not fully support dynamic specification of proxy identity providers.
- OIDC is easier to use than SAML and requires less processing (it uses JSON tokens instead of XML). OIDC can provide superior performance in many use cases, especially for applications that have basic identity data requirements.
- OIDC is highly suited for mobile applications and single page web applications that are difficult to integrate with SAML.
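Because OIDC tokens are JWTs, whose segments are simply base64url-encoded JSON, they are cheap to produce and parse. The sketch below encodes and decodes a payload segment; it is illustrative only, since a real implementation must verify the token's signature before trusting any claim:

```python
import base64
import json

def encode_segment(claims: dict) -> bytes:
    raw = base64.urlsafe_b64encode(json.dumps(claims).encode())
    return raw.rstrip(b"=")  # JWTs strip base64url padding

def decode_segment(segment: bytes) -> dict:
    segment += b"=" * (-len(segment) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(segment))

# Illustrative claims -- a real ID token carries many more fields.
payload = encode_segment({"sub": "alice", "iss": "https://idp.example.com"})
```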
SAML with Frontegg
With Frontegg, SAML authentication is available out of the box for any SAML provider. We took the SAML configuration experience to the next level by providing a completely self-served, drop-in admin portal that lets your end customers configure SAML on their own: from setting the SAML configuration and the allowed domains all the way to self-served SAML claims mapping.
We believe that authentication and user experience must go hand in hand for best results. Want to check it out? Start for free now. | <urn:uuid:7d60dfd9-6adb-42b7-b071-a4554cf51c2d> | CC-MAIN-2022-40 | https://frontegg.com/guides/saml | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00501.warc.gz | en | 0.888381 | 2,899 | 2.71875 | 3 |
Introduction to Cryptanalysis
What is cryptanalysis?
Cryptography is the science of creating codes to keep information secret. Cryptanalysis is the opposite: the attempt to break the code, gaining unauthorized access to the encrypted data.
Cryptanalysis is an important component of the process of creating strong cryptosystems. A reliance on “security via obscurity” (violating Kerckhoffs’ principle) can result in weak cryptosystems if their creators did not consider all possible attack vectors.
Instead, the cryptographic algorithms in common use today have been published for cryptanalytic review. The ones currently considered “trusted” and in common use are the ones for which an effective attack has not yet been discovered.
Simple cryptanalytic techniques
Modern cryptographic algorithms are designed to be resistant against all known cryptanalytic techniques. However, a few simple techniques can be useful for evaluating the security (and potentially breaking) older or amateur cryptosystems.
Entropy is the measure of the amount of randomness that exists within a system. A strong cryptographic algorithm should produce a ciphertext with very high randomness, indicating that there is little or no useful information linking the ciphertext to the original plaintext or secret key.
This makes entropy testing a useful tool for identification of encrypted data. While entropy can be calculated manually, tools like Binwalk and radare2 have built-in entropy testers that can be used to identify encrypted data within a file.
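The measurement itself is simple: for byte data, Shannon entropy ranges from 0 (a single repeated value) to 8 bits per byte (uniformly random). A minimal calculator:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; 8.0 is the maximum for byte data."""
    counts = Counter(data)
    n = len(data)
    # Sum -p * log2(p) over the observed byte values.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Well-encrypted or compressed data scores close to 8.0, while plain text typically falls well below it, which is what makes entropy a quick screen for encrypted regions in a file.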
After encrypted data has been identified, other features can be used to help identify the encryption algorithms used. Some examples of useful information include:
- Ciphertext and block length
- Function names
If the encryption algorithm can be identified, it is possible to determine if it is a broken algorithm. Alternatively, knowledge of the algorithm can help in the search for an encryption key within a file.
Character frequency analysis
Unlike a good ciphertext, modern languages are anything but random. With sufficient knowledge of a language, it is often possible to guess which letter comes next after a given series. For example, in the English language, which letter almost always comes after the letter Q?
The lack of randomness in language is useful for cryptanalysis because it can make it easy to break weak ciphers. Character frequency analysis can easily break substitution and rotational ciphers.
The graph above shows the relative frequencies of letters in the English language. As shown, some letters (such as E, T and A) are much more common than others (such as Z, Q and J).
This is useful for analysis of substitution and rotation ciphers since the most common letter in the ciphertext is likely to map to E, the second most common is likely to map to T, etcetera, as long as the ciphertext is long enough. With a rotational cipher, a single correct match is enough to determine the step size and decrypt the message. With a substitution cipher, every pairing must be determined; however, knowledge of a few letters within a word makes it possible to guess the remainder.
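A rotational (Caesar) cipher can be broken automatically by trying every shift and keeping the candidate whose letter distribution best matches English, scored here with a chi-squared statistic. The frequency table is approximate:

```python
from collections import Counter

# Approximate relative frequencies (percent) of letters in English text.
ENGLISH_FREQ = {
    'a': 8.2, 'b': 1.5, 'c': 2.8, 'd': 4.3, 'e': 12.7, 'f': 2.2,
    'g': 2.0, 'h': 6.1, 'i': 7.0, 'j': 0.15, 'k': 0.77, 'l': 4.0,
    'm': 2.4, 'n': 6.7, 'o': 7.5, 'p': 1.9, 'q': 0.095, 'r': 6.0,
    's': 6.3, 't': 9.1, 'u': 2.8, 'v': 0.98, 'w': 2.4, 'x': 0.15,
    'y': 2.0, 'z': 0.074,
}

def shift_text(text: str, k: int) -> str:
    # Rotate each letter by k positions; leave other characters alone.
    out = []
    for ch in text:
        if ch.isalpha():
            out.append(chr((ord(ch.lower()) - ord('a') + k) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

def chi_squared(text: str) -> float:
    # Lower score means the letter distribution looks more like English.
    letters = [c for c in text if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    score = 0.0
    for letter, pct in ENGLISH_FREQ.items():
        expected = pct / 100 * n
        observed = counts.get(letter, 0)
        score += (observed - expected) ** 2 / expected
    return score

def break_rotation(ciphertext: str) -> str:
    # Try every possible shift; the best-scoring candidate is almost
    # always the correct plaintext when the ciphertext is long enough.
    return min((shift_text(ciphertext, -k) for k in range(26)),
               key=chi_squared)
```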
Encoding vs. encryption
Encoding and encryption are both techniques for data obfuscation. However, their implementation and effects are very different.
Encryption requires a secret key for encryption and decryption. Without knowledge of this secret key, the plaintext cannot be retrieved from the ciphertext.
Encoding algorithms apply a reversible operation to data without using a secret key. This means that anyone with knowledge of the encoding algorithm can reverse it.
Encoding algorithms are commonly used in malware as a simple replacement for encryption. However, they are easily reversed if the encoding algorithm can be identified.
Base64 encoding is an encoding technique designed to make it possible to send any type of data over protocols limited to alphanumeric characters and symbols. This is accomplished by mapping sequences of three bytes to sets of four characters.
This mapping makes it possible to assign each sequence of six bits (four characters of six bits each is twenty-four bits, the length of three bytes) to one of sixty-four printable characters, as shown in the table above. The base64 system uses padding, so an input that is not an exact multiple of three bytes in length results in an encoded version with one or two equal signs (=) at the end. The combination of the base64 character set and these optional equal signs makes this encoding style relatively easy to identify.
Base64 encoding is used to make unprintable data printable, so a common use is to encode encrypted data. However, in some cases, encoding is used instead of encryption, making the data easily recoverable.
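The three-bytes-to-four-characters mapping and the padding rule can be seen directly with Python's standard library ("M", "Ma" and "Man" are the classic demonstration inputs):

```python
import base64

# Padding in action: 1-, 2- and 3-byte inputs each map to a group of
# four characters, with two, one and zero '=' padding characters.
examples = {b"M": b"TQ==", b"Ma": b"TWE=", b"Man": b"TWFu"}
for raw, encoded in examples.items():
    assert base64.b64encode(raw) == encoded
    # No key is involved, so decoding is always possible:
    assert base64.b64decode(encoded) == raw
```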
URL encoding is another example of an encoding style designed to allow data to be passed in a protocol with a constrained character set. In this case, URL encoding is intended to allow characters that are reserved in URLs, such as ? and /, to be included in a domain name or other parts of the URL.
As shown above, URL encoding uses a percent sign (%) followed by the ASCII representation of a value to replace that value. This eliminates the reserved character from the URL but enables it to be easily retrieved when needed.
URL encoding is intended to enable the use of reserved characters in URLs. However, it is commonly abused in injection attacks or as a simple layer of obfuscation since it defeats simple string matching.
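The scheme is easy to demonstrate with Python's standard library; note that decoding requires no key, which is why URL encoding defeats only naive string matching:

```python
from urllib.parse import quote, unquote

# Reserved characters are replaced by '%' plus their hex ASCII value.
# safe="" forces even '/' to be encoded.
encoded = quote("key=value&next=/home", safe="")
assert encoded == "key%3Dvalue%26next%3D%2Fhome"

# Reversing the encoding needs no secret at all.
assert unquote(encoded) == "key=value&next=/home"
```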
Getting started with cryptanalysis
Most modern encryption algorithms are secure against known attacks, and many of the “broken” ones require knowledge of advanced mathematics to understand the attacks. However, many older encryption and encoding algorithms can be easily broken with simple techniques.
This is useful because many malware variants use these weaker forms of encryption. An understanding of basic cryptanalytic concepts and techniques can be very valuable in cybersecurity.
Differential Fault Attacks (DFA) disturb the function of a smart card through physical means, causing the card to output faulty data. This faulty data can then be used to reveal the secret key of the smart card. Two types of DFA that can be used to break a smart card are glitching and optical fault induction attacks.
Glitching is an attack famously used by hackers to break pay-TV smart cards. This method involves applying a glitch (a rapid transient) to the smart card's clock or power supply. The smart card's processor can then be made to execute a number of incorrect instructions by varying the duration and precise timing of the glitch. This can cause the secret key to be output and checks of passwords and access rights to be skipped. For example, the following loop is commonly used to output the contents of a limited range of memory to the serial port.
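The original code listing did not survive extraction; the numbered pseudocode below is a reconstruction, consistent with the line references that follow, of the classic memory-dump loop described in the smart card security literature (names such as answer_address are illustrative):

```
1  b = answer_address
2  a = answer_length
3  if (a == 0) goto 8
4  transmit(*b)
5  b = b + 1
6  a = a - 1
7  goto 3
8  end
```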
The aim of a glitching attack is to let the program counter increment as usual while corrupting the conditional branch in line 3 or the decrement of variable a in line 6. The glitching attack can then be repeated so that the entire contents of the memory are output.
Optical Fault Induction Attack
An optical fault induction attack uses a laser or other intense light source to change the state of a memory cell. When intense light strikes CMOS logic, the semiconductor becomes ionized, which can cause a new value to be written. In the experiment carried out by Skorobogatov, light from a magnified photoflash lamp was used to successfully flip a bit in an SRAM chip. By manipulating the data in the smart card, faulty output can be induced, and this faulty data can then be used, together with the Chinese Remainder Theorem (CRT), to find the smart card's secret key.
Finding the Secret Key Using CRT
Using the CRT to find the secret key of a public key cryptosystem was first discussed in the academic literature on fault attacks. Devices that use public key cryptosystems to generate signatures may be attacked so that they inadvertently reveal their secret keys. This is possible if the following conditions hold: the signed message is known, a certain type of faulty behavior occurs during signature generation, and the device outputs the faulty signature.
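The idea can be demonstrated end to end with toy RSA numbers (illustrative only; real keys are thousands of bits long). A signature computed with the CRT is faulted in one half, and a single gcd on the faulty output reveals a prime factor of the modulus:

```python
from math import gcd

# Toy RSA parameters.
p, q = 11, 19
N = p * q                            # 209
e = 7
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign_crt(m, fault_in_p=False):
    # CRT signing: compute m^d separately mod p and mod q, then recombine.
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault_in_p:
        sp = (sp + 1) % p            # simulate a fault in the p half
    h = ((sp - sq) * pow(q, -1, p)) % p   # Garner recombination
    return (sq + q * h) % N

m = 5
good = sign_crt(m)
bad = sign_crt(m, fault_in_p=True)

assert pow(good, e, N) == m          # the correct signature verifies
# The faulty signature is still correct mod q but wrong mod p, so the
# difference s'^e - m shares exactly the factor q with N:
factor = gcd((pow(bad, e, N) - m) % N, N)
assert factor == q                   # the fault leaks a prime factor
```

Once one factor of N is known, the other follows by division and the private key can be recomputed, which is why CRT implementations must verify signatures before releasing them.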
Countermeasures to DFA
There are many ways to make smart cards more resistant to DFAs, either by changing the hardware of the smart card itself or the software that runs on it. In general, smart cards should have mechanisms that prevent glitching attacks, detect errors at runtime, or check the results of a computation before outputting data.
Plans for the smart houses of the future are slowly becoming more plausible: a house that does the manual labor for its occupants, where dinner is ready on the kitchen table and all of a home's amenities are taken care of. Thanks to Baylor University's School of Electrical and Computer Engineering and its deep learning research, a future with smart houses is getting closer.
Listed among the research opportunities in the School of Electrical and Computer Engineering is deep learning. This research is helping Artificial Intelligence (AI) to develop into what is presented in science fiction novels and television shows.
Dr. Liang Dong, an associate professor of electrical and computer engineering, is leading this research. Dong has been working in deep learning for about three years but brought the research to Baylor only two years ago. The work is funded not only by Baylor but also by Intel, with the United States Department of Defense (DOD) as a prospective new funder.
Intel is interested in the AI side of Dong's deep learning research, while the DOD is interested in applying deep learning in combat.
“The computer teaches itself,” Dong said. “Deep learning is more to mimic a human brain.”
Through the use of algorithms and data, computers are able to compare results against many previous studies. So far, the deep learning project is being tailored for the specific use of analyzing medical images, such as positron emission tomography (PET) scans and computed tomography (CT) scans, in hospitals. This would help to more accurately catch the development of cancer and other diseases. The research — conducted at the Baylor Research and Innovation Collaborative (BRIC) — is essentially split into two categories.
The theoretical research is composed of distributed deep learning and energy-efficient deep learning. Distributed deep learning investigates how to use several local machines to compute different parts of the main neural network, addressing the large amount of time it takes to train a deep neural network on a single machine. Energy-efficient deep learning focuses on providing a constant source of energy for projects that must run continuously. […]
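The core idea behind distributed deep learning can be illustrated with a toy example (a one-parameter linear model in plain Python, not Baylor's actual system): each worker computes the gradient for its shard of a batch, and the size-weighted average of the shard gradients matches the full-batch gradient.

```python
def gradient(w, batch):
    # gradient of mean squared error for a one-parameter model y ≈ w * x
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def distributed_gradient(w, batch, workers=2):
    # split the batch into shards; in a real system each shard's
    # gradient would be computed on a separate machine in parallel
    shards = [batch[i::workers] for i in range(workers)]
    shard_grads = [gradient(w, shard) for shard in shards]
    # average the shard gradients, weighted by shard size
    return sum(g * len(s) / len(batch) for g, s in zip(shard_grads, shards))

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]
full = gradient(0.5, data)
dist = distributed_gradient(0.5, data, workers=2)
# the distributed result matches the single-machine result
```

Splitting the work does not change the update, which is why training can be parallelized across machines.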
Even though this technology has been around for some time now, this is still a question I am frequently asked and find to be confusing for many. What exactly is document management, and what are the key characteristics of a document management solution? For purposes of this discussion, the term document is defined as "recorded information or an object which can be treated as a unit." Document management, often referred to as document management systems (DMS) or, more frequently these days, as electronic document management (EDM), is the use of a computer system and software to store, manage and track electronic documents, electronic images of paper-based information captured through the use of a document scanner and even digital photos or audio and video capture through various devices that may include smartphones and tablets.
EDM systems allow documents to be modified and manage the subsequent revisions of these documents. Most document management tools allow metadata to be associated with the document and integrate a search tool, thus, making the document more aware or intelligent, as well as easier to retrieve. Many EDM systems, however, lack the full functionality for managing records or meeting the needs of managing content holistically. One way to look at this is to think of EDM capabilities as essentially the entry point into managing documents. This functionality, expanded into larger contexts, becomes the basis of the management capabilities of enterprise content management (ECM), which moves to the management of the life cycle of content, from initial creation, to delivery, re-use, declaration as a record and to archive and/or destruction. Key document management features are:
- Check in/Check out and locking
- Version control
- Roll back
- Audit trail
- Annotation and stamps
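A minimal sketch of these features (a toy in-memory model, not any particular EDM product) shows how check-in/check-out locking, version control, roll back, and an audit trail fit together:

```python
from datetime import datetime, timezone

class Document:
    """Toy in-memory EDM record with the key features listed above."""

    def __init__(self, name, content):
        self.name = name
        self.versions = [content]   # version control: index = revision number
        self.locked_by = None       # check-out lock
        self.audit = []             # audit trail: (timestamp, user, action)

    def _log(self, user, action):
        self.audit.append((datetime.now(timezone.utc), user, action))

    def check_out(self, user):
        # locking: only one editor at a time
        if self.locked_by is not None:
            raise RuntimeError(f"already checked out by {self.locked_by}")
        self.locked_by = user
        self._log(user, "check-out")

    def check_in(self, user, new_content):
        if self.locked_by != user:
            raise RuntimeError("check the document out before checking in")
        self.versions.append(new_content)   # new revision
        self.locked_by = None
        self._log(user, "check-in")

    def roll_back(self, user):
        # roll back: discard the latest revision
        if len(self.versions) > 1:
            self.versions.pop()
        self._log(user, "roll-back")

doc = Document("contract.docx", "draft 1")
doc.check_out("alice")
doc.check_in("alice", "draft 2")
doc.roll_back("bob")   # the audit trail records who rolled back, and when
```

The audit trail is what makes actions defensible later: it summarizes what happened to a document, who did it, and when.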
EDM, while capable of supporting electronic distribution as well, is predominantly thought of as handling paper-based information after capture and digitally born documents, like word processing files, spreadsheets and presentations, thus making them available to the community at large. Along these lines is a particular type of document management known as compound document management, where you gain the benefits of componentised or template documents, enabling and supporting the concept of repurposing information rather than rewriting it every time.
Document management systems today range in size and scope, from small standalone systems to large-scale enterprise-wide configurations serving a global audience, as well as cloud-based solutions. Many document management systems provide a means to incorporate standard physical document filing practices electronically, allowing you to better manage not only your digital content but also your physical information. These include:
- Storage location
- Security and access control
- Version control
- Audit trails
- Check-in/check-out and document lock down
Document management, while still recognized and utilized independently, is also a common component found within an enterprise content management (ECM) environment. If you consider that the makeup of an ECM environment includes EDM as part of the technology element, you will many times also find imaging, workflow and other technologies and functionality as part of the complete environment. Remember, in an ECM environment, technology is a tool and not the total solution. As such, the EDM will likely also be tied into or integrated with line of business applications in order to maximize value and efficiency of the organization.
In today's business organization, mobile is also a driving force in establishing a controlled and well-managed EDM solution where the user community can capture information and access that information using smartphones and tablets from any location at any time.
In my view, EDM is one of those technologies that not only helps an organization organize and maintain control over its content, it is one that provides strength and defensible ways to minimize risk due to the version control and audit trail capabilities. The fact that a document can be locked down when in edit mode, versions maintained and an audit trail provided that summarizes what happened to a document, who did it and when they did it is essential in today's business world. Regulatory guidelines, litigation and audits all require that content, like records, be managed properly. There should be strategies dealing with life cycle, content value, security requirements and even disposition. If content has no value and you are holding on to it for no legitimate reason, get rid of it. Documents you have in hand are documents that can be discovered and potentially place you at risk.
What is EDM? It is a tool that will help you establish and maintain strong practices and tight control over your documents, content and information of all types. What you need to do is assess the business problem to solve and develop strategies that include EDM as one of the elements that will make you successful.
BOB LARRIVEE is an internationally recognized thought leader with over 30 years of experience in document imaging, content management, records management, the application of advanced technologies and process improvement. He is director of the AIIM Learning Center where he works to identify, develop and deliver specialized training in best practices, technology and methodologies. Mr. Larrivee can be reached at firstname.lastname@example.org.
QUIC stands for “Quick UDP Internet Connections” and is, itself, Google’s attempt at rewriting the TCP protocol as an improved technology that combines HTTP/2, TCP, UDP, and TLS (for encryption), among many other things.
The HTTP-over-QUIC experimental protocol will be renamed to HTTP/3 and is expected to become the third official version of the HTTP protocol, officials at the Internet Engineering Task Force (IETF) have revealed.
This will become the second Google-developed experimental technology to become an official HTTP protocol upgrade after Google’s SPDY technology became the base of HTTP/2.
HTTP-over-QUIC is a rewrite of the HTTP protocol that uses Google's QUIC instead of TCP (Transmission Control Protocol) as its base technology. Google wants QUIC to slowly replace both TCP and UDP as the new protocol of choice for moving binary data across the Internet, and for good reason, as tests have shown that QUIC is both faster and more secure because of its encrypted-by-default implementation (the current HTTP-over-QUIC protocol draft uses the newly released TLS 1.3 protocol).
QUIC was proposed as a draft standard at the IETF in 2015, and HTTP-over-QUIC, a re-write of HTTP on top of QUIC instead of TCP, was proposed a year later, in July 2016.
Since then, HTTP-over-QUIC support was added inside Chrome 29 and Opera 16, but also in LiteSpeed web servers. While initially, only Google's servers supported HTTP-over-QUIC connections, this year, Facebook also started adopting the technology.
1 - Drawing Complex Illustrations
- Apply the Grid, Guides, and Info Panel
- Combine Objects to Create Complex Illustrations
- Organize Artwork with Layers
- Create a Perspective Drawing
- Trace Artwork
2 - Enhancing Artwork Using Painting Tools
- Paint Objects Using Fills and Strokes
- Paint Objects Using Live Paint Groups
- Paint with Custom Brushes
- Add Transparency and Blending Modes
- Apply Meshes to Objects
- Apply Patterns
3 - Customizing Colors and Swatches
- Manage Colors
- Customize Swatches
- Manage Color Groups
- Adjust Color
4 - Formatting Type
- Set Character Formats
- Apply Advanced Formatting Options to Type
5 - Enhancing the Appearance of Artwork
- Apply Effects to an Object
- Create Graphic Styles
- Apply a Mask to an Object
- Apply Symbols and Symbol Sets
6 - Preparing Content for Deployment
- Prepare Artwork for Printing
- Prepare Transparency and Colors for Printing
- Create Slices and Image Maps
- Save Graphics for the Web
- Prepare Documents for Video
- Prepare Files for Other Applications
7 - Setting Project Requirements
- Identify the Purpose, Audience, and Audience Needs
- Determine and Evaluate Standard Copyright Rules for Artwork, Graphics, and Graphics Use
- Determine and Evaluate Project Management Tasks and Responsibilities
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
This course is intended for designers, publishers, pre-press professionals, marketing communications professionals, or people taking on design responsibilities who need to use Illustrator to create illustrations, logos, advertisements, or other graphic documents.
To ensure your success in this course, you should be familiar with basic computer functions such as creating folders, launching programs, and working with Windows. You should also have basic Windows application skills, such as copying and pasting objects, formatting text, and saving files.
Familiarity with basic design terminology, such as palettes, color modes, shapes, text, and paths, is highly recommended.
Scientists are using a fluid called “bio ink” to 3D print synthetic tissue directly into the body, using a programmable nozzle.
The ink makes a framework for living cells, with the machine dispensing the bio ink into the body “much like an icing tube squeezes out gel, only in a highly precise, programmable manner” said a statement released by the Terasaki Institute where the ink was created.
To 3D Print Fake Tissue
The statement released by the institute went on to clarify:
“Such improvements in tissue engineering are instrumental in providing lower-risk, minimally-invasive laparoscopic options for procedures such as the repair of tissue or organ defects, engineering/implanting patches to enhance ovarian function, or creating bio-functional hernia repair meshes. Such options would be safer for the patient, save time and be more cost-effective”.
The CEO of the Terasaki Institute and co-creator of the project Ali Khademhosseini explained the invention further:
“Developing personalized tissues that can address various injuries and ailments is very important for the future of medicine.
“The work presented here addresses an important challenge in making these tissues, as it enables us to deliver the right cells and materials directly to the defect in the operating room”.
The Terasaki Institute for Biomedical innovation is a non-profit organisation specialising in innovative, personalised medicine.
Other projects pioneered by the institute include experimenting with microneedling therapeutic stem cells into damaged tissues and extracting skin samples using “interstitial fluid” and microneedles.
3D Printing in Medicine
3D printing has been making strides in the medical industry this year outside of the Terasaki Institute.
Based in Europe, patient-specific implant company Xilloc has been creating 3D-printed bone implants based on the patient's CT scan. Developed with a bone-like 3D printing material containing calcium phosphate, a compound found in natural bone, the implant merges with the natural bone.
Northwestern University in Illinois have released a pliable 3D printing material for medical uses called hypereastic bone. This material is very malleable and easy to implant during surgery.
The material is a combination of hydroxyapatite, which lends rigidity to real bone, and a specific polymer that makes the hydroxyapatite very flexible, porous and absorbent, allowing it to act as a scaffold for new blood vessels and cells, as reported by All3DP.
Password security is important. Salting and stretching a password are two ways to make passwords more secure from attackers. You can use the strategies separately or deploy them in tandem for the most security.
This article covers the logic of password protection, including salting and stretching passwords.
Because passwords are a user’s key to their data, they are a key target for attackers.
Popular methods of password attacks happen through brute force and rainbow attacks:
- Brute force attacks are a trial and error approach where a computer guesses until it gets it right. This may sound ineffective, but computers can try many, many times. In this era of computing, “Hashcat breaks an 8 character full coverage (a-zA-Z0-9!-=) password in 26 days on a single 1080 Nvidia GPU.”
- Rainbow attacks are another form of cracking passwords, where all possible combinations of hashed passwords are pre-computed and stored in a dictionary. Then, an attacker runs through their list of hashes to find a match. If their scan returns a match, then they have the password.
Passwords help prevent attacks
First things first: nothing online is perfectly secure. Not even computers not connected to the internet are perfectly secure. We use passwords to minimize risk of attack, not to guarantee it will never happen.
Though you cannot guarantee security, there are some ways to increase database security. Salting and stretching passwords are two such strategies:
- Salting passwords. Designing the password encryption so only one password is compromised rather than the whole database. Attackers will attack. Don’t make it easy for them to run off with the whole loot at once.
- Stretching passwords. Lengthening the password (on the database side) so the time it takes to crack the password becomes too expensive for attackers. The idea is that attackers will opt for easier targets—makes common sense, similar to the rumor that crime rates fall proportionately to how high one travels up the hills in San Francisco.
Ways to store passwords
To understand password salting and stretching, let’s look at ways companies can store their data.
It is critical to note: Responsibility does not fall on the company’s shoulders if an individual user compromises their own password. A company can encourage a user to use stronger passwords by enforcing character limits and special character sequences. A company, however, cannot control if a user allows someone walking by to see their password.
A company’s responsibility is to secure their stored passwords.
Direct password storage
Storing passwords as-is is the worst possible way to store them in a database: anyone who obtained the list could read every password in plain text, no computer required.
Simple hash function
A better, but far from perfect, option is to apply a hash function to the password and store the hash value in the database. This is an added step between the phrase the user inputs and the phase (hash) that ultimately gets stored in the database.
Here's how it works: if an attacker were to obtain the list of passwords (the right-most column, above) but the passwords are hashed, the values are unreadable. The attacker would therefore have to figure out which function was used to produce those values from the original passwords.
Hashing sounds good, but it is an all-or-nothing proposition: If an attacker were to crack the hash function, then the hacker could read all the passwords in the database.
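A quick sketch with Python's standard hashlib shows the weakness: identical passwords always produce identical unsalted hashes, which is exactly what precomputed rainbow tables exploit.

```python
import hashlib

def hash_password(password):
    # unsalted hash: the same input always yields the same digest
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

a = hash_password("hunter2")
b = hash_password("hunter2")
# two users with the same password get identical database entries,
# so one precomputed (rainbow table) entry cracks both at once
```

Every user who picks this password ends up with the same stored value, and cracking the hash function once exposes all of them.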
Salting a password
This is where salting comes in. A salt adds a string of characters to the user's password just before the password undergoes hashing. The salt accomplishes two things:
Attackers cannot do a dictionary lookup to see how popular passwords get hashed. Because there is a random string of values added to the password, passwords no longer exist as “popular strings” and are more random. Their complexity has increased greatly.
A unique salt per user prevents an attacker from guessing the hash function and unlocking an entire database of passwords. The added step between the password and the hash function makes it so if an attacker gets the hash function figured out, they still have to run through many more combinations to guess the unique salt value.
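Extending the earlier sketch, a per-user random salt makes two identical passwords hash to different stored values; the salt is saved alongside the hash so verification can repeat the same computation.

```python
import hashlib
import os

def hash_with_salt(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)  # unique per user
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest  # the salt is stored next to the hash

def verify(password, salt, stored_digest):
    return hash_with_salt(password, salt)[1] == stored_digest

# two users choose the same password, yet their stored hashes differ
salt1, h1 = hash_with_salt("hunter2")
salt2, h2 = hash_with_salt("hunter2")
```

Because every row now has its own random prefix, a single precomputed dictionary no longer matches the whole database.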
Stretching a password
Finally, a tried-and-true method to frustrate attackers is stretching passwords before they’re saved to the database. The primary aim of stretching a password is to make deciphering the password more costly—whether with memory, time, or money—than an attacker can afford.
In stretching, the strength of a password is measured by its bits of key strength. Methods of lengthening the number of bits a password has comes down to the hash function. Usually, hash functions are looped thousands of times, simulating randomness and adding more and more bits to the complexity of a password passed to the database.
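The standard library's PBKDF2 implementation does exactly this kind of looping; the iteration count is the stretch factor (200,000 below is an illustrative choice, not a prescribed value).

```python
import hashlib
import os

def stretch(password, salt, iterations):
    # PBKDF2 loops HMAC-SHA256 `iterations` times; raising the count
    # raises the attacker's per-guess cost by the same factor
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations)

salt = os.urandom(16)
stored = stretch("hunter2", salt, 200_000)
# verification repeats the same stretching, so it stays deterministic
```

A legitimate login pays the iteration cost once per attempt; an attacker pays it once per guess, across billions of guesses.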
How to stretch passwords
For the developer, the goal of password stretching software is to increase computational time on the attacker’s system. Stretching maximizes the difficulty an attacker may have to decrypt the data, while still maintaining usability of the application itself.
When a password is encrypted, the user has to wait for their password to run through the hoops and get verified against their actual password. If it takes the user’s computer 3 minutes to hash their password and check it against the database, that might be unreasonable. But if the user’s password is submitted and verified with the database in a few milliseconds, then there could be room for improvement.
For companies, perhaps the best improvement is to limit the instances of user passwords.
For more on cybersecurity topics and practices, browse our BMC Security & Compliance Blog or check out these articles:
- Cybersecurity: A Beginner’s Guide
- What Is a Cyber Resilience Strategy?
- Introduction to Enterprise Security
- 5 Examples of Recent Data Breaches
Today’s secure environment will have vulnerabilities in it tomorrow, so an organization cannot allow itself to become complacent. In this course, we’ll introduce you to the 8 phases of threat intelligence.
In today’s cybersecurity landscape, it isn't possible to prevent every attack. Today’s attackers have significant funding, are patient, sophisticated, and target vulnerabilities in people and processes as well as technologies. With organizations increasingly relying on digitized information and sharing vast amounts of data across the globe, they have become easier targets for many different forms of attack.
As a result, every company’s day-to-day operations, data, and intellectual property are seriously at risk. In a corporate context, a cyber attack can not only damage your brand and reputation, but it can also result in the loss of competitive advantage, create legal/regulatory non-compliance, and cause steep financial damage.
Cyber threat intelligence (CTI) is an advanced process enabling organizations to gather valuable insights based on the analysis of contextual and situational risks. These processes can be tailored to the organization’s specific threat landscape, industry, and market.
This intelligence can make a significant difference to organizations' abilities to anticipate breaches before they occur. Giving organizations the ability to respond quickly, decisively, and effectively to confirmed breaches allows them to proactively maneuver defense mechanisms into place, prior to and during the attack.
In this course, we’ll introduce you to the 8 phases of threat intelligence:
Hunting - The goal of hunting is to establish techniques to collect samples from different sources that help to start profiling malicious threat actors.
Features Extraction - The goal of Features Extraction is to identify unique Static features in the binaries that help to classify them into a specific malicious group.
Behavior Extraction - The goal of Behavior Extraction is to identify unique Dynamic features in the binaries that help to classify them into a specific malicious group.
Clustering and Correlation - The goal of Clustering and Correlation is to classify malware based on Features and Behavior extracted and correlate the information to understand the attack flow.
Threat Actor Attribution - The goal of Threat Actor Attribution is to locate the threat actors behind the malicious clusters identified.
Tracking - The goal of tracking is to anticipate new attacks and identify new variants proactively.
Taking Down - The goal of Taking Down is to dismantle organized crime operations.
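As a toy illustration of the Features Extraction and Clustering phases (the sample names and feature sets below are hypothetical), samples can be fingerprinted by their extracted features and grouped by identical fingerprints:

```python
import hashlib
from collections import defaultdict

# hypothetical extracted static features: imported API names per sample
samples = {
    "a.exe": {"CreateRemoteThread", "WriteProcessMemory", "InternetOpenA"},
    "b.exe": {"CreateRemoteThread", "WriteProcessMemory", "InternetOpenA"},
    "c.exe": {"RegSetValueExA", "InternetOpenA"},
}

def feature_fingerprint(features):
    # stable fingerprint of a feature set (similar in spirit to an imphash)
    blob = ",".join(sorted(features)).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

clusters = defaultdict(list)
for name in sorted(samples):
    clusters[feature_fingerprint(samples[name])].append(name)
# a.exe and b.exe share a fingerprint and fall into one cluster,
# hinting that they belong to the same family or threat actor
```

Real pipelines use richer static and dynamic features and fuzzier similarity measures, but the grouping step works the same way.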
Course Duration & Access
- 200+ Hands-on Exercises
- 400+ HD Videos
- 20+ Hours of Content
- Watch Videos from Android & iOS Apps
- Lifetime Access to Content
- 24/7 Live Technical Support
- Complete Practical Training
- Guidance to Set Up Your Own Lab
Benefits of Enrolling with Ethical hackers Academy?
With Ethical Hackers Academy, you will get expert training and learn about real-world cyber attacks: prevention, analyzing cyber threats, and breaking down attack vectors, all with complete practical training.
All the courses are created by subject matter experts and real-world practitioners who have more than 10 years of real-world experience.
Is there any limit?
With all of our courses you get lifetime access, and there are no restrictions or video limits. You have full freedom to learn whenever you like.
What are the Course available?
We offer cyber security and ethical hacking courses covering all domains, starting from networking, malware analysis, Python, red team certification, bug bounty, IoT, and more.
How often the content will be added?
We keep updating existing courses and add new courses at regular intervals.
How can I access the courses enrolled?
After enrollment you will get access to the courses within 3 minutes; for bundle courses there may be a slight delay. You can access the login portal from here.
What can I do if have doubts?
If you have any question within the course you can reach the instructors using a message button with your learning management portal. For other assistance you can contact our live chat support 24/7.
Do you have any Android or iOS apps?
Can I get refunds after enrollment?
Yes, you can get refunds after course enrollment; here you can find more details.
Are there any minimum system requirements to access Learning portal?
We support Chrome, Firefox and IE on Windows, Mac, Linux desktops, Android and iOS apps.
What is a Mode of Training?
Training mode is self-paced online training with 24/7 learning support. We do not provide any offline training.
Payment & Security
Your payment information is processed securely. We do not store credit card details nor have access to your credit card information.
You've seen it in movies and heard it from everyone, but do you really know what the speed of light is?
What is the speed of light?
An astronomer once determined that light moves at a speed of 301,000 kilometers per second. Earth's orbit around the Sun was the basis of his calculation.
Since then, there have been several refinements, and it is now established that the speed of light differs when traveling through different media. For example, light travels at its maximum speed in the vacuum of space, but it drops a little when traveling through air. In diamond, light travels slowest of these examples.
How fast is the speed of dark?
Dark matter accounts for about 80 percent of all matter in the universe. Years back, astronomers determined that dark matter moves at about 54 meters per second, far slower than the speed of light.
How long does light take to travel from the Sun to Earth?
Because the Sun is about 150 million kilometers away from Earth, it takes about 8 minutes and 19 seconds for light to travel from the Sun to Earth. That might seem like a long time, but it is nothing compared to the almost 40,000 years it takes photons to travel from the Sun's center to its surface.
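The 8-minute figure is easy to check from the rounded numbers in this article:

```python
# rounded inputs from the article
distance_km = 150_000_000   # average Sun-Earth distance, km
speed_km_s = 299_792        # speed of light in a vacuum, km/s

seconds = distance_km / speed_km_s
minutes, remainder = divmod(seconds, 60)
# about 8 minutes and 20 seconds with these rounded inputs; the exact
# average distance (149.6 million km) gives the 8 min 19 s quoted above
```
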
At Iluminar, our co-owners Eddie Reynolds and Joni Hamasaki are the lighting experts, with over 30 years of industry experience. If you are looking for any lighting solution, we look forward to providing you the best service. To learn more, contact us at 281-438-3500.
The terms Trojan horse or Trojan in the computer environment stand for a computer program that masquerades as a useful application but has hidden functions unknown to the user. These are executed in the background without the user’s knowledge and can be malicious in nature. Trojans are categorized as malware and unwanted software, although the hidden functions may not always be harmful.
What is a Trojan Horse?
The Trojan differs from a virus in that it acts as a host for the actual malicious code and can, in principle, inject any type of code. A Trojan horse does not have the mechanisms that contribute to self-propagation.
Trojans are often installed by users themselves, under the assumption that they are normal applications. Once installed, the Trojan horse opens backdoors to load additional malicious code or runs malicious programs such as keyloggers. The so-called "federal Trojan" is a Trojan intended to be used by law enforcement or security agencies for online searches in cases of serious crime.
The origin of the name Trojan Horse.
The name is derived from the mythological tale of the Trojan War. According to the tale, the besieged inhabitants of Troy voluntarily let a wooden horse into their city, not knowing that fighters of the Greek besiegers were hidden inside it. By this ruse, the Greeks were able to capture the city.
Possible harmful functions of a Trojan
A Trojan can host any harmful or harmless functions. For example, Trojans with malicious code perform the following hidden functions:
- Opening a backdoor on the computer to give access to hackers
- Stealing data
- Loading additional malicious software
- Taking control of the computer by a hacker
- Integrating the computer into a botnet
- Execution of DDoS (Distributed Denial of Service) attacks
- Recording user input (keylogger)
- Reading data traffic
- Spying on user IDs and passwords
- The deactivation of anti-virus programs or the firewall
- The installation of dialer programs
- The display of unwanted advertising
- Encrypting data and extorting a ransom (ransomware)
- Using computer resources for other purposes, such as mining digital currencies.
Protective measures against Trojans
To protect against Trojans, users must be sensitized. They should never install software unknown to them or programs from dubious sources on the computer, even if they claim to perform useful functions. Downloading software on the Internet should only be done from trusted sites. In addition, the usual measures against malware must be taken, such as the use of up-to-date antivirus programs and firewalls.
Regular updates and patches must be applied to the operating systems and applications used. Caution is also advised with e-mail attachments containing executable files (.exe files). Malicious programs or Trojans may also be hidden in other file types.
What is privileged access management?
Privileged access management, or PAM, is a set of cybersecurity policies, principles, and tools that allow you to manage the access rights of privileged users. By applying a privileged access management solution, you can configure which users can access which resources, protect sensitive data from unauthorized access, and detect and prevent security incidents.
Why is privileged access management important?
Managing privileged access can help you prevent cybersecurity risks like data theft, leaks and abuse, corporate espionage, and credential theft. Deploying a PAM toolset is also important for organizations that need to comply with cybersecurity laws, standards, and regulations like HIPAA, PCI DSS, and NIST SP 800-53. Protecting user access is an essential part of compliance.
What is the difference between privileged access management and identity and access management (IAM)?
While both PAM and IAM help to manage user access, they have a lot of differences:
1. PAM controls only privileged accounts, while IAM can be applied to all accounts.
2. PAM guarantees users will access only the resources to which they have access rights. IAM ensures that only the right people log in to an organization’s systems.
3. IAM verifies users’ identities before providing access to an organization’s resources. PAM checks users’ credentials before providing access.
Achieving regulatory compliance often requires organizations to deploy both PAM and IAM tools to ensure the best possible protection. Ekran System provides you with PAM and IAM tools in one solution.
Why use privileged access management software?
The main goal of using a PAM system is to protect an organization’s sensitive data from unauthorized access. PAM helps you make sure privileged users access only the resources they need for work purposes. Also, it alerts security officers if users do something suspicious with sensitive data.
PAM is useful for protecting both from insider threats like data theft and corporate espionage and from outside attacks like hacking, credential theft, and social engineering.
What are the benefits of privileged access management?
By deploying a PAM solution, an organization:
1. Protects sensitive data it stores
2. Mitigates possible insider and outsider threats
3. Prevents privileged users from violating security policies without affecting their productivity
4. Enhances compliance with cybersecurity requirements
Is Ekran System’s PAM solution a standalone tool?
Can I use the Ekran System PAM solution to manage access privileges of remote employees?
With Ekran System’s PAM solution, you can manage remote access privileges as easily as you can manage access privileges of in-house employees. You can configure access rights for remote users, manage their credentials and secrets, audit their activity, monitor access requests and interactions with sensitive data, etc.
How can I configure access rights for each privileged user in my system?
Ekran System is highly flexible in terms of configuring access rights. For example, it allows you to:
1. Create unique access configuration for a user
2. Configure user roles and assign those roles to groups of users
3. Allow access to sensitive resources for a certain period of time
4. And do even more
Our PAM solution is easy to customize. Also, our support team is always ready to help with customizations and any other questions.
How does Ekran System help to implement the principle of least privilege and the just-in-time (JIT) approach?
You can implement the principle of least privilege using the following capabilities of Ekran System:
1. Granularly configure access rights for privileged users to allow them to interact only with the resources they need
2. Reconfigure users’ access rights at any moment in a couple of clicks
3. Provide access to the most sensitive resources for a set period of time
To implement JIT, you can also use these privileged access management features:
1. One-time passwords that provide users with access only when they need it and for a limited period of time
2. Manual access approval, which is useful for controlling access to the most secured resources
Which fail-safe mechanisms does Ekran System use?
Ekran System supports a high availability mode based on a Microsoft failover cluster. It’s designed in such a way that if the Ekran System server stops working, another server instance can replace it without data loss or reinstallation. To enhance availability, you can create a load balancer cluster for the AppServer or deploy an MS SQL cluster.
How do you protect the Ekran System password vault?
Ekran System encrypts privileged user credentials and other secrets with the Advanced Encryption Standard (AES) 256. These secrets are stored in an SQL database, which can be located on a separate machine.
We also use encryption to protect initial vectors for time-based one-time passwords, monitoring records, exported forensic data, and passwords of internal Ekran System users. You can learn more about Ekran System encryption mechanisms in our documentation.
Can I get help with deploying, configuring, and maintaining Ekran System?
We’ve prepared step-by-step guides for deploying Ekran System in the form of agents or jump server instances. The documentation also contains instructions on how to configure Ekran System components.
If you have any additional questions about our privileged access management tools, feel free to contact our support team. | <urn:uuid:2bfc88b7-9db7-4ff5-b5a4-e2f99a2a9589> | CC-MAIN-2022-40 | https://www.ekransystem.com/en/product/privileged-access-management | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00701.warc.gz | en | 0.886736 | 1,125 | 2.65625 | 3 |
In his 1983 book, Modern times: a history of the world from the 1920s to the year 2000, historian Paul Johnson claims the modern world started on 29 May 1919, when astronomers studying a solar eclipse were the first to test and prove Einstein’s theory of relativity. As a result, writes Johnson, “The belief began to circulate… that there were no longer any absolutes: of time and space, of good and evil, of knowledge, above all of value”.
A century of breaking down social, intellectual and scientific absolutes later, we label this battle between binary opposites and relative values by using phrases such as “culture war”. Arguably, the biggest social and cultural influence in the world today is an industry built entirely around binary logic that is being used as a tool to globalise those culture wars and increasingly to try to divide society into two extremes. It’s as if those “culture warriors” are trying to remake the absolute world, by using the digital revolution to divide relative values back into two opposites.
Those of us in the technology community, therefore, sit in the midst of a conflict between our binary heritage and a relativistic world we partly helped to create. As such, we have a responsibility to avoid letting the increasingly complex discussions about the impact of the digital revolution become one of comparing extremes. Our binary underpinning has shown, through the huge diversity of the way technology is used, that this is a world composed of shades of grey, not simply of black and white.
We see examples of this every day. “Is social media a good thing or a bad thing?” scream the headlines. It’s the same with artificial intelligence, with big data, with the impact of mobile phones on children. In Einstein’s shadow, technology has enabled almost infinite nuance, yet it is so often reduced to existing on opposing ends of what is in reality a spectrum.
One of the latest examples is around the use of our personal medical records by the NHS. We’re facing another poorly communicated “data grab” – a repeat of the 2014/15 Care.data fiasco. Should we opt in or opt out of our health data being gathered? Note – only two choices offered. But surely you want better health research and to save lives? Well, yes of course. But…
The beauty of modern technology is the way it makes the most complex things, simple. But we must avoid making complex arguments about the role of technology, overtly simple. The binary world bears an enormous responsibility when it comes to sustaining the theories of relativity. | <urn:uuid:67e2b187-54e5-409d-8f3c-b08602715cc9> | CC-MAIN-2022-40 | https://www.computerweekly.com/blog/Computer-Weekly-Editors-Blog/The-dangers-of-binary-arguments-in-a-complex-and-relative-digital-world | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00101.warc.gz | en | 0.931598 | 541 | 2.53125 | 3 |
Most network folks know how traceroute works: by manipulating a feature in the IPv4 header called the Time To Live – the most threateningly-named portion of the TCP/IP stack! The only way it could be worse is if they’d called it the “Time To Die”.
The TTL value in the IPv4 header (or the Hop Limit value in the IPv6 header) goes down by 1 as it passes from one router to the next. If the TTL ever reaches zero then the device that sets the TTL to zero will (hopefully!) send an ICMP “Time Exceeded: TTL expired in transit” message back to the source.
Traceroute takes advantage of this: it starts by sending a packet with a TTL of 1, and as such our immediate next-hop is forced to reply, because it will set the TTL to zero. Voila: by forcing it to reply, we’ve just discovered our first hop in the chain! We then send another packet, this time with a TTL of 2. Our immediate next-hop receives the packet, sets the TTL to 1, passes it to the second router in the chain, who sets the TTL to zero, and then sends it own TTL expired message. Voila: we now know the second hop in the chain! Gosh, it’s almost like magic!
It’s important to be aware that the routers in the path don’t *have* to respond. To quote RFC 792, which is the RFC for ICMP: “If the gateway processing a datagram finds the time to live field is zero it must discard the datagram. The gateway may also notify the source host via the time exceeded message.” Notice the words “must” and “may” in that quote. Don’t be fooled if you don’t get a response from a hop in the chain of a traceroute: it could just be that the device has been set up to not respond.
You could argue it’s a good security practice to not respond, because it stops people working out what devices are on a network. An alternative view is that if you turn this stuff off, you make it harder to troubleshoot. You will of course have your own opinions on this, and whatever they are, I’m sure that you’re correct, and everyone else is wrong. Well done to you for being so clever!
There’s a chance you knew everything I’ve said so far already. Here’s something you might not have known though. We’ve talked so far about how devices send out a “packet” when performing a traceroute. But what does that mean? What’s actually in that packet? Many people assume it’s just a ping. And on Windows this is indeed the case: traceroutes from Windows machines send out an ICMP Echo Request.
But did you know that on Linux and Unix machines (like MacOS), the default is to actually send a UDP packet on ports 33434 to 33534? Cisco also use UDP 33434, but the port goes up with each hop. As for Juniper, it’s FreeBSD under the hood, so it too uses UDP.
Why does this matter? Because it can have an impact on the results! If a device is in front of a firewall, or if the device itself has any kind of firewall filtering functionality, then it could feasibly be allowing ICMP but denying whatever UDP port the traceroute traffic comes in on. And this is exactly the scenario a customer of ours reported recently.
Below I’m doing a traceroute to a real-world IP address. However, I’ve taken the real-world IPs out of the traceroute output, for obvious reasons, and I’ve replaced the final three hops with the obviously fake and not-legitimate IPs address of 444.444.444.444, 555.555.555.555, and 666.666.666.666. Three addresses guaranteed to not exist in the real world!
Here’s what a traceroute to our alleged 666.666.666.666 address looks like by default from my Macbook. The traceroute is going out as a UDP packet, and the very final hop doesn’t reply.
Chris-Parkers-MacBook-2:~ chrisparker$ traceroute 666.666.666.666 traceroute to 666.666.666.666 (666.666.666.666), 64 hops max, 52 byte packets 1 10.8.159.254 (10.8.159.254) 1.487 ms 1.100 ms 1.126 ms 2 184.108.40.206 (220.127.116.11) 1.502 ms 1.520 ms 1.368 ms 3 18.104.22.168 (22.214.171.124) 2.176 ms 2.083 ms 2.080 ms 4 126.96.36.199 (188.8.131.52) 3.725 ms 3.752 ms 3.705 ms 5 184.108.40.206 (220.127.116.11) 3.516 ms 3.581 ms 5.666 ms 6 ge-2-1-0.mpr1.lhr2.uk.above.net (18.104.22.168) 8.533 ms 5.862 ms 5.485 ms 7 ae13.mpr3.lhr3.uk.zip.zayo.com (22.214.171.124) 3.944 ms 4.163 ms 4.050 ms 8 4444.4444.4444.uk.zip.zayo.com (444.444.444.444) 4.635 ms 4.481 ms 4.311 ms 9 555.555.555.555.ipyx-xxxxxx-zyo.above.net (555.555.555.555) 5.074 ms 5.224 ms 4.614 ms 10 * * * 11 * * * 12 * * * 13 *^C Chris-Parkers-MacBook-2:~ chrisparker$
You wouldn’t know it from the traceroute above, but I can tell you for fact that the final hop is up, and live – but isn’t accepting UDP messages. As such, the end device never even gets to a stage where it can process the packet and set the TTL from 1 to 0. And so, no ICMP TTL exceeded message is sent back to the source.
Notice how the traceroute carries on past line 10, which is where the end destination should live. Why is this? Our source machine doesn’t realise it’s already reached the end destination, so it sends out yet another set of three packets, this time with the TTL set one higher than before. By the time it reaches our end destination, the TTL will actually be 2 – but it doesn’t matter, because this packet is still being dropped. It’s UDP, and just like the last packet it isn’t allowed.
Similarly, the packet at line 12 would have had a TTL of 3. Our plucky source machine just assumes that if a hop in the chain doesn’t respond, there could well be other stuff beyond it. Our source device doesn’t know that it is in fact at the end destination. Bless!
Now let’s see what happens when I use the -I (capital i, not lowercase L!) switch, to force my Mac to use ICMP. Previously, line 10 was where things ended. Will we get a different result this time? (Spoiler alert: yes!)
Chris-Parkers-MacBook-2:~ chrisparker$ traceroute -I 666.666.666.666 traceroute to 666.666.666.666 (666.666.666.666), 64 hops max, 72 byte packets 1 10.8.159.254 (10.8.159.254) 2.273 ms 1.233 ms 3.960 ms 2 126.96.36.199 (188.8.131.52) 1.570 ms 1.516 ms 1.315 ms 3 184.108.40.206 (220.127.116.11) 2.111 ms 1.938 ms 1.911 ms 4 18.104.22.168 (22.214.171.124) 3.765 ms 3.903 ms 3.689 ms 5 126.96.36.199 (188.8.131.52) 3.616 ms 3.493 ms 3.319 ms 6 ge-2-1-0.mpr1.lhr2.uk.above.net (184.108.40.206) 3.565 ms 3.470 ms 7.246 ms 7 ae13.mpr3.lhr3.uk.zip.zayo.com (220.127.116.11) 3.865 ms 3.724 ms 3.667 ms 8 4444.4444.4444.uk.zip.zayo.com (444.444.444.444) 4.348 ms 4.440 ms 4.219 ms 9 555.555.555.555.ipyx-xxxxxx-zyo.above.net (555.555.555.555) 4.818 ms 5.503 ms 4.368 ms 10 666.666.666.666 (666.666.666.666) 6.876 ms 5.495 ms 7.177 ms Chris-Parkers-MacBook-2:~ chrisparker$
The traceroute actually completed! Our box accepted ICMP traffic, and as such it replied.
The problem our customer mentioned was that they thought there was a routing problem, because “sometimes the last hop responded, and sometimes it didn’t”, so they said. Turns out that it “wasn’t working” from a Linux box, but it was from a Windows box.
Ultimately they could always ping the end destination, so I was able to reassure them that everything was working as expected. But when the ping didn’t convince them, showing them these two traceroutes did the trick. A bit of extra knowledge about the inner-workings of the protocol helped me to reassure them just a little bit more. I hope you find this info helpful too!
While we’re here, let’s end with another bit of useful traceroute info. Heed these words: slow response times from a device don’t necessarily indicate slow speeds on the line. It could very likely indicate that the control plane of the box is doing a lot of hard work, and so isn’t prioritising the generation of TTL exceeded messages. Transit traffic (ie traffic passing through the router, in the data plane) could be absolutely fine! If one single hop has high response times, but all the hops after it have speeds that you’d expect, then this is almost definitely the case.
Be careful with traceroute: it doesn’t always tell you what you think it’s telling you. If you want to know more, there’s an hour-long video on YouTube by an engineer called Richard Steenbergen, who’ll tell you all about it. It’s well worth your time if you want to know more!
Hey there: thanks for reading this! If you enjoyed it and want to find out when I make new posts, follow me on Twitter. And if you enjoyed this post, of course I’d love you to share it on your favourite social media of choice. Go on: be the hero you wish existed in the world!
And if you fancy some more learning, take a look through my other posts. I’ve got plenty of cool new networking knowledge for you on this website, especially covering Juniper tech and service provider goodness.
It’s all free for you, although I’ll never say no to a donation. This website is 100% a non-profit endeavour, in fact it costs me money to run. I don’t mind that one bit, but it would be cool if I could break even on the web hosting, and the licenses I buy to bring you this sweet sweet content. | <urn:uuid:3c43c21d-16c9-4db4-a0fa-820555110aee> | CC-MAIN-2022-40 | https://www.networkfuntimes.com/a-qurik-of-traceroute-that-youll-want-to-know-about/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00101.warc.gz | en | 0.898879 | 2,644 | 2.765625 | 3 |
Please visit the COVID-19 response page for resources and advice on managing through the crisis today and beyond.
Did you mean...
Or try searching another term.
Human-centered AI learns from human input and collaboration, focusing on algorithms that exist among a larger, human-based system. Human-centered AI is defined by systems that are continuously improving because of human input while providing an effective experience between human and robot. By developing machine intelligence with a goal of understanding human language, emotion and behavior, human-centered AI pushes the boundaries of previously limited artificial intelligence solutions to bridge the gap between machine and human being.
From a business standpoint, human-centered AI solutions leverage human science and qualitatively thick data to understand the deeper needs, aspirations and drivers that underlie customer behaviors in your market. Advanced contextual analytics combine data and human science to deliver specific behavioral information. When analytics are applied to human behaviors and choices, patterns appear. These contextual analytics combine data and human science to produce dramatically improved, personalized customer experiences. Clear, informed business strategies can be developed when companies know exactly what their customers do and expect.
The business benefits of human-centered AI include:
For more information about human-centered AI, see our additional articles below. | <urn:uuid:26b2eba4-eb08-403c-94c9-c652a2efabde> | CC-MAIN-2022-40 | https://www.cognizant.com/us/en/glossary/human-centered-ai | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00101.warc.gz | en | 0.918235 | 262 | 2.765625 | 3 |
Protecting K-12 Student Data and Complying With Privacy Standards Requires a Comprehensive Security SolutionDownload Case Study
Our education systems are under siege
The education systems, including K-12 school institutions, are in the crosshairs of increasingly frequent and sophisticated cyberattacks. In just one month of 2021, educational organizations suffered more than 5.8 million malware incidents.
Teachers, administrators and students are also targets as they use various devices such as laptops and smartphones to browse social media or send messages with friends and family.
David Richardson, Lookout Vice President of Product, spoke with ABC7 Bay Area about device security and what parents need to be aware of as students go back to school.
So what makes education institutions so appealing to cybercriminals?
Students are high value targets
Educational institutions possess large volumes of sensitive student data. According to the K-12 Cybersecurity Resource Center, school districts across the U.S. handle data for more than 50 million individuals. Cyber criminals place an especially high value on personally identifiable information (PII) of students, particularly their social security numbers, as they are less likely to be caught when impersonating a child.
Education systems have become more vulnerable
Protecting data of K-12 students used to be far easier. In the past, students learned exclusively in the classroom with hardcopy materials and relied on the school or local library for the learning material they needed to do their homework. Data almost exclusively resided “on premises,” as in it was stored in the school building or the school system’s data center.
The transformation to a digital education environment, fueled and accelerated by the COVID-19 pandemic and the imperative for remote learning, changed everything. Today, K-12 students rely on school-issued Chromebooks and personal mobile devices to access cloud-based educational applications that deliver customized learning, social media apps to collaborate with teachers and other students, and the internet to reach web sites to complete their homework.
Digital learning not only expanded the educational landscape, but it also moved PII into new environments, making it more vulnerable than ever to compromise. In fact, ransomware attacks hit nearly 1,700 U.S. educational institutions in 2020, a 100% increase from the prior year. The average attack cost $2.73 million in downtime, repairs and lost opportunities. And as a result, it’s a lot easier to lose sensitive data like student PII, which could trigger a violation of the Family Educational Rights and Privacy Act (FERPA), which comes with its own set of stiff penalties and consequences.
Outdated, inadequate security solutions
Most security solutions currently used by school districts were designed to protect on-premises data and apps. They are ill-equipped to account for apps that reside in the cloud or student data that lives and travels on mobile devices, hotspots and throughout the internet.
To ensure sensitive data is protected, schools must rethink their security strategy. Simply deploying modern security products for one-off use cases isn’t enough. Some systems focus on implementing a secure web gateway (SWG) to support secure access to the internet. That’s necessary, but not sufficient, as it leaves out other parts of student activity, such as mobile devices and various cloud apps.
Protecting students and maintaining compliance requires a unified approach
K-12 school systems must eliminate the patchwork of solutions that fail to meet the depth and breadth of security requirements that accompany digital education. Lookout offers a unified, industry-leading platform that:
- Enables controlled access for students, teachers, families and administrators to all of the on-premises and cloud-based apps they need.
- Protects all of the devices — including Chromebooks, smartphones and tablets — students use to learn, collaborate and do their homework.
- Provides SWG functionality for safe student web browsing by blocking access to sites deemed inappropriate or outside of acceptable use.
- Counteracts cybercriminal activity by protecting data wherever it goes and is stored, continuously assessing risk and dynamically blocking access to locations where viruses and malware may be present.
The Lookout Security Platform delivers these capabilities without impacting the student learning experience or invading the privacy of the apps and data on their personal mobile devices. Our platform represents a comprehensive solution that best meets the enhanced data protection, privacy preservation and FERPA compliance mandates facing K-12 school districts in today’s digital education world.
Learn more about how Lookout secures student data: https://www.lookout.com/solutions/k12edu | <urn:uuid:ab900921-e72c-45d3-97f4-2970b66b436e> | CC-MAIN-2022-40 | https://www.lookout.com/blog/protecting-k-12-student-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00101.warc.gz | en | 0.938568 | 946 | 2.625 | 3 |
Networking has its own language. On top of that, there are a lot of acronyms. If you can't speak and understand the language, you won't get anywhere.
Networking used to be an exotic technical area that only specialists saw. But today most everyone is on the Internet. Learn the names, and things will get easier.
The OSI Model
Yes, this is an academic model that always appears when networking comes up. But it's useful!
Here, in written form, is how I explain in about 15 minutes all you need to know about networking for an introductory cybersecurity course. The OSI model, or at least the important parts of it, helps people to quickly understand the different tasks that networking must perform simultaneously.
| Layer | Function | Device making decisions at this layer |
|-------|----------|---------------------------------------|
| 7: Application | Jobs software programs do | ALG, AV, spam filter, DLP, WAF, etc. |
| 4: Transport | UDP: messages to numbered ports; TCP: connections to numbered ports | Firewall |
| 3: Network | Relay packets hop by hop to anywhere by IP address: [netid\|hostid] | Router |
| 2: Data-Link | Send frames to HW/MAC addresses | Switch |
| 1: Physical | Send and receive 0 vs 1 bits | Repeater (link) or hub (star) |
Understanding the Protocols
The protocols themselves are defined in RFC documents. However, those documents include far more than you probably want to know. See my quick overview if you just want a reminder of the headers.
Get reference texts, but save money by buying older editions. You need to understand IP routing, ICMP rules, TCP handshaking, and so on, and those things haven't changed for decades. Here are some of the books on my shelf:
Internetworking with TCP/IP, Volume 1, Douglas Comer, Prentice Hall. This is a very readable description of the major components (and many of the minor ones) of the TCP/IP internetworking protocol suite. Comer's book is the best place to start.
TCP/IP Illustrated, Volume 1, W. Richard Stevens, Addison-Wesley. A bit tough for an introduction, but a good one to follow Comer's book with lots more details. Comer's book is readable, this is more like an encyclopedia.
Managing IP Networks with Cisco Routers, Scott M. Ballew, O'Reilly and Associates. Good advice on IP routing with Cisco.
Interconnections: Bridges and Routers, Radia Perlman, Addison-Wesley. Loads of details on routing algorithms and protocols.
These organizations design protocols, identify standards, and define and disseminate The Truth:
Network Monitors, or Packet Sniffing
The Wireshark software package can capture and display network traffic.
You might refer to this as "network monitoring", or "packet capture", or "protocol analysis". You might be troubleshooting, or you might be stealing passwords or sensitive data. Protocol analyzers are dangerously powerful tools!
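To demystify what a protocol analyzer actually does, here is a minimal Python sketch that decodes the Ethernet II and IPv4 headers of a raw frame by hand. The frame bytes below are hand-built for illustration (with the checksum zeroed), not a real capture:

```python
import struct

def parse_frame(frame: bytes):
    """Decode the Ethernet II and IPv4 headers of a captured frame.

    A teaching sketch of what a sniffer does, not a full decoder: it
    assumes an untagged Ethernet II frame carrying IPv4.
    """
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    result = {
        "dst_mac": dst.hex(":"),
        "src_mac": src.hex(":"),
        "ethertype": hex(ethertype),
    }
    if ethertype == 0x0800:                      # EtherType 0x0800 = IPv4
        ver_ihl, tos, total_len = struct.unpack("!BBH", frame[14:18])
        ttl, proto = struct.unpack("!BB", frame[22:24])
        result.update({
            "ip_version": ver_ihl >> 4,
            "header_len": (ver_ihl & 0x0F) * 4,
            "ttl": ttl,
            "protocol": proto,                   # 6 = TCP, 17 = UDP, 1 = ICMP
            "src_ip": ".".join(str(b) for b in frame[26:30]),
            "dst_ip": ".".join(str(b) for b in frame[30:34]),
        })
    return result

# A hand-built example frame: Ethernet II header plus a minimal IPv4 header.
frame = bytes.fromhex(
    "ffffffffffff"      # destination MAC (broadcast)
    "001122334455"      # source MAC (made up)
    "0800"              # EtherType: IPv4
    "45" "00" "0014"    # version 4 / IHL 5, TOS, total length 20
    "0000" "0000"       # identification, flags / fragment offset
    "40" "11" "0000"    # TTL 64, protocol 17 (UDP), checksum (zeroed)
    "c0a80001"          # source IP 192.168.0.1
    "c0a800ff"          # destination IP 192.168.0.255
)
print(parse_frame(frame))
```

Wireshark does exactly this kind of field-by-field decoding, just for hundreds of protocols at once.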
Operating System Details
Every operating system has its own command-line interface to check and set network parameters. Linux, Windows, macOS, and Cisco IOS each do it their own way. Learn the command-line networking tools.
Physical / Data Link Layers
Cisco Router Simulators
Modern switched networks are built in a multi-tier architecture. It may be as simple as spine switches at the core and leaf switches for the host connections.
A three-tier architecture uses core, distribution (or aggregation), and access switches. The core switches at, well, the core of your network, distribution switches in data centers, and access switches for host connections.
A top-of-rack or TOR model has an access switch in each rack. Not necessarily at the top! All the servers in that rack connect to the TOR switch. It then connects to a distribution switch for a row of racks, which then connects to a core switch. If the inter-switch connections are fibre, the architecture is somewhat "future-proofed" or "upgrade-proofed" — if you upgrade the TOR access switches, it's a simple replacement.
An end-of-rack or EOR model connects all the servers in all the racks in that row directly to a distribution switch at the end of the row. The advantage is that there is one less switch in the end-to-end connection, and a little less latency. The disadvantage is that the cabling is much more difficult to manage.
Ethernet 5-4-3 rule (the IEEE way)
The rule was needed in the days of 10BASE5 and 10BASE2 bus topologies built from coaxial cable, as the Ethernet standard required that a signal reach every part of the network within a specified time:
- There can only be a maximum of five LAN segments,
- connected via four repeaters,
- and only three may have user connections.
Modern switched Ethernet LANs are exempt from the 5-4-3 rule because switches have buffers to temporarily store frames and all nodes can access a switched Ethernet LAN simultaneously.
Network Layer — IP
IP addresses and subnets
I have a page that aims to be a "just enough" explanation of IP addresses, netmasks, and subnets.
CIDR and VLSM
Classless Inter-Domain Routing and Variable-Length Subnet Masks
Another page introduces CIDR and VLSM.
My pages are enough to get you started.
To go deeper into subnet design, VLSM, CIDR, and so on, find and read this 76-page paper by a 3Com staff member: Understanding IP Addressing: Everything You Ever Wanted To Know.
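If you want to check your subnet arithmetic programmatically, Python's standard ipaddress module does the same calculations a subnet calculator performs. The address and prefixes below are arbitrary examples:

```python
import ipaddress

# Derive everything about a subnet from one interface address.
iface = ipaddress.ip_interface("192.168.100.5/21")
net = iface.network

print("Network address: ", net.network_address)    # 192.168.96.0
print("Netmask:         ", net.netmask)            # 255.255.248.0
print("Wildcard:        ", net.hostmask)           # 0.0.7.255
print("Broadcast:       ", net.broadcast_address)  # 192.168.103.255
print("Addresses:       ", net.num_addresses)      # 2048
print("Usable hosts:    ", net.num_addresses - 2)  # 2046

# CIDR aggregation: collapse adjacent /24s into the shortest prefix list.
nets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
print(list(ipaddress.collapse_addresses(nets)))    # [IPv4Network('10.1.0.0/22')]
```

The same module handles IPv6 prefixes with no changes to the code.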
VLAN or Virtual LAN technology is one of those things that you don't have to use, but once you see what it provides, you will want to.
IP Address Assignment Authorities
IANA
The Internet Assigned Numbers Authority handles global coordination of the DNS root servers and IP address allocation. Regional registries then divide up the world by continent.
Oracle Internet Intelligence Global map showing current disruptions and potential disruptions. Also a related blog, white papers, etc.
RIPE Atlas The product of a global network of probes measuring Internet connectivity and reachability.
RIPE NCC The RIPE Network Coordination Centre. Its RIPEstat tool fetches information about any IP address/prefix, ASN, country code, or hostname.
Linux, IPv6, and Cable Modems
Linux, IPv6, and Arris Surfboard cable modems
Major ISPs support IPv6. However, I found that the Arris Surfboard cable modem didn't support IPv6 until I made some changes to my system.
That cable modem, at least the way it operates on Comcast's network, insists on an unusually small Ethernet maximum frame size. Too small, in fact, for IPv6. There were also some IPv6 routing issues. See my page for the details.
IP Routing Logic
Learn how an IP host uses its IP address and netmask along with its routing table to decide how to forward a packet.
The logic is part of the IP protocol — if a device runs IP, this is how it does it.
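The heart of that logic is longest-prefix match: the most specific matching route wins, with the default route as the fallback. A minimal sketch, using a hypothetical routing table; real kernels use radix tries, but the decision rule is the same:

```python
import ipaddress

def next_hop(routing_table, destination):
    """Pick a route the way IP does: the longest matching prefix wins."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    if not matches:
        raise ValueError(f"no route to {destination}")
    best_net, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

# A hypothetical routing table: (prefix, next hop or outgoing interface).
table = [
    (ipaddress.ip_network("0.0.0.0/0"),      "192.0.2.1"),     # default route
    (ipaddress.ip_network("10.0.0.0/8"),     "10.255.0.1"),
    (ipaddress.ip_network("10.1.0.0/16"),    "10.1.255.1"),
    (ipaddress.ip_network("192.168.1.0/24"), "direct: eth0"),
]

print(next_hop(table, "10.1.2.3"))   # 10.1.255.1 (the /16 beats the /8)
print(next_hop(table, "10.9.9.9"))   # 10.255.0.1
print(next_hop(table, "8.8.8.8"))    # 192.0.2.1 (only the default matches)
```

Notice that the default route is just an ordinary entry: 0.0.0.0/0 matches everything, but with prefix length zero it only wins when nothing else matches.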
IPsec
See my simple explanation of what IPsec is, what cryptographic security it provides, and a little about how to set it up.
NAT or Network Address Translation
How NAT Works
It makes sense to use a private IP address space inside an organization. RFC 1918 set aside 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 as private IPv4 address blocks, and all of fc00::/7 in the IPv6 address space is set aside for Unique Local Addresses, an analogous concept.
NAT or Network Address Translation is the magic in an edge router that allows internal clients with private or local IP addresses to connect to external servers.
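The bookkeeping behind port-overloaded NAT (NAPT, or "masquerading") can be sketched as a pair of translation tables. The class name, port range, and addresses below are illustrative, not any real implementation:

```python
class Nat:
    """A toy model of port-overloaded NAT on an edge router."""

    def __init__(self, public_ip, first_port=30000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}    # (private ip, private port) -> public port
        self.back = {}   # public port -> (private ip, private port)

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing connection's source to the public address."""
        key = (src_ip, src_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def inbound(self, dst_port):
        """Translate a reply back to the internal host, if we know it."""
        return self.back.get(dst_port)   # None = unsolicited, drop it

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.1.10", 51515))  # ('203.0.113.7', 30000)
print(nat.outbound("192.168.1.11", 51515))  # ('203.0.113.7', 30001)
print(nat.inbound(30001))                   # ('192.168.1.11', 51515)
print(nat.inbound(12345))                   # None -- no mapping, dropped
```

The last line shows why NAT acts as an accidental firewall: an inbound packet with no existing mapping has nowhere to go.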
Geolocation and Blocking Countries
Geolocate IP
You can use this IP geolocation API to receive highly accurate location data: city, country, longitude/latitude, timezone, and connection type.
Block traffic by country: Countries as CIDR blocks
This archive of country IP block lists in CIDR format lets you block traffic or email on a country-by-country basis.
The IP2location site has a tool that will build rules to block traffic by country. It supports Cisco ACLs, Linux iptables, Nginx, Apache .htaccess, and more.
DNS LOC is about a DNS resource record to describe geographic location. For some now rather old guidance on geolocation investigation (they suggest seeing what time zone the TELNET service announces!), see IP2geo and cities.lk.net.
See the NSA's US Patent 6,947,978, "Method for Geolocating Logical Network Addresses". It builds a network latency topology map using latency to and between known nodes.
Multicast and Anycast
Assigned multicast addresses and address blocks
Multicast routes packets to all members of a group. All participating hosts receive the data, but only one copy of each packet has to traverse the network. RFC 1112 describes how to do multicast.
Anycast, on the other hand, delivers a packet to any single member of the group, you don't care which one. It is used now for things like root and top-level DNS service, and it can be used within an organization for services like DNS and LDAP. Anycast is described in RFC 1546 and RFC 4786.
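Two small offline pieces of the multicast machinery can be shown without sending any traffic: the 224.0.0.0/4 group-address test, and the membership structure a receiving socket hands to IP_ADD_MEMBERSHIP (the group addresses below are illustrative):

```python
import ipaddress
import socket
import struct

def is_multicast(addr):
    # IPv4 multicast is 224.0.0.0/4; ipaddress knows the range.
    return ipaddress.ip_address(addr).is_multicast

def membership_request(group, interface="0.0.0.0"):
    """Build the 8-byte ip_mreq a receiver passes to
    setsockopt(IPPROTO_IP, IP_ADD_MEMBERSHIP, ...): the group
    address plus the local interface (0.0.0.0 = any)."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(interface))

print(is_multicast("224.0.0.251"))           # True
print(is_multicast("192.0.2.1"))             # False
print(len(membership_request("239.1.1.1")))  # 8
```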
The sipcalc tool provides command-line IP subnet calculations. It's available as Linux and BSD packages.
$ sipcalc 98.226.144.69/21
-[ipv4 : 98.226.144.69/21] - 0

[CIDR]
Host address            - 98.226.144.69
Host address (decimal)  - 1659015237
Host address (hex)      - 62E29045
Network address         - 98.226.144.0
Network mask            - 255.255.248.0
Network mask (bits)     - 21
Network mask (hex)      - FFFFF800
Broadcast address       - 98.226.151.255
Cisco wildcard          - 0.0.7.255
Addresses in network    - 2048
Network range           - 98.226.144.0 - 98.226.151.255
Usable range            - 98.226.144.1 - 98.226.151.254

$ sipcalc 2001:558:600d:16:9937:9580:ac52:27f5/64
-[ipv6 : 2001:558:600d:16:9937:9580:ac52:27f5/64] - 0

[IPV6 INFO]
Expanded Address        - 2001:0558:600d:0016:9937:9580:ac52:27f5
Compressed address      - 2001:558:600d:16:9937:9580:ac52:27f5
Subnet prefix (masked)  - 2001:558:600d:16:0:0:0:0/64
Address ID (masked)     - 0:0:0:0:9937:9580:ac52:27f5/64
Prefix address          - ffff:ffff:ffff:ffff:0:0:0:0
Prefix length           - 64
Address type            - Aggregatable Global Unicast Addresses
Network range           - 2001:0558:600d:0016:0000:0000:0000:0000 -
                          2001:0558:600d:0016:ffff:ffff:ffff:ffff
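The same /21 arithmetic can be reproduced with Python's ipaddress module; here using a /21 host address for illustration (the one the hex value 62E29045 in the sipcalc run decodes to):

```python
import ipaddress

iface = ipaddress.ip_interface("98.226.144.69/21")
net = iface.network

print(net.network_address)    # 98.226.144.0
print(net.netmask)            # 255.255.248.0
print(net.broadcast_address)  # 98.226.151.255
print(net.hostmask)           # 0.0.7.255  (the Cisco wildcard)
print(net.num_addresses)      # 2048
```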
DNS and BIND
DNS is the crucial component that makes the Internet useful for humans. It lets us use names that make sense to us: www.chem.purdue.edu is probably a web server, within the Department of Chemistry, at Purdue, which is a University. But unless you're familiar with that university's networks, the IP address 126.96.36.199 wouldn't mean anything to you.
Most organizations use the BIND software package to provide DNS service. You can get BIND at isc.org.
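As a small aside on how such names actually travel on the wire: RFC 1035 encodes each dot-separated label with a one-byte length prefix, ending with a zero byte. A sketch:

```python
def encode_dns_name(name):
    """RFC 1035 wire format: each label preceded by its length, then a 0 byte."""
    out = b""
    for label in name.rstrip(".").split("."):
        assert len(label) <= 63  # RFC 1035 caps labels at 63 octets
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

print(encode_dns_name("www.chem.purdue.edu"))
# b'\x03www\x04chem\x06purdue\x03edu\x00'
```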
The standard introductory RFCs to read are RFC 1034 and RFC 1035, for the truth about DNS. Note that these links to RFCs about DNS take you to the info pages, where you see links to updates and more recent related documents. Also see:
- RFC 1032 and RFC 1033, the Domain Administrator's Guides
- RFC 1535 for security issues
- RFC 1536 for implementation problems
- RFC 1912 for common configuration problems
- RFC 1591, RFC 3071, RFC 2181, and RFC 2182 for DNS structure and delegation
- RFC 4033 for DNS security issues
Transport Layer — TCP and UDP
TCP
IANA maintains the list of all the assigned TCP/UDP port numbers. /etc/services on everything except Windows, and something like C:\Win*\Sys*\drivers\etc\services on Windows, contains a partial list. For the complete answer, see the IANA list.

Examples of netstat output
The netstat command provides loads of information on a machine's network communications: listening TCP ports, currently active sockets, etc. It's available under Linux, Unix, Apple OS X, and Windows, but the precise format of the output varies between operating systems.
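That same services database can be queried from Python's socket module. A small sketch (the helper tolerates minimal systems that ship without the database):

```python
import socket

def service_port(name, proto="tcp"):
    """Look up a well-known port in the system services database
    (/etc/services on Unix); returns None if the database is missing."""
    try:
        return socket.getservbyname(name, proto)
    except OSError:
        return None

print(service_port("http"))   # usually 80
print(service_port("https"))  # usually 443
```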
SSL / TLS
We don't really use SSL any more; it should be nothing but TLS, or Transport Layer Security, but we're all in the habit of saying "SSL". Learn how it works, and how to use it correctly and safely.
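In Python, the safe starting point is ssl.create_default_context(), which switches on certificate verification and hostname checking; a sketch that also refuses the pre-TLS-1.2 protocol versions:

```python
import ssl

# Client-side context with the library's safe defaults: certificate
# verification and hostname checking are both on.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL 3.0 / TLS 1.0 / 1.1

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Pass this context to any stdlib client (urllib, smtplib, http.client) rather than building a bare SSLContext by hand.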
How Browsers Use TLS
SSL/TLS Security Issues
Running TLS 1.3 with Nginx and OpenSSL
Google Cloud, FreeBSD, and TLS
Using Free "Let's Encrypt" TLS Digital Certificates on GoDaddy Hosting
Nginx and Apache HTTP/HTTPS Web Servers
Visualize Nginx and Apache logs in color
Software-Defined Networking (SDN)
SDN or Software-Defined Networking allows hosts to request data flows with specific quality of service, latency, throughput, security, and other parameters.
The OpenFlow project develops open-source infrastructure. Major industry players have their own versions, including Cisco's ACI, VMware's vSphere, and Microsoft's Hyper-V.
Odds & Ends
I was working on this networking project in Japan, and...
See the Internet: Telecommunications Infrastructure in Manhattan
Client IP / OS / Browser Identification
A demonstration of how a PHP script on the server can read and reformat the connection information and the client's request:
Including Jessica Simpson's thoughts on open-source routers,
Gillian Anderson discussing LAN switching,
Elizabeth Hurley on the Cisco 2600 series routers,
Mr Rogers on the RS-232 standard,
and other really odd stuff:
History of the Internet
The Internet Society
The History of the Internet
Broadband Suppliers
RFC 2235
Just What Is A "Daemon", Anyway?
According to the Oxford English Dictionary, it is "an attendant, ministering, or indwelling spirit." Socrates wrote of his daemon as his inner spirit. The designers of daemons in Linux/Unix (a concept later ported to most other operating systems) intended this meaning, as pointed out in some manual pages. It's an uncommon word these days; we usually use the Arabic djinn, often spelled genie, when we're talking about what used to be called a daemon in the Middle Ages.
Manufacturing emerged as the most targeted sector when it came to cyberattacks through 2021, while ransomware and vulnerability exploitations combined to “imprison” businesses, heavily burdening global supply chains.
The assessments surface from IBM’s X-Force Threat Intelligence Index, which identified phishing as the most common cause of cyberattacks through the last year.
Also observed was a 33% rise in attacks caused by vulnerability exploitation of unpatched software, a point of entry that ransomware actors relied on more than any other to carry out their attacks in 2021, representing the cause of 44% of ransomware attacks.
The 2022 report details how in 2021 ransomware actors attempted to “fracture” the backbone of global supply chains with attacks on manufacturing, which became 2021’s most attacked industry (23%), dethroning financial services and insurance after a long reign.
Experiencing more ransomware attacks than any other industry, attackers wagered on the ripple effect that disruption on manufacturing organisations would cause their downstream supply chains to pressure them into paying the ransom.
Alarmingly, almost half (47%) of attacks on manufacturing were caused due to vulnerabilities that victim organisations had not yet or could not patch, highlighting the need for organisations to prioritise vulnerability management.
The study mapped new trends and attack patterns observed and analysed from fresh data – drawing from billions of data-points ranging from network and endpoint detection devices, incident response engagements, phishing kit tracking and more.
Ransomware persisted as the top attack method observed in 2021, with ransomware groups showing no sign of stopping, despite the uptick in ransomware takedowns. According to the 2022 report, the average lifespan of a ransomware group before shutting down or rebranding is 17 months.
Experts also noted warning signs of a brewing cyber-storm in the cloud as cybercriminals lay the groundwork to target migrated environments. The study revealed a 146% surge in new Linux ransomware code and a shift to Docker-focused targeting, potentially making it easier for more threat actors to leverage cloud environments for malicious purposes.
The “nine lives” of ransomware groups
Responding to the recent acceleration of ransomware takedowns by law enforcement, ransomware groups may be activating their own disaster recovery plans. Analysis reveals that the average lifespan of a ransomware group before shutting down or rebranding is 17 months. For example, REvil which was responsible for 37% of all ransomware attacks in 2021, persisted for four years through rebranding, suggesting the likelihood it resurfaces again despite its takedown by a multi-government operation in mid 2021.
While law enforcement takedowns can slow down ransomware attackers, they are also burdening them with the expenses required to fund their rebranding or rebuild their infrastructure.
As the playing field changes, it’s important that organisations modernise their infrastructure to place their data in an environment that can help safeguard it – whether that be on-premises or in clouds. This can help businesses manage, control, and protect their workloads, and remove threat actors’ leverage in the event of a compromise by making it harder to access critical data in hybrid cloud environments.
PrivSec Global has long united experts from both privacy and security, providing a forum where professionals from across these fields can listen, learn, and debate.
Attackers target common grounds among clouds
In 2021, more attackers were observed shifting their targeting to containers like Docker – by far the most dominant container runtime engine according to RedHat. Attackers recognise that containers are common grounds amongst organisations so they are doubling down on ways to maximize their ROI with malware that can cross platforms and can be used as a jumping off point to other components of their victims’ infrastructure.
The report also sent out a note of caution with regards threat actors’ continued investment into unique, previously unobserved, Linux malware, with data provided by Intezer revealing a 146% increase in Linux ransomware that has new code.
As attackers remain steady in their pursuit of ways to scale operations through cloud environments, businesses must focus on extending visibility into their hybrid infrastructure. Hybrid cloud environments that are built on interoperability and open standards can help organisations detect blind spots and accelerate and automate security responses.
Also of note, Asia was cited as chief target of cyberattacks globally, experiencing a quarter of all strikes – more than any other world zone through the past year. Financial services and manufacturing organisations together experienced nearly 60% of attacks in Asia.
Phishing was the most common cause of cyberattacks over the prescribed time period. In X-Force Red’s penetration tests, the click rate in its phishing campaigns tripled when combined with phone calls.
Charles Henderson, Head of IBM X-Force, said:
“Cybercriminals usually chase the money. Now with ransomware they are chasing leverage. Businesses should recognise that vulnerabilities are holding them in a deadlock – as ransomware actors use that to their advantage. This is a non-binary challenge.
“The attack surface is only growing larger, so instead of operating under the assumption that every vulnerability in their environment has been patched, businesses should operate under an assumption of compromise, and enhance their vulnerability management with a zero-trust strategy,” Henderson added. | <urn:uuid:928bf127-3c20-4ae0-a245-c6d7fdae8703> | CC-MAIN-2022-40 | https://www.grcworldforums.com/security-breaches-and-attacks/manufacturing-sector-and-supply-chains-feel-cyberattack-heat-through-2021/4239.article | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00101.warc.gz | en | 0.944865 | 1,069 | 2.53125 | 3 |
The energy market has had rapid increases in demand that cannot be fulfilled with carbon-based fuels. It requires companies to use renewable energies such as solar and wind to keep up. What’s needed is an infrastructure at the edge that can expand while maintaining the grid’s integrity, according to Ricky Watts from Intel at the at the Fortinet OT Symposium, which was presented digitally on Aug. 31.
This is a real change of pace for an industry that has, in spite of the constant changes, been very fixed in its ways. “It’s a very rigid architecture and designed to deliver power reliably,” Watts said. “There have been changes, of course, but it’s really the same infrastructure as when it started 100 years ago.”
As the energy industry evolves, that stability and knowledge of what has worked can be beneficial, Watts said. The world and the ways energy is consumed may be changing, but the basic principles are not.
The challenge, however, is the shift from hydrocarbons, which are stable but have a major impact on the environment, to less stable energy sources such as solar and wind power. This is a tall order, particularly as energy demand might quadruple in the next few years thanks to things like the electric vehicle (EV) market, microgrids and data centers.
“The sun and wind are variable in terms of quality,” Watts said. “[We] need to make sure there’s a reliable grid and getting these unstable power sources in there efficiently.”
Five drivers and challenges for the new power grid
While the power grid needs to adapt and change, Watts said this presents a few challenges that need to be overcome. The five biggest are:
- Sustainability to reduce carbon emissions
- Distributed energy at the edge of the grid
- A secure and connected grid
- Dealing with the demand increase from transportation (particularly EVs)
- Appliance sprawl, which is about connecting more of these modern devices to the power grid as they come up.
It goes beyond those aspects, however, particularly as utilities try to get smarter. There’s a lot of data that could be unlocked by using artificial intelligence (AI).
“Almost everyone is aware that AI can access operational data, and people want to know what do we do with this data? How do we use it? The problem is, right now, it’s locked away in these devices,” Watts said.
Operational technology (OT) devices have been siloed off, and while there has been talk of converging OT with information technology (IT), that’s easier said than done. There are security concerns along with making sure the network is up to speed. More than anything, though, the power grid’s reliability has to remain stable throughout.
“You can’t afford to have instability in the power grid,” Watts said.
Utility-driven innovation and virtualization
New ideas and innovation come from different groups working together and sharing ideas with one another. This is another new step for many workers, who have historically been closed off from one another.
“It’s about working with the industry, building a coalition and driving collaboration and innovation,” Watts said.
Applying virtualization technologies for transformation will help provide next-generation solutions that can make devices smarter and more efficient. Virtualization, which might seem like a strange and nebulous concept, has been around for longer than most people think. Watts said its ability to connect and link ideas and technology together has enormous potential for the energy market.
“We’re moving this technology into the utility and grid market,” he said. “We’re leveraging everything that was built in the past and [have] created something very stable for a software-defined world.”
A cybersecure power grid is another concern, which Watts likened to a cat-and-mouse situation. “Things are always adapting and changing,” he said. “Someone is always coming up with something new. You need to be able to adapt as quickly as possible.”
He said virtual infrastructure managers can help with that because they’re designed to virtually patch and secure systems and improve operational efficiency.
More than anything, stability and consistency are critical for the energy market. The world needs and depends on that security. Any technology applied has to follow through on that basic principle. “You need to be able to create something that you can repeat over and over again, regardless of the location,” Watts said.
Even with that caveat, Watts is excited for the future of the energy industry, which, while different from what came before, will still provide the constant hum that powers the world. It just so happens it’ll be smarter, faster and more energy-efficient than what came before.
“In the next two to five years, the transformation of the energy industry is going to continue and accelerate and will end up creating a grid that will be set for the next 100 years.”
Chris Vavra, web content manager, CFE Media and Technology, firstname.lastname@example.org. | <urn:uuid:9a57af62-a446-451c-ba35-0beff65881ca> | CC-MAIN-2022-40 | https://www.industrialcybersecuritypulse.com/facilities/building-a-secure-energy-and-power-grid-for-the-future/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00101.warc.gz | en | 0.966431 | 1,092 | 2.625 | 3 |
SAN FRANCISCO. Playing video games is typically considered to be a form of recreation. As it turns out, playing games might just also be helping humans to become more resilient in their everyday lives and with IT.
Speaking in a keynote session at the RSA Conference, Game Developer Jane McGonigal told the capacity crowd to embrace gaming. Her reasons are many and they begin with the current state of the American workforce.
“71 percent of U.S. workers are not engaged,” McGonigal said. “What this means is they show up and they just don’t care about the work they are doing, or they don’t feel like they are being challenged.”
According to data from pollster Gallup cited by McGonigal, the disengagement of workers is costing U.S. companies as much as $300 billion a year.
In contrast, American gamers are currently spending a remarkable 7 billion hours a week playing games. She described gaming as being the ‘engagement economy’ where people are fully engage with the task at hand.
“We can take advantage of this pent up desire to engage and use it for real world good,” McGonigal said.
She noted that it took 100 million hours to build Wikipedia's content, which is the same as 3 weeks' worth of Angry Birds or 7 days of Call of Duty game play by gamers.
“Just imagine what we could do if we put that effort together,” McGonigal said. “Wikipedia might seem harder than playing Angry Birds, but it’s not.”
There are two things needed to make the engagement economy work, McGonigal said: mass participation, and users with skills and abilities.
What are gamers good at?
McGonigal noted that gamers are good at spatial awareness, multi-tasking, and building community and co-operation. Gamers are also good at building up 10 positive emotions that are key to success in life and in business.
The top 10 emotions gamers experience when playing games are: joy, relief, love, surprise, pride, curiosity, excitement, awe and wonder, contentment and creativity.
“Bringing these positive emotions into real problem solving is the secret to gamification,” McGonigal said. “It’s not just about generic motivation, it’s about leveraging these positive emotions.”
She noted that science proves that when humans experience a range of positive emotions over a period time they become more ambitious, able to achieve goals better.
McGonigal pointed out that on average, gamers fail in their games 4 out of 5 times, by not finishing a level or getting the right score.
“With nothing else in our lives do we accept that level of failure,” McGonigal said. “We don’t allow ourselves to learn from failures and get better.”
She added that, “games make us resilient and things like anxiety and depression don’t get in our way of going after our goals.”
Game Developer Jane McGonigal | <urn:uuid:39505464-a5b5-4f1e-8862-dc2cc1acef6f> | CC-MAIN-2022-40 | https://www.datamation.com/security/rsa-games-make-us-smarter/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00302.warc.gz | en | 0.962666 | 669 | 2.796875 | 3 |
A teenage student who has a medical condition which prevents him from physically attending school, has been able to join his classmates in real time, inside and outside the classroom, thanks to an innovative 5G robot connectivity trial in Denmark, by Ericsson.
13-year-old Rasmus Dalsten from Herlev in Denmark now experiences a full day in the presence of his peers via a tablet with a live video connection from a 5G smartphone mounted on a fully mobile robot in the school premises. The extremely low latency, fast 5G speed and high data-handling capabilities of 5G mean that Rasmus, who is unable to physically attend school because of a lung condition, not only doesn’t miss out on classroom or schoolyard activity but has the freedom to move and look where he wants. Rasmus had a previous robot, but that was controlled by teachers and classmates at school - if he wanted to change the view of what he was seeing, someone at school had to physically turn the robot in that direction.
Real-time eyes and ears
With 5G, Rasmus has full control of the robot, called Fable, which is about the size of a small teddy bear. Wherever he wants to go, and whoever he wants to make contact with at school, he controls Fable to become his real-time eyes and ears. He can react to classroom lessons as they happen by steering Fable himself. If someone behind Fable makes a comment as part of a discussion, Rasmus can manoeuvre Fable to see who is speaking, and join in.
“I’ve got a new robot that I can turn around myself and it’s no trouble for the adults or myself. It is very nice,” says Rasmus. Rasmus’s mother, Charlotte Dalsten, is impressed that Fable, which Rasmus has trialled for six months, allows him to make instant eye contact with his classmates. “Rasmus logs onto the PC and the iPad, which is connected to the robot,” explains Charlotte. “This way he gets to be with his school class all day, during lunch breaks and in class. It has been important for me to see that Rasmus is happy when he logs on, that the others interact with him and that he is able to look around and make eye contact with them,” she continues.
Impressively, Rasmus’s teacher, Cille, says the 5G-powered robot enables Rasmus to take part in lessons on a par with other classmates who are physically in the room. “It is not me who decides if he looks at the blackboard or if he looks at those in the classroom who are speaking,” she says. “He decides where to look and where to go. That’s what he would do if he was here physically. So that has clearly been the biggest benefit.”
The 5G trial emerged from collaboration between Danish communications service provider, TDC Net, Danish robotics company Shape Robotics and Ericsson at the TDC/Ericsson 5G Innovation Hub in Denmark. The companies say the trial has proved the potential of 5G-enabled mobile robotic connectivity use across a host of use cases spanning domestic, workplace and industrial applications.
“Thanks to the collaboration with TDC NET and Ericsson, we have been able to show the potential in combining 5G technology with robots in an educational setting. Although Fable also works with 4G, in practice, it is a different robot when using 5G. With 5G, students experience lightning-fast sound and image, so they can participate on the same terms as other students,” said Moises Pacheco, CTO and Co-Founder, Shape Robotics.
Toke Binzer, Vice President, Technology, Strategy and Economics, TDC NET, agreed, saying: “It is roughly a year since we initiated the launch of the first nationwide 5G network in Denmark. Since then, we have seen increased interest in using the technology to deliver value within many different sectors. Robots connected to the 5G-network can, unlike other robots, both send and receive large amounts of data without delays, while at the same time being able to be controlled remotely. And if we take a step back from the educational sector it becomes clear that the learnings from this project can be transferred to other areas such healthcare, social care and working remotely across industries.” | <urn:uuid:f421a4cf-3c1e-4777-b8d9-4b9938bc8e01> | CC-MAIN-2022-40 | https://www.5gradar.com/news/ericsson-enabled-5g-robot-bridges-school-distance-learning-barriers-in-denmark | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00302.warc.gz | en | 0.976273 | 926 | 3.0625 | 3 |
Artificial Intelligence: Out of the futurists’ lab, into the real world of networks and cybersecurity
Artificial Intelligence to the rescue! AI is widely seen by scientists, vendors and even enterprise IT professionals as the next step in cyber defense. It's a perfect match. Cyberattacks are coming faster than humans can respond – and are morphing into new dangers that traditional anti-malware software can't always detect. That's where AI's self-learning capabilities excel, and are able to respond at the speed of light (or at least, at the rate of high-speed networks and modern microprocessors) in an adaptive, effective fashion.
We recently spoke to three cybersecurity experts to get their take on the state of AI for next-generation network defense – and their views about the future: Kathryn Hume, President, Fast Forward Labs; Stuart McClure, Author, Inventor, and CEO, Cylance; and Paul Jackson, Principal Analyst, Digital Media, Ovum.
Kathryn Hume explained that artificial intelligence algorithms always start with particular use cases and particular data sets “from which we elicit general algorithms that then may or may not be able to be applied to different use cases but both the opportunity and the complexity of this space lies within that transition from particular to general.”
For example, she cites the well-known IBM Watson computer that won on the game show Jeopardy by focusing on a specific data set; “we've seen Google DeepMind build AlphaGo which is a tool using a technique called reinforcement learning, a set of artificial intelligence algorithms that put in position a system of rewards to train systems to excel at a particular task.” In that case, AlphaGo developed and used a set of algorithms to beat Lee Sedol, the leading Go champion, in March 2016.
From Fun and Games to Data and Networks
Beating humans at trivia and at board games is one thing; it's great for building awareness of AI and of exciting the popular press, but it doesn't apply directly to enterprise computing. Neither do other applications of AI that we encounter as consumers, such as Facebook image recognition or textual analysis of Twitter posts to learn about users' political preferences. How about protecting computer networks from attackers and malware? It's all the same problem, said Ms. Hume: Studying huge amounts of training data to learn patterns – and then rapidly searching for those same patterns in real-world situations.
Cylance's Stuart McClure picked up the narrative, explaining that for software that monitors the data stream – such as network traffic or email attachments – the goal is to quickly determine if the traffic or file is safe or malicious. That requires a lot of deep learning to see patterns – and the ability to evaluate new data quickly, to see if it meets the patterns of safe or malicious.
Mr. McClure used the analogy of watching a house to determine if a person walking nearby is a burglar. “Some cybersecurity platforms cannot determine if something is bad unless they've seen it before and know what it will do. It's sort of saying, well I can't tell if this person walking up to my house is going to burglarize it until they actually break in and steal something. Right? That's not very effective.
By contrast, there's a better, more effective way, said Mr. McClure, which is to learn patterns – and not trust anything suspicious. “If you were to watch the video cameras from every home around the world, for every burglar that ever walked up to a house and burglarized it you'd create patterns in your mind. You would create connections between how they were dressed, how they approached the house, how they interfaced with the locks. You would figure it out pretty quickly if they were good or bad. So when a new person came up to your house you'd think, uh this person matches the pattern for a burglar. I'm not going to trust them. I'm going to watch them the whole time.
The Cybersecurity AI Breakthrough
Mr. McClure applied that reasoning to cybersecurity where, in the old, pre-AI model, an anti-malware company needed thousands of analysts to write rules and virus signatures, by studying malware that evaded previous rules and signatures. “That's not scalable,” he said, and can't reach the 99% success threshold needed for effective protection. “We can't possibly scale like that with thousands of analysts writing rules and signatures. The threats come out way too fast.
That's when Mr. McClure, through Cylance, had the breakthrough: Instead of studying the latest malware to write new rules and signatures – and therefore, detect it only after it successfully attacked someone – why not use artificial intelligence?
“That's what we've been able to do,” said Mr. McClure. “We talk about two parts of AI quite a bit - supervised and unsupervised learning. There are two parts to what we do. The first part is we automatically look for features that are going to be potentially indicative of good or bad.” That's not just a few features, by the way - not even just a hundred features. “Now if I told you we have over five million features that are indicatively defined as malicious or safe you probably wouldn't believe me. Right? Five million? That's insane.
The first part is to use software to look for features that might indicate malicious intent in a file. The second part? A supervised human judgment of whether sample files are actually malicious or not. “We collect as many files as humanly possible. Then we extract as many features as we possibly can that we've already mapped or learned are potentially useful. Then we transform those. We then train the AI using neural networks about what is going to cluster to good and what is going to cluster to bad. Then we classify it. If it's bad we block it. If it's good we allow it. It's that simple.
Ovum's Paul Jackson observed that while AI has been around for decades, both in the lab and in commercial products, there have been many rapid advancements recently. “To a lot of us, practical AI seems to have really come to the forefront over the last 12 or 15 months, but why now?”
Fast Forward's Ms. Hume agreed with that point: many techniques such as neural networks and deep learning have been around since the 1990s, and in some cases AI goes back to the 1940s. But there were some problems, she said, and some tools that didn't yet exist. “There wasn't a lot of data to work with. We didn't have the big data area - I use the term big data to refer to storing and processing data, not doing stuff with it. So 10 years ago it became really cheap to store a lot of data, keep it up in the cloud and then do stuff with it.
Indeed, when it came to practical pattern recognition, she continued, “Around 2011 was when Google had a first coup using artificial neural networks to automatically identify cats in videos across the Internet. Computers needed to figure out that there was something about cats that made them similar, and could cluster together all these patterns. Then the supervised part was humans coming in and saying, oh yeah that thing you see that looks kind of like a blob of something, this amoeba thing, that's called a cat. And that one isn't a cat.
The Rise of the GPU and Big Data
Another factor, Ms. Hume said: the rise of graphical processing unit (GPU) chips that excelled at pattern recognition processing. “Some kid playing video games realized that the structure of GPUs to process images were pretty good at matrix multiplication, which just so happens to be the type of math that's powering these deep learning algorithms. So they said, the gaming industry is huge but gosh this other thing might be a lot bigger if we can actually apply these things to enterprise artificial intelligence needs, and this lets us train those neural networks faster.
“Another factor in AI's rapid rise is the data,” added Ms. Hume. “It takes a neural network probably 50,000 examples in order to gain that ability to recognize things. So you can imagine if we're going to go through all of the types of objects we might want to identify to build a recognition system we need a lot of training examples. So that data has also propelled the transition.
Cylance's Mr. McClure cited a fourth breakthrough technology: Cloud computing. “We never could have started this company and done what we've done without the cloud, without Amazon Web Services in particular. Two or three years ago, it would literally take about six months to build a malware detection model. Today our models take about a day and a half to build. But we have to spin up over 10,000 CPUs to do that in a day and a half. Without that flexible compute fabric there's no way we could be doing what we're doing. It's just that simple.
The Perfect Place to Apply Artificial Intelligence
Ovum's Mr. Jackson observed that, “We are increasingly facing many more sophisticated types of attack, and end point protection is a key goal of cybersecurity systems. This type of security seems to be one of those areas where AI is particularly well suited, because trained tools can perform far better than people.”
Cylance's Mr. McClure agreed that cybersecurity is the perfect place to apply AI and machine learning. “Quite honestly I don't know why it hasn't been done before! That seems pretty easy, straightforward. That would be a natural assumption to apply.
He continued by citing three core ways that attackers manage to penetrate systems, all of which can be blocked or mitigated through the use of AI:
“First: Denial of Service, which starves the resources of the target. So you starve memory, you starve network bandwidth, you starve a CPU or a disc or something and the system falls down. It breaks.
“Second: Execution based attacks, which is what Cylance protects against. An attacker gets something sent to you or gets you to click on something that executes something in memory to do malicious things on your computer.
“Third: Authentication based attacks. Being able to steal your password and pretend to be you on your computer when you're not there, or bypassing authentication or brute forcing your password or any of those things.
“AI can be applied to all three of those areas in a very meaningful way; you just need the data.”
How about the Rise of the Machines?
Mr. Jackson looked into the future, and was playfully concerned about what he might see. “We have talked about unsupervised and supervised learning. There is a whole realm of fear around wholly unsupervised AI, a sort of ghost in the machine, like the Terminator's Skynet. The growth of AI is discussed a lot in the press - are those worries unfounded? Realistic? Is dangerous AI something we have to keep an eye on?
Fast Forward's Ms. Hume was not completely reassuring. “The thing to be concerned about in the near term is supervised learning, not unsupervised learning. That's not because computers are dangerous but because people are dangerous. Why? There are all sorts of things that we do as people in society. We leave traces of that in our data.
And, she continued, supervised learning requires human input, and that input may not always be benign, or particularly thoughtful. “We train systems based upon the decisions that humans have made in the past. So let's take an example of using algorithms to try to automatically hire somebody into your company or recruit students to your school or even give a loan for a credit application. If we try to automate that, the systems aren't that smart. They go out and they look in data sets. If in the past a specific university tended to recruit a certain type of candidate, the system will make future decisions based on that data. If the university tended to recruit relatively wealthy white males, the AI will build a model based on those past decisions.”
That can lead to perpetuating those decisions – without any specific intent to do so, Ms. Hume continued. “We go into the system and we say here is a model for the type of candidate we're looking for. These are the decisions that humans have made in the past. The algorithm will then learn to find candidates that look like those, basing its decisions upon what the humans did. The result? The AI algorithm comes back and says, ‘here is a pool of 95 per cent rich white males that we suggest you recruit to your school, precisely because if we think about a normal distribution this is where the bulk of the features tend to lie.' “
Ms. Hume concluded, “If we relegate our decisions to the algorithms they tend to propagate and amplify the stupid decisions we as humans have made. It's not about systems being stupid or intelligent, it's about our mixing together the corporate values with social values. We as data scientists may take an ethical position with regards to potentially having to hack the AI-learned algorithm so that we can create the future that we want, instead of one that perpetuates our biases from the past.
Look Out, Ransomware, Here Comes AI
Cylance's Mr. McClure closed the conversation with an example of using AI algorithms to classify and defend against one of this year's biggest challenges: Ransomware. It's a numbers game, he said: the more effective AI is at blocking ransomware, the less attractive those sorts of attacks will be.
“We are seeing effective defenses against ransomware today,” he said. “With the AI technology that we have installed on over three million end points, we already have the ability to have all of that technology truly detect malware and get to the ninety-ninth percentile of protection, and that includes about 350,000 to 400,000 new attack variants that come out every day.
As advanced AI-based malware detection tools deepen their market penetration, Mr. McClure added, cybercriminals will see that “all their new fancy attacks are no longer bypassing the security systems they are targeting. They are now getting caught. They're getting prevented. So there will be a natural desperation motivating the attacker to proliferate even more attacks.
Unfortunately for the attacker, that won't work, said Mr. McClure. “When attackers realize that doesn't work, they will get more sophisticated and spend a lot of money on trying to bypass the AI. I don't mind them bypassing us - I would actually love it because every single attempt to bypass helps us to make the AI model smarter.”
Open-source intelligence (OSINT) is the insight gained from processing and analyzing public data sources such as broadcast TV and radio, social media, and websites. These sources provide data in text, video, image, and audio formats.
Why open-source intelligence (OSINT)?
Demand for fast, 360° intelligence from disparate data sources is increasing, both for addressing public safety and for gauging public opinion. With the accelerated analysis OSINT provides, government agencies can take more effective action, while commercial organizations can improve competitiveness by monitoring and analyzing data related to market trends, their brands, and those of their competitors.
How does open-source intelligence (OSINT) work?
OSINT works by ingesting video, image, audio, and text data from public-domain sources and analyzing the ingested data to yield insights across all of them. The analysis is based on machine learning and deep neural network algorithms, which enable the system to learn from the data and refine its recognition of patterns, trends, and relationships. For example, in the case of a broadcast TV interview, OSINT can identify both the interviewer and interviewee (video analytics) and the key topics of discussion (speech and text analytics), track how viewers react on social media (text analytics), and automatically provide, say, viewers' opinion clusters, trends, and sentiments.
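As a toy illustration of the text-analytics piece just described (a sketch only: real OSINT platforms use trained models rather than keyword lists, and the sample posts and keywords here are invented):

```shell
#!/bin/sh
# Toy sentiment tally over a handful of invented "public posts".
# Real OSINT systems use trained ML models; this only shows the
# ingest-then-analyze flow in miniature.
posts="great interview, loved the topic
terrible take, awful segment
loved it, would watch again"

pos=$(printf '%s\n' "$posts" | grep -o -i -E 'great|loved|good' | wc -l | tr -d ' ')
neg=$(printf '%s\n' "$posts" | grep -o -i -E 'terrible|awful|bad' | wc -l | tr -d ' ')
echo "positive hits: $pos"
echo "negative hits: $neg"
```

Each "post" is scanned for keyword occurrences; a real pipeline would replace the `grep` stage with a trained classifier and feed many more sources.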
Artificial intelligence (AI) has gone from being a term of science fiction to an exciting new business tool. The possibilities the technology has to offer appear to be limitless even as we witness it solve a myriad of problems from streamlining processes to improving customer service. But, apart from its obvious boons, there is a persistent vigilance that many industry visionaries, such as Elon Musk, have warned with AI. Of course, as it is with every other technology, even AI poses risks that every business leader should know:
Data is the gateway to improving business operations and customer experience, enabling new innovations, and generally growing as an organization. However, all this requires the collection and collation of vast quantities of data, which puts the privacy and security of thousands, if not millions, of individuals at risk. As the power of AI increases, so does the vulnerability of data to breaches and privacy violations.
As companies keep accumulating data to fuel the AI systems it becomes a formidable task to keep this data secure. As technology gets smarter so do hackers who look for vulnerabilities to exploit and use this data for illegal uses. If security precautions are insufficient the data that can benefit AI can also similarly defraud people of their digital identities and assets.
AI can be and has been used to sway public sentiment by manipulating or misusing data. This can lead to situations that compromise national interests. It can be used to spread propaganda and false news to individuals identified by algorithms and personal data. This information, whether factual or fictional, can be delivered in various formats over a period of time to manipulate certain outcomes - like appointments or elections.
As AI systems store data on individuals it is being used to make decisions that may or may not be always beneficial to consumers. Medical data available to insurance companies might result in rejecting insurance offers. Similarly, the concept of a social credit score is being used for employment purposes which can adversely affect employment opportunities.
As AI is increasingly used in banking, decisions are being made solely on data analyzed by machines that lack human empathy and transparency. This leads to situations where banks have to explain to regulators decisions that were based entirely on AI. Also, algorithms used in stock trading are sometimes unable to adapt correctly to new circumstances, leading to sudden financial losses.
Lastly, there is the hypothetical future scenario in which AI reaches levels of superintelligence that take it beyond humanity's control. This could pose numerous negative consequences, some of which we've seen in science fiction. The core existential threat remains a theoretical reality, as discussed by leading innovators like Elon Musk.
With the rapid growth in data traffic in the past decade and the rapid expansion of computer networks, the necessity for network security and stability has grown massively. Due to these concerns, leading service providers and big enterprises have focused mostly on the security, stability, and quality of service of their offerings. Even so, they still experience problems on a regular basis. Most of these problems fall under human error and unforeseen network outages, including device failures and misconfigurations. Such outages have had a big financial impact on providers, on the IT world, and on the big companies depending on them. In past years we witnessed many big outages, some caused by a single misconfigured firewall policy, others by device failures or software bugs. A recent outage at a top UK service provider was caused by a memory leak in a single core Juniper switch, resulting in the loss of millions of pounds in a single night. The biggest financial impact was felt by bookmaking companies, which couldn't stream live sports.
As a result, service providers decided that it is much more important to invest in deploying a stable and dependable network than to offer many services and end up with deployments that are not the right fit, especially as their networks grow massively and become more difficult to maintain.
A Firewall, what is it?
In today’s Internet era, every organization’s operations run through an IT system. Whether that is a server, an SQL database, or network infrastructure, these systems access the open Internet one way or another. With networks constantly growing and every organization depending on these systems, they must be made secure and trustworthy.
However, managing security and maintaining stability have proven the most difficult tasks of all. With security and stability in mind, networking vendors developed firewalls. In the simplest of definitions, a firewall provides essential security to your internal systems. However, firewalls are much more than simple devices. Since security risks have greatly increased in recent years, firewalls have matured as well. Today, they can filter every kind of traffic you can imagine, through different policies that let network security engineers tune them to their requirements. Firewalls are designed to inspect the millions of packets that traverse your network in an extremely short amount of time. Since we cannot predict what the future will bring, network security engineers must build in enough flexibility to adapt to whatever unknown priorities emerge. These challenges have been the focus of one of the biggest networking vendors, Juniper Networks, and the reason it produces mission-critical security appliances capable of adapting to tomorrow’s priorities and dangers.
Juniper Networks: from JProtect Newcomers to SRX Experts
Juniper’s involvement in network security began in May 2003, when it introduced its newest accomplishment to the IT world: the JProtect toolkit. This toolkit was developed to provide a single solution capable of protecting your network through the implementation of firewalls, NAT, flow monitoring, and traffic filtering. Over the next couple of years, Juniper purchased several companies, and through their joint efforts produced solid end-user devices, including access points as well as enterprise access routers.
Today, they are most proud of their SRX Series network security appliances and the Advanced Threat Prevention appliances. Depending on the type of solution you are deploying and the requirements you have, Juniper Networks offers many variations of these appliances, capable of protecting any network either small or enterprise-grade.
Juniper’s Dearest, the SRX Series
At the beginning of this article we focused on the security reasons and why firewalls are so important in our networks. In this section, we will focus on the most loved product of Juniper Networks, their SRX Series of networking appliances. We will separate this section based on the deployment needs and the reason behind their design.
Branch SRX Series appliances
The branch set of appliances have been designed for deployment in small offices and remote sites that require an average set of firewall features. This set of appliances has been developed so customers can have more features with less management expenses because the cost of maintaining several different types of equipment is kept to a minimum.
These devices use the Junos operating system, which offers simplicity in configuration but a wealth of features. From a firewall point of view, they offer perimeter security, content security, application visibility, tracking and policy enforcement, as well as policy-based VPNs for more complex deployments. Through the trust and un-trust zone configuration, you can specify which traffic the device trusts and how to handle the traffic in each case. Some of the main features that this type of device offers are:
- Next-generation firewall protection: through full packet inspection, you can configure a wide variety of security policies based on the application, the source and destination, or the content that is travelling across your network. This means that these types of SRX devices can inspect traffic up to the highest layer of the OSI model.
- Application Security and IPS: scan and identify the application and its behavior, thus increasing the protection of the network.
- Unified Threat Management (UTM): a comprehensive set of anti-virus, web and content filtering, and anti-spam capabilities that protect your network from malware, phishing attacks, and various intrusions
- Secure routing: this gives the option to choose between router mode and firewall mode operation with a single command. The branch SRX devices by default will check traffic and confirm it’s safe before forwarding it.
Different variations of this group of devices are shown in the image below:
Original Source: www.juniper.net
In addition, Juniper included some other features in the branch SRX series which come in handy for branch offices spread out in different parts of the world. With these features, customers can access their remote LAN network securely and at a low cost. It’s made possible with the Dynamic VPN Client which requires no additional software to be installed on each side.
The branch SRX series can work as firewalls and routers. The ability to modify how the SRX processes traffic is by far the best feature of these devices. You can choose whether the router handles traffic based entirely on packets, or manipulates traffic by session. This flexibility allows you to configure complex solutions for the remote branch office that were unthinkable in past years.
With all of these features included, the branch SRX series appliances are proven all-round players in the highly dynamic and unpredictable networking “game”.
Data Center SRX Series appliances
The data center SRX devices are highly modular devices that provide the high speed and scalability suitable for some of the biggest Service Providers in the world. The base chassis on its own does not provide the necessary power; that is achieved with the help of many different modules, which add processing capability as needed. This modular operation ultimately cuts the cost of the initial deployment. A neat option Juniper implemented is the almost identical chassis and internal components across models, which make it simple to migrate from one device to another.
This group comprises three main chassis: the SRX1000, SRX3000, and SRX5000. The SRX1000 is designed for the smallest deployments. The next in line, the SRX3000 (figure 1), is a more configurable device designed for medium-sized deployments. The SRX5000 (figure 2) is built for large-scale deployments and can scale to an extreme level. A unique feature of these devices is the option to configure them and tailor their features, creating a device specific to your needs. Thanks to the modular approach, you can add more processing power, or add more security options at the expense of lower throughput. Some of the key features that made this group of devices award winning are:
- Comprehensive security features: these features provide a multi-gigabit firewall operation capable of scanning large amounts of traffic through the smallest details.
- Express Path Optimization: this feature allows the SRX to optimize the bandwidth by successfully identifying and choosing the optimal traffic flow.
- Scalability: this group of SRX devices has the option to scale and segment the network based on the network requirements. Together with the Routing Engine, which separates the data and control planes to allow deployment of consolidated routing and security devices, the SRX is optimal for securing your large network.
Figure 1: Original Source: https://www.juniper.net/us/en/products-services/security/srx-series/srx3600/
Figure 2: Original Source: http://www.networkscreen.com/SRX5600.asp
A specific feature that separates these devices from the branch series is the ability to operate in dedicated mode. This is made possible with the incorporated high performance and flexible processors which can be modified based on the requirements. This allows the router to focus on the intrusion detection for maximum security.
The main “culprit” behind this capability is the SPC (Services Processing Card). The SPC is in fact the processor that handles all traffic processing, including firewalling, NAT, and VPN traffic. Each SPC can contain one or more SPUs (Service Processing Units), and each of those provides separate, and often extreme, processing power. In fact, each of these SPUs can run up to 32 tasks in parallel. Since engineers love numbers, let’s put this into perspective. Each SPU can process:
- 10 Gbps of Firewall throughput
- 2.5 Gbps of VPN throughput
- 1,100,000 packets every second
- 2.5 Gbps of IPS throughput
This is a massive amount of power in the hands of the data center. Moreover, this power can be multiplied by adding extra processing card modules, which helps when the router is loaded up with new services.
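The per-SPU figures above make quick sizing estimates easy. A back-of-the-envelope sketch (the SPU count here is hypothetical; real chassis capacities vary by model and installed cards):

```shell
#!/bin/sh
# Rough aggregate capacity for a hypothetical chassis with 4 SPUs,
# using the per-SPU figures quoted above (10 Gbps firewall,
# 2.5 Gbps VPN, 1,100,000 packets per second).
spus=4
echo "firewall throughput: $((spus * 10)) Gbps"
echo "VPN throughput: $(awk -v n="$spus" 'BEGIN { print n * 2.5 }') Gbps"
echo "packets per second: $((spus * 1100000))"
```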
Another important piece of hardware in the data center SRX series is the NPU (Network Processing Unit). This unit is responsible for balancing packets as they enter the device by forwarding each packet to the correct SPU handling that session. The NPU can process around 6.5 million packets per second inbound and about 16 million packets outbound. This unit has a hidden security feature as well: it is responsible for much of the packet inspection work that detects packets and isolates intrusions.
Since Juniper’s initial idea was to provide massive scaling in a single device, the SPU and NPU are the main points to scale. By scaling the SPUs, you allow more traffic to be processed, and by scaling the NPUs, you allow more traffic to enter the router. This offers the possibility of deploying a massive network with security and stability, run by the minimum number of devices.
All of Juniper’s SRX devices are next-generation offerings that enable Service Provider level of processing power and reliability. With the many possible options and modules and most of the features being shared across the platforms, you will have a unique experience when deploying and maintaining these devices.
In this post, Juniper proved that they are more than capable of fighting for the top spot alongside the already globally recognized networking vendors. With this approach, we simply can’t wait to see what the future holds.
Thank you to Bojan Janevski for his contribution to our blog.
The clustering factor is a measure of the ordered-ness of an index in comparison to the table that it is based on. It is used to estimate the cost of a table lookup following an index access (multiplying the clustering factor by the index’s selectivity gives you the cost of the operation).
The clustering factor records the number of blocks that will be read when scanning the index. If the index being used has a large clustering factor, then more table data blocks have to be visited to get the rows in each index block (because adjacent rows are in different blocks). If the clustering factor is close to the number of blocks in the table, then the index is well ordered, but if the clustering factor is close to the number of rows in the table, then the index is not well ordered. The clustering factor is computed by the following (explained briefly):
- The index is scanned in order.
- The block portion of the ROWID pointed at by the current indexed valued is compared to the previous indexed value (comparing adjacent rows in the index).
- If the ROWIDs point to different TABLE blocks, the clustering factor is incremented (this is done for the entire index).
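The walk described in the three steps above can be simulated with a toy sequence of table block numbers (the block numbers are invented for illustration; in a real database the value is gathered by statistics collection, not computed by hand):

```shell
#!/bin/sh
# Simulate the clustering-factor computation described above: scan the
# index entries in order and increment whenever the table block pointed
# at by the current entry differs from the previous one.
blocks="1 1 1 2 2 3 1 3"   # table block of each successive index entry
cf=0
prev=""
for b in $blocks; do
  if [ -n "$prev" ] && [ "$b" != "$prev" ]; then
    cf=$((cf + 1))
  fi
  prev=$b
done
echo "clustering factor: $cf"
```

In this sequence the block changes four times, so a well-ordered table (e.g. `1 1 1 2 2 2 3 3`, only two changes) would score lower than this interleaved one.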
The CLUSTERING_FACTOR column in the USER_INDEXES view gives an indication as to how organized the data is compared to the indexed columns. If the value of the CLUSTERING_FACTOR column value is close to the number of leaf blocks in the index, the data is well ordered in the table. If the value is not close to the number of leaf blocks in the index, then the data in the table is not well ordered. The leaf blocks of an index store the indexed values as well as the ROWIDs to which they point.
For example, say the CUSTOMER_ID for the CUSTOMERS table is generated from a sequence generator, and the CUSTOMER_ID is the primary key on the table. The index on CUSTOMER_ID would have a clustering factor very close to the number of leaf blocks (well ordered). As the customers are added to the database, they are stored sequentially in the table in the same way the sequence numbers are issued from the sequence generator (well ordered). An index on the CUSTOMER_NAME column would have a very high clustering factor, however, because the arrangement of the customer names is random throughout the table.
The clustering factor can impact SQL statements that perform range scans. With a low clustering factor (relative to the number of leaf blocks), the number of blocks needed to satisfy the query is reduced. This increases the possibility that the data blocks are already in memory. A high clustering factor relative to the number of leaf blocks may increase the number of data blocks required to satisfy a range query based on the indexed column.
The clustering of data within the table can be used to improve the performance of statements that perform range scan–type operations. By determining how the column is being used in the statements, indexing these column(s) may provide a great benefit.
How to run scripts on Mac?
A script is a file containing a set of commands or instructions, written in a scripting language, that automates the execution of routine or time-consuming tasks. Scripts streamline various processes for Mac management: with them, the admin can automate operations that would otherwise have to be executed one by one in Apple’s Terminal.
Hexnode UEM allows IT admins to execute custom scripts on Mac devices remotely. The admin can run any scripts to perform system-level configurations on Mac devices without any user interaction. You can shut down/restart devices, install/uninstall apps, push updates, set up app configurations, and so forth. You can enhance Mac management to the next level by configuring extra settings that may not be natively available in MDM’s features stack.
Create and Run a Mac Script
Here, we will show you how to write a sample bash script that allows you to restart your device.
- Open Text Edit from Applications.
- Click New Document and write your script in the text area as below:

  #!/bin/sh
  /sbin/shutdown -r now
- Now, click Format > Make Plain Text to convert the file into a plain text.
- Save the file by clicking on File > Save. Make sure to uncheck the option If no extension is provided, use ‘.txt’ while saving the file. Also, note the file name and location of the file.
- Next, navigate to the folder where you have saved the file. Right-click on the file and click Get Info. Unlock the bottom padlock present on the opened info window.
- Open Terminal and type the following:

  cd (location of the script)

  For example, if you have saved the file in Documents, type as follows:

  cd Documents

- Next, convert the file into an executable file. Type in:

  chmod 700 (filename)

  If the file name is SampleScript, type as follows:

  chmod 700 SampleScript

- Next, type in ./(filename) to restart your device. In this case, type as follows:

  ./SampleScript
Your device will restart once the script is executed.
Run Scripts on Mac using Hexnode UEM
To execute a custom script on Mac,
- From your Hexnode portal, navigate to Manage > Devices.
- Click on the Mac device you need to deploy scripts.
- Click Actions > Execute Custom Script.
- Specify the below parameters to execute the script on that device:
- Click the Execute button to confirm your action.
- Now, navigate to the Action History sub-tab to view the execution status of the script. You can see the output details by clicking on the Show Output button corresponding to the execution status of that script.
The script, if coded correctly, will be executed successfully to automate the specified operations on the device.
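As a hypothetical example of the kind of payload you might push through Execute Custom Script, here is a small inventory script (the commands are illustrative; `sw_vers` exists only on macOS, so a fallback is included for other systems):

```shell
#!/bin/sh
# Hypothetical inventory payload for the Execute Custom Script action:
# log basic device info, which would show up under Show Output.
device_info() {
  echo "hostname: $(hostname)"
  echo "os: $(sw_vers -productVersion 2>/dev/null || uname -r)"
  echo "uptime: $(uptime)"
}
device_info
```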
NASA has launched a competition in search of a new lunar soil collection tool for a certain digging robot under development.
The Regolith Advanced Surface Systems Operations Robot or RASSOR requires a new bucket drum for storage of lunar soil, also known as regolith, the space agency said March 17.
NASA's Space Technology Mission Directorate calls for the public to submit designs for a bucket drum shaped to accommodate larger amounts of regolith. The contest offers a total prize pool of $7K.

The agency is considering the use of RASSOR units for lunar excavation efforts under the Artemis program, which aims to revive manned exploration.
Interested parties may submit design proposals through April 20. NASA will select five winners. GrabCAD, a 3D model community website, serves as the contest's host.
RASSOR is under testing at Florida-based Kennedy Space Center.
Passwordless authentication and multi-factor authentication aren’t just IT buzzwords anymore; they are a part of everyday life. Today, the average person unlocks their phone with facial recognition, logs into work applications with an authenticator app, or views sensitive documents using a PIN they received in a text message.
And the technology won’t stop there 一 both MFA and passwordless solutions are growing at a staggering rate. The global MFA market size is projected to grow to $23.5 billion by 2026, and the global passwordless authentication market size is estimated to reach $456.79 billion by 2030.
But for all the hype around passwordless authentication and MFA, there is still confusion around the purpose, efficacy, and challenges of each security protocol.
What Is Passwordless Authentication?
Passwordless authentication is exactly what it sounds like: confirming a user’s identity without the use of a password. It may sound too good to be true (you might even ask, “Is passwordless authentication safe?”), but the reason it works is that your identity doesn’t have to be verified via a knowledge factor like a password.
You can prove your identity by presenting a part of your body (a biometric factor) or an access code or link you received on a device or app that you own (a possessive factor) instead; methods that have been tested and implemented in a variety of ways.
As you can imagine, passwordless authentication is popular among employees because they no longer have to memorize long, complicated passwords. Instead, they log into software using something they can’t forget, like their fingerprint or phone.
Passwordless authentication also makes things easier on IT. They don’t need to store passwords, send password reset reminders, or monitor possible security incidents due to password breaches. And with no passwords to guess or steal, cybercriminals have a much harder time collecting the data they want.
What Is Multi-Factor Authentication?
Multi-factor authentication (MFA) is a digital identity verification system that requires users to pass several authentication checkpoints. MFA is similar to passwordless authentication in that it can leverage biometric or possessive factors, but the difference is that MFA still uses usernames and passwords.
To log into systems configured with MFA, you enter your username and password as you normally would. Then, you’re prompted to show or enter something else, like a one-time access code sent through an authenticator app, a magic link sent to your email, or fingerprint. Once you pass those mini-tests, you’re logged in.
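The one-time codes generated by authenticator apps are usually time-based (TOTP, per RFC 6238). As a rough illustration of how that second factor can be checked server-side, here is a minimal standard-library Python sketch; it is a generic example, not any particular vendor's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive an RFC 6238 time-based one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret_b32, submitted, window=1, step=30):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + drift * step), submitted)
        for drift in range(-window, window + 1)
    )
```

A real deployment would also rate-limit attempts and refuse to accept the same code twice within a time step.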
You can think of MFA as a door with a lock, retinal scan, and passcode on it. Like a password, the lock might be simpler to pick, but replicating a retinal scan or hacking the device receiving a one-time passcode is extremely difficult. Having multiple layers of protection severely limits the damage criminals can do.
The Differences Between MFA and Passwordless Authentication
While passwordless authentication has some similarities with multi-factor authentication, it also has some distinct differences in terms of authentication, security, ease of use, scalability, and cost.
MFA increases an organization’s confidence that someone is who they say they are by adding extra authentication factors on top of a password. For example, an MFA-based system might prompt a user to type in their password, then use voice recognition as a secondary authentication factor, and utilize a one-time password as a third authentication factor.
Passwordless authentication removes the need for a password entirely, replacing it with a possessive or biometric factor. In the example above, someone might authenticate only using voice recognition.
There’s no doubt that both MFA and passwordless authentication bring an added level of security to your organization, but they do have limitations. Since MFA systems use a username and password as the primary authentication method, they are susceptible to phishing and brute force attacks. Second or third authentication methods may block cybercriminals from getting much further, but they need to be airtight to prevent a full-blown attack.
Even passwordless authentication can fall prey to trojan horse, man-in-the-browser, or malware attacks if one-time passwords or magic links get intercepted. And, although rare, attackers have recreated people’s fingerprints and voices to circumvent biometric authentication.
Ease of use
Passwordless authentication is typically considered faster and more convenient than MFA. Users don’t have to commit passwords to memory and only have to use one method of authentication. MFA is more time-consuming and more time-sensitive (some codes expire in as little as 10 seconds), which can lead to employee frustration, particularly if they are logging into multiple applications per day.
At the same time, biometric and possessive authentication factors used with passwordless authentication aren’t always user-friendly. For instance, an employee who receives private keys via a USB drive has to carry the device with them at all times, and can’t log into any applications if the USB gets damaged or lost. The ability to read fingerprints and faces can also vary depending on the sophistication of your scanners.
Cost and scalability
Implementing passwordless authentication is a big undertaking and a big expense. Selecting the right software, picking authentication methods, installing new devices, creating a project plan, and dealing with change management are just a few of the many components of a passwordless authentication project.
MFA, on the other hand, can be as simple as asking employees to download an authenticator app or register their email to receive magic links.
Best of Both Worlds
Since passwordless authentication is arguably more secure but takes longer to implement, many companies use MFA first. Not only does this get users accustomed to various authentication methods, but it also gives the IT department time to craft a comprehensive project plan.
Once everyone feels comfortable and ready, the organization moves on to a fully passwordless environment. Some organizations take this a step further, combining both methods into passwordless MFA.
But using just any MFA solution may not be the best jumping-off point for passwordless authentication. JumpCloud’s environment-wide multi-factor authentication is easy for your end users to use, and even easier for you to set up. With the click of a button, you can enable MFA to restrict access to networks, applications, devices, and more.
You can also choose the best authentication methods for your company, whether it’s push notifications, universal second factor (U2F), or even TOTP MFA. The best part is that when JumpCloud MFA is enabled, it works across your entire organization, regardless of where employees are working.
To learn more about what makes JumpCloud’s MFA product the best foundation for a fully passwordless future, request a free demo today. | <urn:uuid:17354e4a-6560-4a66-8d24-5c8893c064dc> | CC-MAIN-2022-40 | https://jumpcloud.com/blog/passwordless-authentication-vs-multi-factor-authentication | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00502.warc.gz | en | 0.930076 | 1,443 | 2.65625 | 3 |
A new study by researchers from Monash University has alarmingly found that in the six months following critical illness from COVID-19, one in five patients had died, and almost 40% of survivors had a new disability.
The study findings were published in the peer-reviewed journal Critical Care: https://ccforum.biomedcentral.com/articles/10.1186/s13054-021-03794-0
In this nationally representative cohort of critically ill patients with COVID-19, 26.9% of patients had died by 6 months, and 38.9% of survivors reported new disability. In survivors, disability was widespread across all areas of functioning.
There was a significant decrease in health-related quality of life, with over one third of the cohort reporting new problems in mobility, usual activities and pain. In addition, one third of survivors had cognitive impairment and one fifth of the cohort reported anxiety, depression and/or PTSD.
More than one in ten survivors were unemployed due to poor health. Higher severity of illness and the clinical frailty score were independent predictors of death or new disability. The majority of our cohort (70%) had ongoing symptoms of COVID-19 at 6 months, most commonly shortness of breath, weakness or fatigue.
While the long-term effects of critical illness are well-recognized [23, 29, 30], the scope and scale of “Long COVID” may be greater than previously described in survivors of COVID-19. During the COVID-19 pandemic, the Australian government enacted several healthcare policies that may have influenced the characteristics and outcomes of this group of patients compared with patients from other countries.
Australia has a liberal testing policy; as of November 2020, it had conducted 9,670,186 COVID-19 tests, representing 377,527 tests per 1,000,000 population, with a positive rate of 0.3%. As a comparison, the USA as a whole had a rate of 533,967 tests per 1,000,000 population, with a positive rate of 6.9%.
The healthcare system in Australia has not been overwhelmed due to COVID-19, and the outcomes of our survivors represent a cohort provided with care from a critical care system operating within capacity. Despite this, the present data are similar to other data of COVID-19 patients in intensive care in terms of age, comorbid conditions and ARDS severity [15, 32], and suggest that COVID-19 survivorship was associated with substantial new disability and reduced health-related quality of life.
Increased disability, both in the number of patients and in the severity of functional limitations, is associated with increased caregiver burden, unemployment, psychological problems, mortality and healthcare costs [23, 29, 30]. Patients should be screened at hospital discharge for new functional impairments. Outpatient follow-up should be recommended early, within the first few weeks of discharge.
It should include medication optimization and screening for physical, psychological or cognitive problems, with referral for additional services such as physical therapy or psychology as required. In the present study, new disability was present in all areas of function, particularly in emotional functioning (such as anxiety, depression, PTSD) and walking.
We used the WHODAS, a validated outcome measure for disability, grounded in the framework of the International Classification of Functioning (ICF) which has previously been used to described the critically ill population with a defined minimum clinically important difference [23, 24, 33, 34]. The baseline disability of this relatively young cohort was very low prior to COVID-19, and there was a clinically significant increase in the level of disability at 6 months.
Recently, the COMEBAC Investigators have reported the 4-month outcomes of 478 hospitalized patients with COVID-19 in a single center in France. Of this cohort, 142 had been critically ill and approximately 50% had been mechanically ventilated, similar to the present study. New-onset dyspnea was one of the most common symptoms, and lung CT scans in survivors showed persistent abnormalities in 75% of those who had received invasive ventilation.
Similarly, in a recent single-center cohort study in China, nearly one third of the 122 critically ill patients with COVID-19 had a mean 6-min walking distance less than the lower limit of the normal range at 6 months after hospitalization. In addition, 56% had diffusion impairment on pulmonary function tests. The results of both these studies are aligned with the high prevalence of shortness of breath in survivors of our cohort.
Pulmonary rehabilitation in patients with ongoing shortness of breath may improve outcomes and reduce symptoms. Further, pulmonary rehabilitation may be delivered by telehealth [37,38,39], improving access to care during a pandemic.
The strengths of this study include its prospective, multicenter design with collection of detailed clinical and physiologic parameters. We included baseline measures of frailty, health-related quality of life, disability and comorbidities to distinguish new disability and new problems. The outcome measures include validated, reliable measures of function, most of which are in a core outcome set for survivors of acute respiratory failure.
We acknowledge limitations to our study. A proportion of eligible patients were not available for follow-up assessment, mainly due to loss to follow-up. This was higher than similar studies of disability at 6 months from our group, and we speculate that it may be due to stigma or psychological distress associated with a positive diagnosis of COVID-19 which should be investigated further in future studies.
We contacted primary practitioners and reviewed national online resources for death notices to ensure they were not deceased. The responders had similar baseline characteristics and interventions to the non-responders, and it is likely a good representation of the overall cohort. Baseline disability and health-related quality of life were measured retrospectively in survivors, which may introduce recall bias.
There was no control group, and the outcomes of survivors of COVID-19 critical illness from this study may be similar to disability reported after critical illness from other cohorts [23, 25]. We did not conduct in-person assessments or radiological tests as part of the follow-up which would improve the understanding of sequelae of COVID-19.
TERMINOLOGY AND STAGES OF RECOVERY

The recovery process from COVID-19 exists on a continuum; early in the course of acute COVID-19, management is focused on detecting and treating acute COVID-19-related complications, while after recovery from the acute phase, some patients require evaluation and management for persistent or new symptoms.
Although there are no widely accepted definitions of the stages of COVID-19 recovery, we generally agree with the following categories as proposed by the Centers for Disease Control and Prevention (CDC):
●Acute COVID-19 – Symptoms of COVID-19, up to four weeks following the onset of illness.
●Post-COVID conditions – Broad range of symptoms (physical and mental) that develop during or after COVID-19, continue for ≥2 months (ie, three months from the onset), and are not explained by an alternative diagnosis.
These stages reflect symptomatic recovery and are not related to active viral infection and infectivity. (See “COVID-19: Epidemiology, virology, and prevention”, section on ‘Viral shedding and period of infectiousness’.)
Several other terms have been used to describe prolonged symptoms following COVID-19 illness, such as “long COVID,” “post-acute sequelae of SARS-CoV-2 infection (PASC),” “post-acute COVID-19,” “chronic COVID-19,” and “post-COVID syndrome” [8-12]. Despite the creation of case definitions, there are no widely accepted clinical diagnostic criteria for “long COVID.” However, as of October 1, 2021, there is a new International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10) code for unspecified post-COVID conditions, U09.9, approved by the CDC.
Whether the constellation of symptoms and persistent issues experienced by these patients represents a new syndrome unique to COVID-19 or if there is overlap with the recovery from other infectious and critical illnesses has not been determined. The symptoms reviewed in this topic refer to those seen in any patient, including those recovering from mild, moderate, and severe (including critical) illness.
The World Health Organization has also created a global COVID-19 clinical platform case report form for clinicians and patients to collect and report information, to allow for better understanding of the spectrum of post-COVID-19 conditions and recovery.
The United States Department of Health and Human Services and the Department of Justice released a guidance statement on “long COVID” as a disability under the Americans with Disabilities Act, the Rehabilitation Act of 1973, and the Patient Protection and Affordable Care Act.
These acts provide protections for individuals with disabilities to allow for full and equal access to civic and commercial life. This statement classifies “long COVID” as a disability if it substantially limits, either physically or mentally, one or more major life activities. An individualized assessment is needed to determine whether a person’s symptoms fit these criteria.
PREVENTION OF POST-COVID CONDITIONS

The most effective means of preventing post-COVID conditions is to prevent COVID-19 itself (eg, social distancing, masking, hand hygiene, and vaccination). It is likely that any measure that decreases the incidence or severity of acute COVID-19 infection will in turn decrease the incidence and severity of post-COVID conditions.
A case-control study found that both symptom intensity in the first week of illness and persistent symptoms, defined as symptoms at 28 days or more, were significantly less common among those who developed postvaccination SARS-CoV-2 infection compared with unvaccinated cases. In addition, those who were vaccinated were more likely to be asymptomatic.
Persistent symptoms — Persistent physical symptoms following acute COVID-19 are common and typically include fatigue, dyspnea, chest pain, and cough. Patients recovering from COVID-19 may also have additional psychological (eg, anxiety, depression, posttraumatic stress disorder [PTSD]) and cognitive (eg, poor memory and concentration) symptoms, similar to the syndrome experienced by patients recovering from other critical illnesses known as post-intensive care syndrome (PICS). This syndrome is discussed in detail separately.
Studies on the prevalence of persistent symptoms are often limited by lack of control groups and surveillance, reporting, nonresponse, and selection biases.
Prolonged symptoms can follow mild or severe illness and include:
●Physical symptoms – Several observational series describe persistent symptoms in patients following acute COVID-19 with one-third or more experiencing more than one symptom (table 1) [1,4,5,17-52]. Common persistent physical symptoms include:
•Fatigue (13 to 87 percent)
•Dyspnea (10 to 71 percent)
•Chest pain or tightness (12 to 44 percent)
•Cough (17 to 34 percent)
Less common persistent physical symptoms include anosmia, joint pain, headache, sicca syndrome, rhinitis, dysgeusia, poor appetite, dizziness (from orthostasis, postural tachycardia, or vertigo), myalgias, insomnia, alopecia, sweating, and diarrhea.
●Psychological or cognitive – Psychological and cognitive complaints are also common during recovery from acute COVID-19 and may be seen more commonly than in those recovering from similar illnesses [1,4,22,28,29,34,38,48,49,52-55]. In one study of 100 patients with acute COVID-19 who were discharged from the hospital, 24 percent reported PTSD, 18 percent had new or worsened problems with memory, and 16 percent had new or worsened problems with concentration; numbers were higher among patients admitted to the intensive care unit (ICU).
In other studies, almost one-half of COVID-19 survivors reported a worsened quality of life [1,24], 22 percent had anxiety/depression, and 23 percent of patients were found to have persistent psychological symptoms at three months. Among ICU survivors, another study reported anxiety in 23 percent, depression in 18 percent, and posttraumatic symptoms in 7 percent. In a prospective cohort study of nonhospitalized Ecuadorian patients with COVID-19, most of whom had mild disease, 21 percent had memory impairment as evidenced by a four-point decrease in their Montreal Cognitive Assessment (MoCA) scores.
Psychological complaints may be seen more commonly than in those recovering from similar illnesses. As an example, a retrospective examination of electronic health records in the United States reported that the risk of developing a new psychiatric illness following COVID-19 was higher compared with those recovering from other medical illnesses such as influenza.
●PICS – Among ICU survivors, one single-center analysis reported that over 90 percent of individuals with COVID-19 suffered from at least one component of PICS. Another prospective observational study found that 9.9 percent of individuals who were discharged from the ICU with COVID-19 developed critical illness polyneuropathy or myopathy versus 3.4 percent of other patients discharged from the ICU. Details regarding the identification and management of PICS and weakness related to critical illness are discussed separately.
Persistent symptoms can affect functional ability [23,28,34,38,46,58]. As examples:
●In one retrospective study of approximately 1300 hospitalized COVID-19 patients discharged to home, despite home health services, only 40 percent of patients were independent in all activities of daily living (ADLs) at 30 days.
●In another study, almost 40 percent of patients were unable to return to normal activities at 60 days following hospital discharge.
●In another study of 219 patients who were hospitalized with COVID-19, 53 percent had limited functional impairment (as measured by the Short Physical Performance Battery [SPPB] score and two-minute walking test) at four months.
Whether symptoms can develop after initial asymptomatic infection is unknown. Limited data from self-reporting questionnaires, subgroup analyses of larger observational studies, and health care claim databases (some of which are not peer-reviewed), suggest that a small proportion of patients with asymptomatic COVID-19 subsequently report post-COVID symptoms (eg, fatigue) [59-61]. Further data are needed to clarify the scope of post-COVID symptoms in this population.
Data in children are sparse, but limited evidence suggests a lower prevalence of persistent symptoms. One retrospective study of children and adolescents (median age 11 years) described at least one symptom lasting beyond 12 weeks in 4 percent of the study cohort.
The most frequently reported symptoms were tiredness (3 percent) and poor concentration (2 percent). (See “COVID-19: Clinical manifestations and diagnosis in children”, section on ‘Clinical course’.)
Persistent symptoms do not appear to worsen (and may improve) following the administration of the SARS-CoV-2 vaccine. This was illustrated in one study of 163 patients who had a heavy burden of post-COVID symptoms at eight months and who subsequently received the Pfizer-BioNTech (BNT162b2) or Oxford-AstraZeneca (ChAdOx1nCoV-19) vaccine. One month after vaccination, symptoms that existed prior to vaccination had either improved or remained unchanged in the majority of patients, while only 5 percent had worsened.
Expected recovery time course — The time to symptom resolution appears to depend upon premorbid risk factors as well as the severity of the acute illness and spectrum of symptoms experienced by the patient [1,2,63-66]. However, despite early data suggesting a shorter recovery (eg, two weeks) for those with mild disease and a longer recovery (eg, two to three months or longer) for those with more severe disease [67,68], there is wide variability in time to symptom resolution.
Early data suggested a longer recovery course in patients requiring hospitalization, older patients with preexisting comorbidities, patients who experienced medical complications (eg, secondary bacterial pneumonia, venous thromboembolism), and patients who had a prolonged stay in the hospital or ICU [1,2,17,22,49,69]. However, subsequent data suggest that even patients with less severe disease who were never hospitalized, including those with self-reported COVID-19, have often reported prolonged and persistent symptoms [4,5,17,36,64,70].
●Hospitalized patients (moderate to severe COVID-19) – Data suggest that a significant proportion of patients who are admitted with acute COVID-19 experience symptoms for at least two months and even longer (eg, up to 12 months) following discharge (52 to 87 percent) [1,22,27,49,58,71].
•In an observational study of 1600 patients in United States hospitals with acute COVID-19, at 60 days after discharge, 33 percent reported persistent symptoms and 19 percent reported new or worsening symptoms. The most common symptoms included dyspnea with stair climbing (24 percent), shortness of breath/chest tightness (17 percent), cough (15 percent), and loss of taste or smell (13 percent).
•In a study including approximately 1700 patients previously hospitalized with COVID-19 in Wuhan, China, at six months, 74 percent continued to experience one or more symptoms. Fatigue or muscle weakness (63 percent), sleep difficulties (26 percent), dyspnea (26 percent), and anxiety or depression (23 percent) were among the most commonly reported persistent symptoms.
In a follow-up study of the same cohort, although the proportion of patients with at least one symptom had improved at 12 months, 49 percent of patients remained symptomatic. Fatigue or muscle weakness remained the most common symptom (20 percent), but the proportion of patients with dyspnea (30 percent) and anxiety (26 percent) increased slightly.
●Outpatients (mild COVID-19) – Data also suggest that a significant proportion of patients with mild disease may experience symptoms for up to several months, if not longer, following acute illness [5,36,38,70-72].
•In a telephone survey of 292 outpatients with COVID-19, one-third had not returned to baseline health by three weeks. Younger patients were less likely to have residual symptoms compared with older patients (26 percent among those 18 to 34 years versus 47 percent of those >50 years). In addition, an increasing number of medical comorbidities was associated with prolonged illness among all age groups. Young and healthy patients with mild disease typically recovered sooner, while patients with multiple comorbidities had a more prolonged recovery.
•In a study of 410 Swiss outpatients with mild illness, 39 percent reported persistent symptoms seven to nine months following initial infection. The most common symptoms included fatigue (21 percent), loss of taste or smell (17 percent), dyspnea (12 percent), and headache (10 percent).
•In a prospective study, 177 patients recovering from acute COVID-19 (16 inpatients and 161 outpatients, 11 of whom had asymptomatic infection) were followed for an average of six months after acute illness. Of the outpatients with symptomatic infection, 19 percent had one to two persistent symptoms at six months, 14 percent had ≥3 persistent symptoms, and 29 percent reported a decreased quality of life. The most common reported persistent symptoms were fatigue, loss of sense of taste or smell, and dyspnea.
•In a Swedish survey of over 300 health care workers with mild disease, 26 percent had at least one moderate or severe symptom lasting more than two months, compared with 9 percent of seronegative control patients. A higher proportion also had symptoms lasting longer than eight months (15 versus 3 percent). Approximately 8 to 15 percent reported that their symptoms interfered with their work, social, or home life compared with 4 percent of seronegative control patients.
Some symptoms resolve more quickly than others. For example, fevers, chills, and olfactory/gustatory symptoms typically resolve within two to four weeks, while fatigue, dyspnea, chest tightness, cognitive deficits, and psychological effects may last for months (eg, 2 to 12 months) [1,4,5,18-22,25,36,49]. Data regarding individual symptoms are included below:
●Fatigue, weakness, and poor endurance – Fatigue is by far the most common symptom experienced by patients regardless of the need for hospitalization. Although the fatigue resolves in most patients, it can be profound and may last for three months or longer, particularly among ICU survivors [1,4,22,73].
●Dyspnea – In patients with COVID-19 and dyspnea, the shortness of breath may persist, resolving slowly in most patients over two to three months, sometimes longer (eg, up to 12 months) [4,22,49,74-76].
●Chronic cough – In several studies, many patients experienced persistent cough at two to three weeks following initial symptoms. Cough resolved in the majority of patients by 3 months and rarely persisted by 12 months.
●Chest discomfort – Among patients with COVID-19, chest discomfort is common and may resolve slowly. Chest discomfort persists in 12 to 22 percent of patients approximately two to three months after acute COVID-19 infection, rarely longer [1,4,49].
●Altered taste and smell – Several studies have examined the recovery of olfactory and gustatory symptoms in COVID-19 patients [18-21,58,77,78]. The majority have complete or near-complete recovery at one month following acute illness, although in some studies these symptoms persisted longer. Patients with hyposmia and male patients may recover more rapidly compared with those who have anosmia or are female [19,21].
●Neurocognitive symptoms – Data suggest that concentration and memory problems persist for six weeks or more in COVID-19 patients after discharge from the hospital.
●Psychological – Observational studies report that psychological symptoms (eg, anxiety, depression, PTSD) are common after acute COVID-19 infection, with anxiety being the most common. In general, psychological symptoms improve over time but may persist for more than six months for a subset of survivors. Those hospitalized are likely at greater risk for persistent psychological symptoms [4,22,23,49,53,79]. (See “COVID-19: Psychiatric illness”, section on ‘Patients critically ill with COVID-19’.)
Risk of rehospitalization — Most patients hospitalized with COVID-19 are successfully discharged, although approximately 10 to 20 percent require rehospitalization within 30 and 60 days, respectively [23,35,58,80,81]. As examples:
●In a retrospective study of over 100,000 patients admitted to United States hospitals with COVID-19, among those who were discharged, 9 percent were rehospitalized within two months to the same hospital. Among those readmitted, 1.6 percent had multiple hospital readmissions. The median time for first readmission was eight days. Risk factors for rehospitalization included age ≥65 years, discharge to skilled nursing facility (SNF) or with home health services, or the presence of one or more comorbidities (ie, chronic obstructive pulmonary disease, heart failure, diabetes mellitus with complications, chronic kidney disease, and/or a body mass index [BMI] ≥30 kg/m²).
●In another retrospective cohort of 1409 patients admitted with COVID-19, 10 percent were rehospitalized. Risk of rehospitalization or death was higher among male patients, White patients, and those with heart failure, diabetes, frequent emergency department visits within the previous six months, daily pain, cognitive impairment, or functional dependency.
●In another study of 1775 patients discharged following COVID-19, 20 percent were readmitted within 60 days; readmissions were associated with older age. Common readmission diagnoses were COVID-19 (30 percent), sepsis (8.5 percent), pneumonia (3.1 percent), and heart failure (3.1 percent). Over 20 percent required ICU admission, and the mortality was 9 percent. Rates of readmission or death were highest during the first 10 days following discharge.
●In a United Kingdom study of nearly 50,000 patients who were discharged following an admission with COVID-19, 30 percent were readmitted and 10 percent died after discharge. There were higher rates of respiratory disease, diabetes, and cardiovascular disease in patients discharged following COVID-19 compared with patients discharged with non-COVID diagnoses.
GENERAL EVALUATION

Patients recovering from COVID-19 range from those with mild illness not requiring medical attention to those with severe illness requiring prolonged critical care support.
Several organizations have developed guidelines to address the evaluation and management of patients recovering from COVID-19, and many institutions have established dedicated, interdisciplinary outpatient COVID-19 recovery clinics to address the long-term needs of patients after recovery from acute illness [6,9,83-94]. Given the unknown long-term sequelae of those with persistent symptoms following COVID-19, clinic protocols generally include a comprehensive physical, cognitive, and psychological assessment. High quality data on the outcomes of these evaluation and management strategies are lacking. Care should not be delayed if patients experience a long wait time for evaluation in a dedicated COVID-19 recovery clinic; referral to pulmonary, neurology, and/or physical medicine and rehabilitation specialists may be appropriate if referral to a COVID-19 recovery clinic is unavailable.
Our approach is based upon our clinical experience with patients who have recovered from acute COVID-19, accumulating data on patients with persistent symptoms following acute COVID-19, and data extrapolated from patients recovering from similar illnesses (eg, sepsis) and is consistent with expert advice from international societies and guideline groups [84-90,94-96].
Timing and location of follow-up evaluation — The optimal timing and location of follow-up evaluation for patients who have recovered from acute COVID-19 are unknown and depends upon several factors, including the severity of acute illness, current symptomatology, patient age, risk factors for severe illness (table 2), and resource availability.
The timing and location of follow-up for outpatients during the acute illness (eg, up to two to three weeks following illness onset) is reviewed in detail elsewhere. (See “COVID-19: Outpatient evaluation and management of acute illness in adults”, section on ‘Management and counseling for all outpatients’ and “COVID-19: Outpatient evaluation and management of acute illness in adults”, section on ‘Telehealth follow-up’.)
Our approach to the follow-up of patients after the acute illness has “resolved” (eg, after approximately three to four weeks) is discussed in this section. The recovery process exists on a continuum; follow-up early in the course of acute COVID-19 is focused on detecting and managing acute COVID-19-related complications, while later follow-up focuses on the evaluation and management of persistent symptoms after recovery from the acute phase (see ‘Terminology and stages of recovery’ above). While there is no guidance on timing or location for COVID-19 follow-up after the acute illness, we suggest the following:
●In an otherwise healthy young patient with mild disease not requiring medical intervention or hospitalization and who is improving, we do not routinely schedule a COVID-19 follow-up visit (telemedicine or in-person), unless the patient requests it or has persistent, progressive, or new symptoms.
●In an older patient or a patient with comorbidities (eg, hypertension, diabetes) with mild to moderate acute disease but not requiring hospitalization, we typically schedule a telemedicine or in-person visit approximately three weeks following the onset of illness.
●For patients with more severe acute COVID-19 disease requiring hospitalization (with or without the need for subsequent post-acute care such as inpatient rehabilitation), we ideally follow up within one week, but no later than two to three weeks, after discharge from the hospital or rehabilitation facility. We typically use telemedicine visits to facilitate early follow-up, given that hospital readmissions may be reduced with early post-discharge follow-up based upon data reported for patients recovering from sepsis.
●For all patients with persistent symptoms, particularly those with multisystem complaints or symptoms lasting beyond 12 weeks, we refer for an evaluation in a specialized outpatient COVID-19 recovery clinic, if available, or a subspecialty clinic relevant to the patient’s specific symptoms.
Assess disease severity, complications, and treatments — During the initial follow-up evaluation, we obtain a comprehensive history of the patient’s acute COVID-19 illness, including the illness timeline, duration and severity of symptoms, type and severity of complications (eg, venous thromboembolism, presence and degree of kidney injury, supplemental oxygen requirements [including the need for noninvasive or invasive ventilation], cardiac complications, delirium), COVID-19 testing results, and initial treatments used. We review hospital and outpatient records and the patient’s medication list. This information is compared with their pre-COVID-19 medical history.
General laboratory testing — The need for laboratory testing in patients who have recovered from acute COVID-19 is determined by the severity of the acute illness, any abnormal test results during that illness, and current symptoms. Most patients who have abnormal laboratory testing at the time of diagnosis improve during recovery.
●For most patients who have recovered from mild acute COVID-19, laboratory testing is not necessary.
●For patients recovering from more severe illness, those with identified laboratory abnormalities, patients who were discharged from hospital or an inpatient rehabilitation facility, or for those with unexplained continuing symptoms, it is reasonable to obtain the following:
•Complete blood count
•Blood chemistries, including electrolytes, blood urea nitrogen (BUN) and serum creatinine
•Liver function studies, including serum albumin
●Additional laboratory tests that might be appropriate for select patients include:
•Brain natriuretic peptide (BNP) and troponin in patients whose course was complicated by heart failure or myocarditis or in those with possible cardiac symptoms from covert myocarditis (eg, dyspnea, chest discomfort, edema).
•D-dimer in patients with unexplained persistent or new dyspnea or in any patient in whom there is a concern for thromboembolic disease.
•Thyroid studies in those with unexplained fatigue or weakness.
•Antinuclear antibody and creatine kinase in patients with arthralgias, myalgias, or other symptoms concerning for rheumatologic disorders.
We generally do not monitor coagulation parameters (eg, fibrinogen, fibrinogen degradation products, activated thromboplastin time, international normalized ratio, and D-dimer levels) or inflammatory markers (eg, erythrocyte sedimentation rate, C-reactive protein, ferritin, interleukin-6) to resolution.
COVID-19 testing and serology — We do not routinely re-test patients for active infection with SARS-CoV-2 at the time of follow-up outpatient evaluation. Instead, we follow a non-test-based approach to removing infectious precautions. This approach is supported by the World Health Organization and the Centers for Disease Control and Prevention (table 3). (See “COVID-19: Infection control for persons with SARS-CoV-2 infection”, section on ‘Discontinuation of precautions’ and “COVID-19: Diagnosis”, section on ‘Persistent or recurrent positive NAAT during convalescence’ and “COVID-19: Epidemiology, virology, and prevention”, section on ‘Viral shedding and period of infectiousness’ and “COVID-19: Epidemiology, virology, and prevention”, section on ‘Immune responses following infection’.)
In addition, there is no clinical utility in obtaining SARS-CoV-2 serology (antibodies) in patients who had their acute infection documented by a positive molecular test (ie, nucleic acid amplification test [NAAT], reverse transcriptase polymerase chain reaction [RT-PCR] test) or antigen test. However, for patients with prior COVID-19 based upon symptoms but without a documented positive molecular or antigen test, the value of obtaining SARS-CoV-2 serology is unclear. Regardless, we sometimes obtain serology to guide additional testing or decision-making (eg, convalescent plasma donation, evaluation of unexplained symptoms). | <urn:uuid:de3c4410-91f1-492f-a54f-b22611f32827> | CC-MAIN-2022-40 | https://debuglies.com/2021/11/19/new-report-on-functional-impairment-following-critical-illness-from-covid-19/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00502.warc.gz | en | 0.942471 | 7,147 | 2.6875 | 3 |
With the exponential growth in communications, driven largely by the wide acceptance of the Internet, many carriers have found that they greatly underestimated their fiber needs. Although most cables included many spare fibers when installed, this growth has used up many of them, and new capacity is required. A number of approaches can ease this problem, but WDM has proven the most cost-effective in most cases.
Wavelength Division Multiplexing (WDM) enables multiple data streams of differing wavelengths (“colors”) to be combined onto a single fiber, significantly enhancing the overall capacity of the fiber. WDM is used in applications where large amounts of traffic must be carried over long distances in carrier networks. There are two types of WDM architectures: Coarse Wavelength Division Multiplexing (CWDM) and Dense Wavelength Division Multiplexing (DWDM).
WDM System Development History:
A WDM system uses a multiplexer at the transmitter to join the signals together, and a demultiplexer at the receiver to split them apart. With the right type of fiber it is possible to have a device that does both simultaneously and can work as an optical add-drop multiplexer. The optical filtering devices used have conventionally been etalons (stable solid-state single-frequency Fabry–Pérot interferometers in the form of thin-film-coated optical glass).
The idea was first published in 1978, and by 1980 WDM systems were being realized in the laboratory. The first WDM systems combined 3 signals. Modern systems can handle as many as 160 signals and can thus expand a basic 10 Gbit/s system over a single fiber pair to in excess of 1.6 Tbit/s.
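The capacity figures above are simple multiplication over independent wavelength channels; the sketch below checks them, using the per-channel rate and channel counts quoted in the text:

```python
# Back-of-the-envelope check of the WDM capacity figures quoted above; the
# per-channel rate and channel counts are taken directly from the text.

def aggregate_capacity_gbps(channels: int, per_channel_gbps: float) -> float:
    """Total fiber capacity when WDM stacks independent wavelength channels."""
    return channels * per_channel_gbps

early = aggregate_capacity_gbps(3, 10)     # early laboratory-era systems
modern = aggregate_capacity_gbps(160, 10)  # a modern DWDM system

print(f"3-channel system:   {early:.0f} Gbit/s")
print(f"160-channel system: {modern:.0f} Gbit/s = {modern / 1000:.1f} Tbit/s")
```

Running this confirms that 160 channels of 10 Gbit/s each yield the 1.6 Tbit/s aggregate stated above.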
WDM systems are popular with telecommunications companies because they allow them to expand the capacity of the network without laying more fiber. By using WDM and optical amplifiers, they can accommodate several generations of technology growth in their optical infrastructure without having to overhaul the backbone network. The capacity of a given link can be expanded simply by upgrading the multiplexers and demultiplexers at each end.
This is often done by using optical-to-electrical-to-optical (O/E/O) translation at the very edge of the transport network, thus permitting interoperation with existing equipment that has optical interfaces.
WDM System Technology:
Most WDM systems operate on single-mode optical fiber cables, which have a core diameter of 9 µm. Certain forms of WDM can also be used in multi-mode fiber cables (also referred to as premises cables), which have core diameters of 50 or 62.5 µm.
Early WDM systems were expensive and complicated to operate. However, recent standardization and a better understanding of the dynamics of WDM systems have made WDM less expensive to deploy.
Optical receivers, as opposed to laser sources, tend to be wideband devices. Therefore the demultiplexer must provide the wavelength selectivity of the receiver in the WDM system.
WDM systems are divided into different wavelength patterns: conventional/coarse (CWDM) and dense (DWDM). Conventional WDM systems provide up to 8 channels in the third transmission window (C-band) of silica fibers, around 1550 nm. Dense wavelength division multiplexing (DWDM) uses the same transmission window but with denser channel spacing. Channel plans vary, but a typical system would use 40 channels at 100 GHz spacing or 80 channels at 50 GHz spacing. Some technologies are capable of 12.5 GHz spacing (sometimes called ultra-dense WDM); such spacings are today only achieved by free-space optics technology. New amplification options (Raman amplification) enable the extension of the usable wavelengths into the L-band, more or less doubling these numbers.
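As a sketch of how such channel plans are laid out, the snippet below generates DWDM grids centred on the standard ITU anchor frequency of 193.1 THz (per ITU-T G.694.1), using the 40-channel/100 GHz and 80-channel/50 GHz plans quoted above. The function name and the choice to centre the plan on the anchor are our illustration, not a standardized channel numbering:

```python
# Sketch of DWDM channel plans on the ITU grid. Anchor frequency 193.1 THz
# per ITU-T G.694.1; channel counts and spacings are the figures in the text.

C = 299_792_458  # speed of light, m/s

def dwdm_grid(n_channels: int, spacing_ghz: float, anchor_thz: float = 193.1):
    """Return (frequency_THz, wavelength_nm) pairs centred on the anchor."""
    grid = []
    for i in range(n_channels):
        offset = (i - n_channels // 2) * spacing_ghz * 1e-3  # GHz -> THz
        f_thz = anchor_thz + offset
        wavelength_nm = C / (f_thz * 1e12) * 1e9
        grid.append((round(f_thz, 4), round(wavelength_nm, 2)))
    return grid

plan_100ghz = dwdm_grid(40, spacing_ghz=100)
plan_50ghz = dwdm_grid(80, spacing_ghz=50)

print(f"40 ch @ 100 GHz spans {plan_100ghz[0][0]} to {plan_100ghz[-1][0]} THz")
print(f"80 ch @  50 GHz spans {plan_50ghz[0][0]} to {plan_50ghz[-1][0]} THz")
# Both plans occupy roughly the same ~4 THz slice of the C-band near 1550 nm,
# which is why halving the spacing doubles the channel count.
```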
Coarse wavelength division multiplexing (CWDM), in contrast to conventional WDM and DWDM, uses wider channel spacing to allow less sophisticated and thus cheaper transceiver designs. To provide 8 channels on one fiber, CWDM uses the whole frequency band between the second and third transmission windows (1310 nm and 1550 nm respectively), including both windows (the minimum-dispersion window and the minimum-attenuation window) but also the critical area where OH scattering may occur; OH-free silica fibers are recommended if the wavelengths between the second and third transmission windows are to be used. Avoiding this region, channels 47, 49, 51, 53, 55, 57, 59, and 61 remain, and these are the most commonly used.

Each WDM optical MUX is specified with the optical insertion loss and isolation of each branch. WDMs are available in several fiber sizes and types (250 µm fiber, loose tube, 900 µm buffer, Ø 3 mm cable, simplex fiber optic cable or duplex fiber cable).
WDM, DWDM and CWDM are based on the same idea of using multiple wavelengths of light on one fiber, but differ in the spacing of the wavelengths, the number of channels, and the ability to amplify the multiplexed signals in the optical domain. EDFAs provide efficient wideband amplification for the C-band; Raman amplification adds a mechanism for amplification in the L-band. For CWDM, wideband optical amplification is not available, limiting the optical spans to several tens of kilometres.
Whether you are a WDM optical MUX expert or this is your first experience with optical networking technologies, FiberStore products and services are designed for simplicity of use and operation across all applications. If you need to choose fiber optic cable to connect the WDM, you can refer to our fiber optic cable specifications. If you have any questions, please contact us.
Algorithmic bias, lack of artificial intelligence (AI) explainability and failure to seek meaningful consent for personal data collection and sharing are among the biggest barriers facing AI, according to analysis from the Centre for Data Ethics and Innovation (CDEI).
The CDEI’s AI Barometer analysis was based on workshops and scoring exercises involving 120 experts. The study assessed opportunities, risks and governance challenges associated with AI and data use across five key UK sectors.
Speaking at the launch of the report, Michael Birtwistle, AI Barometer lead at the CDEI, said: “AI and data use have some very promising opportunities, but not all are equal. Some will be harder to achieve but have high benefits, such as realising decarbonisation and understanding public health risk or automatic decision support to reduce bias.”
Birtwistle said the CDEI analysis showed that what these application areas have in common is complex data flows about people that affect them directly. “We are unlikely to achieve the biggest benefits without overcoming the barriers,” he added.
Roger Taylor, chair of the CDEI, said: “AI and data-driven technology has the potential to address the biggest societal challenges of our time, from climate change to caring for an ageing society. However, the responsible adoption of technology is stymied by several barriers, among them low data quality and governance challenges, which undermine public trust in the institutions that they depend on.
“As we have seen in the response to Covid-19, confidence that government, public bodies and private companies can be trusted to use data for our benefit is essential if we are to maximise the benefits of these technologies. Now is the time for these barriers to be addressed, with a coordinated national response, so that we can pave the way for responsible innovation.”
The report found that the use of biased algorithmic tools – due to biased training data, for example – entrenches systematic discrimination against certain groups, such as reoffending risk scoring in the criminal justice system.
Bias is systemic
During a virtual panel discussion at the launch of the AI Barometer, Areeq Chowdhury, founder of WebRoots Democracy, discussed how technology inadvertently amplifies systemic discrimination. For instance, while there is a huge public debate about the accuracy rate of facial recognition systems to identify people from black and Asian minorities, the ongoing racial tension in the US has shown that the problem is wider than the actual technology.
According to Chowdhury, such systemic discrimination builds up from a collection of policies over a period of time.
The experts who took part in the CDEI analysis raised concerns about the lack of clarity over where oversight responsibility lies. “Despite AI and data being commonly used within and across sectors, it is often unclear who has formal ‘ownership’ of regulating its effects,” said the CDEI in the report.
AI needs cross-industry data regulations
Cathryn Ross, head of the Regulatory Horizon Council, who also took part in the panel discussion, said: “A biting constraint on the take-up of technology is public trust and legitimacy. Regulations can help to build public trust to enable tech innovation.”
Mirroring her remarks, fellow panellist Annemarie Naylor, director of policy and strategy at Future Care Capital, said: “Transparency has never been so important.”
The AI Barometer also reported that the experts the CDEI spoke to were concerned about low data quality, availability and infrastructure. It said: “The use of poor quality or unrepresentative data in the training of algorithms can lead to faulty or biased systems (eg diagnostic algorithms that are ineffective in identifying diseases among minority groups).
“Equally, the concentration of market power over data, the unwillingness or inability to share data (eg due to non-interoperable systems), and the difficulty of transitioning data from legacy and non-digital systems to modern applications can all stymie innovation.”
The CDEI noted that there is often disagreement among the public about how and where AI and data-driven technology should be deployed. Innovations can pose trade-offs such as between security and privacy, and between safety and free speech, which take time to work through.
However, the lockdown has shown that people are prepared to make radical changes very quickly if there are societal benefits. This has implications for data privacy policies.
The challenge for regulators is that existing data regulations are often sector-specific. In Ross’s experience, technological innovation with AI cuts across different industry sectors. She said a fundamentally different approach that coordinated regulations was needed.
Discussing what the coronavirus has taught policy-makers and regulators about people’s attitudes to data, Ross said: “Society is prepared to take more risk for a bigger benefit, such as saving lives or reducing lockdown measures.” | <urn:uuid:bcc3846d-e1d3-42ac-a325-51c96eaf6c2d> | CC-MAIN-2022-40 | https://www.computerweekly.com/news/252484851/AI-bias-and-privacy-issues-require-more-than-clever-tech | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00502.warc.gz | en | 0.956181 | 1,029 | 2.546875 | 3 |
Big Data Collection Opportunities and Problems in Higher Education
The term “Big Data” has become ubiquitous in higher education, especially in discussions of using data to help with student success. But what exactly is big data? After all, we have had loads of data around for a very long time. If you do an online search, you will get many different definitions, including:
According to Wikipedia, “big data is a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them.”
Microsoft states, “Big data is the term increasingly used to describe the process of applying serious computing power— the latest in machine learning and artificial intelligence—to seriously massive and often highly complex sets of information.”
The National Institutes of Health suggests, “Big Data is more than just very large data or a large number of data sources. Big Data refers to the complexity, challenges, and new opportunities presented by the combined analysis of data.”
One thing these definitions all have in common is that the data sets are large, complex, growing exponentially, and unwieldy. The collection of unstructured data has increased the amount of data collected tremendously. The world creates 100 terabytes of data every day, and it is estimated that 35 zettabytes of data will be created by 2020. A zettabyte is equal to 1 trillion gigabytes, or 10^21 bytes.
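The unit conversions in the figures above can be checked directly; the sketch below assumes decimal (SI) prefixes throughout, as the text does:

```python
# Sanity check on the data-volume figures quoted above, in decimal SI units.

GB = 10**9    # gigabyte in bytes
TB = 10**12   # terabyte in bytes
ZB = 10**21   # zettabyte in bytes

assert ZB == 10**12 * GB  # "1 trillion gigabytes", as stated in the text

daily_creation = 100 * TB   # "100 terabytes of data every day"
projected_2020 = 35 * ZB    # "35 zettabytes ... by 2020"

print(f"1 ZB = {ZB:,} bytes")
print(f"35 ZB / (100 TB per day) = {projected_2020 / daily_creation:.1e} days")
# Note the mismatch in scale between the projected stock and the quoted
# daily creation rate: such figures are worth checking before relying on them.
```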
Data is collected from many sources in addition to traditional databases like digital pictures, videos, social media posting, cell phones, web pages, emails, sensors, and many others.
In higher education, we must be careful that we are not trying to find patterns that do not exist
Amazon, Google, and Starbucks are just a few examples of companies that collect large amounts of data on our everyday activity. They use it to increase sales while making it easy for us to spend our money with them. The potential of what higher education could do with large amounts of student activity data offers a compelling reason to start collecting more, even without knowing how it might be used. Using data to upsell to students and to determine ways to enhance success is at our fingertips. An example is mapping a student’s pattern of going to study hall, tutoring, classes, or even the cafeteria. If the student’s pattern changes, it could be a sign of something wrong. A big question to ask is whether collecting this data crosses a line of privacy. Will institutions waver on the edge of paternalism?
According to Scientific America, people have what is called patternicity, we see patterns where they really do not exist. We have heard of people seeing images of Jesus in their toast or a cloud, they may see a pattern in stock market numbers. This is because of the priming effect which helps our brain and senses interpret stimuli based on expected models. Seeing patterns can be very helpful in solving problems; unfortunately, we do not have a detector in our brain that notifies us when a pattern does not really exist. In higher education, we must be careful that we are not trying to find patterns that do not exist.
Education by its nature is all about ethics. We expect students to be honest, do their own homework, and above all not plagiarize. For those in academia who do research, there are tenants that pertain to ethics including informed-consent, respecting confidentiality, and protecting individuals from harm. With this in mind we must make sure that institutions are not collecting data just because we can and it shows a pattern. We must analyze carefully if the interventions we are creating based on patterns found in our data sets are helping students and not just conforming to the expectations of society and the institution.
Higher Education institutions are no different than any other business that needs to survive. Behind student success goals, institutions conform to a system that values students getting good grades and having continued progress toward finishing a degree for the institutions to build revenue and stay in business. Without continued growth in enrollment, and students persisting to graduation, institutions of higher learning will struggle with funding. At the end of the day higher education institutions need to get their product to market, which is graduating students. Understanding why data needs to be collected, what can be determined with it, and how to protect it must be considered before we begin the process of mass collection and analysis. | <urn:uuid:2c1ac7f6-8d38-4665-b7f5-20c19d177589> | CC-MAIN-2022-40 | https://women-in-tech.cioreview.com/cioviewpoint/big-data-collection-opportunities-and-problems-in-higher-education-nid-24892-cid-266.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00702.warc.gz | en | 0.955704 | 889 | 2.921875 | 3 |
When thinking of small cells, mobile phone operators like Vodafone and EE will probably come to mind first – while this is perfectly correct, most people don’t realise that this is just one part of the picture. In fact, mmWave small cells have far more uses that extend into many other elements of our society.
These service providers are utilising small cells to create 4G and 5G wireless networks for consumers, empowering our smart phones, helping us connect to social media apps, get internet access and more.
However, one important application of small cells, which tends to go unseen, is the creation of private networks for industrial applications. Small, low power mmWave distribution nodes stand out as an emerging technology that can support the creation of wireless mesh networks in indoor and outdoor areas, which can be implemented to address the connectivity needs of large private organisations, such as industrial plants.
Why do industries need private mesh networks?
The consumer 5G networks used for mobile data and WIFI send signals back and forth from a 5G small cell to a base station (or macro cell) roughly 200-300 metres away, ultimately enabling backhaul connections on mobile phones across the country.
Industrial sites, however, present a different challenge: an industrial complex or large factory could range over several kilometres, within which there will be hundreds of critical machines needing consistent monitoring using data-rich technology. For example, the use of high-resolution (4K) cameras is becoming commonplace, where each camera alone can generate up to 50 megabits of data, even with compression in place.
To manage such a vast amount of connections and data exchange taking place, industries therefore need to implement a private network with robust coverage and the high capacity needed to connect all of these devices to their core system. This allows them to monitor everything in real time, and record and analyse that critical data.
This kind of system requires multi-gigabit level performance, resilient connections and the capacity to hold high data rates as standard.
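As a rough illustration of why multi-gigabit capacity is needed, the sketch below aggregates the ~50 Mbit per-camera figure quoted above over a few plausible fleet sizes; the camera counts themselves are assumptions for illustration:

```python
# Rough sizing of the monitoring load described above. The 50 Mbit/s
# per-camera figure comes from the text; the camera counts are assumptions.

def aggregate_load_gbps(cameras: int, mbps_per_camera: float = 50.0) -> float:
    """Sustained backhaul load in Gbit/s for a fleet of compressed 4K cameras."""
    return cameras * mbps_per_camera / 1000.0

for n in (100, 200, 500):
    print(f"{n:>3} cameras -> {aggregate_load_gbps(n):.1f} Gbit/s sustained")

# Even 100 cameras demand about 5 Gbit/s before sensors and control traffic
# are added, which is why fibre-grade capacity is required.
```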
Why use mmWave small cells?
Fibre cabling has the capability to provide this level of performance that industrial applications require, but in certain areas it’s not always accessible to achieve this. It can involve disruptive construction, digging, and often can’t be laid at all if the network area has obstructions such as rivers or pipework. In an industrial complex these kinds of obstructions are extremely common, and so other solutions need exploring.
By replacing fibre with small distribution nodes that use wireless connections, these private mesh networks can be achieved and tackle issues of obstructions or complex environments.
Using mmWave frequencies also enables the wireless connections to perform at a fibre-grade level. This is because the greater bandwidth available at mmWave frequencies can carry more data and at faster speeds.
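The link between bandwidth and data rate can be made concrete with the Shannon capacity formula, C = B log2(1 + SNR): capacity scales linearly with bandwidth at a given signal quality. The SNR below is an illustrative assumption, not a vendor specification; 2.16 GHz is a common channel width used in the 57-71 GHz band:

```python
import math

# Shannon capacity C = B * log2(1 + SNR) gives the data-rate ceiling of a
# channel. The 15 dB SNR is an illustrative assumption, not a vendor figure.

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)  # GHz * bit/s/Hz = Gbit/s

narrow = shannon_capacity_gbps(0.16, 15)   # a 160 MHz sub-6-GHz-style channel
mmwave = shannon_capacity_gbps(2.16, 15)   # one wide mmWave channel

print(f"160 MHz channel:  {narrow:.2f} Gbit/s ceiling")
print(f"2.16 GHz channel: {mmwave:.2f} Gbit/s ceiling")
```

At the same link quality, the wider mmWave channel has more than a tenfold higher ceiling, which is the sense in which greater bandwidth "carries more data".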
Formed as a mesh network, these high-capacity wireless connections can connect to all devices in an industrial complex, allowing large amounts of data to be easily and quickly transferred back to the core network without the need for fibre.
Using a mesh network is particularly beneficial for multi-device networks; its ability to counteract interference and seamlessly re-route connections makes it more robust and reliable than a standard network. The flexible connections and small distribution nodes also make mesh networks easy to scale and expand as more devices are added.
What are the benefits of private mesh networks to industrial applications?
Deployment of mmWave distribution nodes for wireless private networks can be made using existing infrastructure, such as poles or walls, eliminating any large disruptions to operations and lowering cost of deployment.
Privacy and Security
In contrast to large mobile operators, private mesh networks are a completely independent system that only the factory or industrial complex can use. The privacy of this allows enhanced security to ensure sensitive data is contained.
License Exempt Band
Private networks built with mmWave nodes utilise the license-exempt (or unlicensed) frequency bands, which come with several benefits: lower costs, low latency and higher bandwidth. The license-exempt band functions between 57-71 GHz, offering 14 GHz of available spectrum, the largest amount of contiguous spectrum between DC and 100 GHz available in the industry today.
The license exempt band is free to use whereas the licensed spectrum, which is what mobile operators function in, need to be paid for through a license fee. No extra costs attached for providers translates directly to lower costs for customers.
Therefore, for industrial complexes that require high-bandwidth solutions, a large amount of spectrum, and reliable security to connect multiple devices, private networks are an exceptionally well positioned option.
WHY BLU WIRELESS
Blu Wireless are cutting edge leaders in the high-speed wireless connectivity and communications industry. Operating in the 57-71GHz band for mmWave wireless communications, we can deliver the appropriate bandwidth and spectrum needed to build an effective, flexible and reliable private network.
We work with our customers to understand their requirements and can offer an array of solutions, from mmWave technology to the final product, simplifying the creation and deployment of your customised, fit-for-purpose network. As well as the hardware, such as the radio and modem, we uniquely supply the application software which integrates the private mesh network together.
This mmWave technology is the next step for industries and Industry 4.0. Without it, the creation of private networks that can seamlessly manage data from sensors, 4K video, digital intelligence and more won’t be possible in the near future.
To find out more about our mmWave technology and how it can be used for industry private networks, get in touch. | <urn:uuid:bc358938-056b-4fa9-ad65-ce5677650184> | CC-MAIN-2022-40 | https://www.bluwireless.com/insight/mmwave-small-cells-creating-private-networks-for-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00702.warc.gz | en | 0.926115 | 1,137 | 3.046875 | 3 |
According to Acumen Research and Consulting, the global building thermal insulation market size is expected to reach around US$ 40 billion by 2026, growing at a CAGR of around 4.5% throughout the forecast period 2019-2026.
Thermal insulation is the method of reducing heat transfer between neighboring environments while maintaining the ambient temperature indoors. Thermal insulation in buildings contributes to reducing the carbon footprint, as less energy is used to maintain a temperature and heat transmission to the external climate is restricted. Insulated houses and buildings are less affected by the outside temperature. Thermal insulation in buildings may be done with different materials, including fiberglass, plastic foam and mineral wool (including slag wool, etc.). Foamed plastics include expanded polystyrene, extruded polystyrene, polyurethane and other forms.
The report provides analysis of global Building Thermal Insulation market for the period 2015-2026, wherein 2019 to 2026 is the forecast period and 2018 is considered as the base year.
Growth is expected to result from increased residential and commercial use of the product to decrease overall energy costs, coupled with increased awareness of energy conservation. Favorable regulations driven by the push to reduce overall energy use will probably be a key driver of the market. Thermal insulation reduces dependence on heating, ventilation, and air conditioning (HVAC), lowering overall energy consumption, which is expected to benefit growth. The U.S. accounts for North America's largest share. Various initiatives, such as the Weatherization Assistance Program (WAP), which focus on large-scale product adoption in low-income households, are expected to play a pivotal role in driving growth.
Favorable building codes in the United States and Canada, coupled with the establishment of the Leadership in Energy and Environmental Design (LEED) certification and other United States energy certification programs such as those of the U.S. Green Building Council (USGBC), are expected to have a positive impact on demand for building thermal insulation. However, stringent regulations imposed on the use of foamed plastics by the U.S. Environmental Protection Agency (EPA), due to their low biodegradability and carcinogenicity, could adversely impact market growth.
Over the forecast period, the environmental impact of insulation materials is likely to shift the industry's focus toward developing environmentally friendly products. Moreover, rising plastic foam prices are expected to hamper industry growth and spur the development of alternative products. Owing to strict regulations governing conventional products such as plastic foams, recyclable insulation is becoming more popular. A growing preference among homeowners, architects, and companies for green, biodegradable, and recyclable products, driven by increasing environmental awareness, will raise the threat of substitutes in the market over the forecast period.
Owing to its excellent thermal insulation properties and long service life, expanded polystyrene (EPS) is likely to register the highest revenue CAGR of 5.0% over the forecast period. Furthermore, growing preference for the product on account of its non-toxic, rot-resistant, and recyclable properties is expected to boost growth. Extruded polystyrene (XPS) is estimated to show significant growth over the forecast period owing to its water resistance, energy savings, and ability to reduce humidity-based damage. Its ability to inhibit fungal and microbial growth in insulated areas is further expected to boost growth. Mineral wool insulation accounted for an estimated 12.3% market share in 2018. It is projected to grow significantly over the forecast period owing to superior product features including fire safety, heat resistance, environmental compliance, and dimensional stability. The growing use of mineral wool in thermal barrier applications is expected to drive growth during the forecast period. Other products, such as aerogel, cotton wool, and slag wool, should register moderate growth over the period owing to growing product penetration in North America. Furthermore, increasing preference for these materials as an alternative to foamed plastic insulation is estimated to stimulate growth.
Roof applications accounted for over 35 percent of the market in 2018 and are forecast to register a substantial CAGR over the forecast period, as heat gain through the roof from direct sunlight is increasing. Furthermore, growth in the number of single-family housing units will support market growth during the forecast period. Wall applications led the building thermal insulation market in 2018 and, owing to the increasing use of the product on exterior and interior walls, are estimated to grow at 4.9 percent in revenue terms over the forecast period. In addition, increased product penetration in cavity-wall insulation is expected to boost industry growth. Moderate growth is projected for HVAC applications over the forecast period, driven by the growing demand to reduce energy costs. Floor applications for thermal insulation include garage, basement, cantilever, and crawl-space floors. Increased product penetration in floor insulation in extremely cold regions is expected to fuel market growth.
End-Use Insights
Residential construction is estimated to register the highest CAGR of 5.0 percent over the forecast period, owing to high growth in single-unit housing combined with the renovation and re-insulation of older houses. Moreover, projected growth in multi-family building activity is expected to support market growth. The commercial construction sector accounted for approximately 50 percent of total revenue in 2018. Growth over the forecast period is expected to be supported by efforts to increase the energy efficiency of commercial and public buildings, where high energy costs result in increased maintenance costs.
Europe held the largest share of the market in 2018, at more than 35%, and is expected to remain the leading market through 2026. Initiatives to promote thermal insulation as a means of energy conservation, such as the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation and European Commission programs, are expected to support regional growth. The boom in residential and commercial building activity in North America, combined with the enforcement of strict green building codes to reduce energy consumption per structure, is expected to aid growth. Additionally, favorable government regulations concerning the product for both residential and commercial structures are expected to boost growth over the forecast period. Asia Pacific is expected to register the highest growth, driven by increasing construction activity to meet the demands of a growing population. China is estimated to account for the largest share in the region, owing to various government efforts to improve public infrastructure. In Central and South America, political and economic instability is expected to adversely affect regional market growth. However, government efforts to improve the economic situation are expected to support growth in the near future.
Global Building Thermal Insulation Market, By Product
Global Building Thermal Insulation Market, By Application
Global Building Thermal Insulation Market, By End Use
Global Building Thermal Insulation Market, By Geography
The market research study on “Building Thermal Insulation Market - Global Industry Analysis, Market Size, Opportunities and Forecast, 2019 - 2026” offers detailed insights on global Building Thermal Insulation market segments with market dynamics and their impact. The report also covers basic technology development policies.
The report provides an analysis of the latest industry trends from 2015 to 2026 in each sub-segment and forecasts revenue and volume growth at the global, regional, and country levels. For this study, ARC has segmented the global Building Thermal Insulation market report by product, application, end use, and region.
Key Players & Strategies
Market players focus on increasing their share by means of mergers and acquisitions. Industry participants are focused on research and development activities in order to expand their product portfolios by developing cost-effective products with improved properties. Manufacturers also strive to expand their production capacity to meet rising demand for the product. In most application segments, the market is mature and has experienced slow but stable growth. Owing to the high demand for insulating materials, the market is highly price sensitive. New construction and the implementation of energy-efficiency codes will likely lead to high industry rivalry. Some of the key players in the market are Armacell, CertainTeed, Johns Manville, Saint-Gobain, Dow Building Solutions, Huntsman International, Owens Corning, BASF Polyurethanes, and Kingspan Group.
A driver’s license is an extremely valuable document that contains important personal information. In most states, a driver’s license displays your name, address, and date of birth, as well as physical characteristics like height, weight, and a photo. In the wrong hands, a driver’s license can be used to make fraudulent purchases, open bank accounts, and cash checks.
As more services and systems migrate to digital platforms, data and information are becoming increasingly vulnerable to cyberattacks. Although most banks and organizations have cybersecurity software in place, there have been many notable data breaches in recent years. People must be proactive to prevent driver’s license identity theft from happening.
Table of Contents
- What is Driver’s License Identity Theft?
- Keep Your License Safe
- Shore Up Your Identity Online
- What to Do If Driver’s License ID Theft Occurs
- Take Cybersecurity Seriously
What is Driver’s License Identity Theft?
Driver’s license identity theft is when an unauthorized person gains access to someone’s driver’s license and manipulates the photo, information, or both, to use it as their own or to sell it to someone as a false form of identification. If the person who steals the identity incurs a driving offense or uses the ID to commit a form of fraud, the violation appears on the victim’s record.
In most cases, identity theft occurs when someone’s license is misplaced or stolen. License holders should check regularly that all their identification cards are safe and secure. If your license is missing, you should report it immediately to the police and the Secretary of State’s office.
Keep Your License Safe
As with most crimes, the best way to minimize the effects of identity theft is to take preventative action. Driver’s license holders should be conscious of where their license is, ensuring that it is kept safe at all times. You must also be careful about who you share your license information with.
Avoid Using Your License as Collateral
It’s quite common for forms of identification to be requested as collateral. For example, when you rent an apartment, test drive a car, stay in a hotel, drive around a race track, or rent other forms of high-value equipment, the organization temporarily holds your ID in case any damage or misuse occurs.
Although identification documents may be needed as a form of collateral or a security deposit, avoid using your driver’s license or other documents containing sensitive information. It can help to explain your reasons; if you share your concerns about identity theft, most organizations will respect your request.
There are few legitimate reasons for any third-party organization to need your driver’s license. Be very careful who you share this information with and avoid doing so if possible.
Utilize Safe Automated Renewal Options
License holders should make the most of modern technology by using automatic renewal options when their licenses are about to expire. This can reduce the chances of lost driver license identity theft as it minimizes the risk of human error in the renewal process. Whether it’s done by phone, email, or web application, utilize this feature where possible. Remember to destroy your old license using a cross-cut shredder once your new one arrives.
Request Your Driving Records Every Year
Sometimes, identity thieves can act subtly to stop you from noticing that your ID is being used fraudulently. A good way of spotting theft is by scanning your records for inconsistencies. In some states, it’s free to request a copy of your driving record, and in most other states, it is quite affordable. It’s good practice to perform this task annually.
Shore Up Your Identity Online
A stolen or lost driver’s license is just the tip of the iceberg when it comes to identity theft. The information a license contains can be used to gain access to even more confidential data, such as a bank account, email address, or social media account. Shoring up your online identity can prevent your data from being exploited before you even realize your driver’s license information has been stolen.
Freeze Your Credit
Freezing your credit limits access to your financial records, preventing new credit files from being opened. This process is free, and you can unfreeze your credit if you want to open an account.
Take Care of Your SSN
Your Social Security Number is one of the most valuable pieces of information regarding your personal data. Keep your SSN as private as possible and only provide it when you’re certain it is necessary. Safely store or destroy any physical paperwork that shows your SSN.
Prioritize Password Strength
Always use strong, unique passwords that are unpredictable. Passwords should have at least eight characters, with a mix of upper and lower case, numbers, and non-alphanumeric symbols. It’s good practice to change your passwords twice per year. Use two-factor authentication for accessing important platforms, such as online banking.
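Those rules are concrete enough to check automatically. The Python sketch below tests a candidate password against them; the example passwords are illustrative only, and real systems would add further checks (such as rejecting known-breached passwords).

```python
import string

def meets_policy(password):
    """Check the password rules described above: at least eight
    characters, with upper case, lower case, digits, and symbols."""
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("Tr0ub4dor&3"))  # True: 11 chars, all four classes
print(meets_policy("password"))     # False: no upper case, digit, or symbol
```

Length matters most here: each extra character multiplies the number of guesses an attacker must make far more than swapping in one extra symbol does.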
Use Smart Alerts
Smart alerts from financial institutions can inform you immediately when transactions or changes are made to your account. This gives you the best opportunity to act if fraudulent behavior is detected.
Set Security Codes on Mobile
If a driver’s license is physically stolen, the thief may also target your phone. Using fingerprint authentication or security codes prevents a thief from gaining immediate access to your phone and its applications.
What to Do If Driver’s License ID Theft Occurs
Call Your Credit and Debit Card Issuers
Unfortunately, even the most reliable organizations experience data breaches. Whether it’s due to human error, sub-standard security measures, or an advanced security attack, your information may be vulnerable when it’s stored online or within company storage systems. If your data is stolen, it’s vital that you know what to do, so you can act fast and minimize the consequences.
Whether you’ve experienced lost driver license identity theft or information theft due to a data breach, the immediate steps you take are most important.
If your driver’s license is lost or stolen, you must:
- Contact the police and report when, how, and where your license was stolen or lost. This creates an official paper trail for your missing document.
- Report your missing license to the local Department of Motor Vehicles (DMV) to initiate license replacement proceedings.
- Freeze your credit lines (if you haven’t already done so).
- Monitor all important accounts for unusual activity.
- Consider changing the locks of your home, as your address is on your license.
If your license information is compromised due to a data breach, the organization responsible must contact you. It’s generally beneficial to follow their recommendations, monitor important accounts, and report this information to the DMV.
Take Cybersecurity Seriously
It’s easy to ignore the dangers of a lost driver’s license. It may not seem like a real threat until it occurs. The process of freezing bank accounts, resetting social media platforms, and trying to prevent your information from being compromised following driver’s license loss or theft can be stressful and costly. From installing computer security software to frequently changing passwords, it’s never been more important to take cybersecurity seriously.
The future of work is not about humans being replaced by robots. Rather, it is about us learning to work alongside smart, automated technology that will augment our capabilities while allowing us to focus on skills that are uniquely human.
We have been sharing our workplaces with robots for some time now – the earliest industrial robots were used in the mid-20th century, usually to carry out routine, manual assembly work on production lines. What makes today’s industrial robots different is that they are capable of carrying out work in a way that is truly autonomous, without needing direct control or input from us to tell them how to do it. This is because they are controlled by artificial intelligence (AI) – specifically software algorithms that use machine learning to enable them to continuously get better and better at their jobs.
If, as is true of many of us, your concept of what a robot is comes from science fiction, then many of today’s industrial robots may not look quite as you expect them to. This is because they are generally built to carry out one particular task, so they will often look pretty similar to whatever regular, non-AI machine is usually used for that job. The term “robot” is also sometimes used to refer to autonomous systems built entirely in software, as in Robotic Process Automation (RPA).
A famous example of human and robot collaboration is Amazon's warehouse robots that work alongside staff in its fulfillment centers. These robots have one job – to bring items to human pickers so they can be packaged and labeled for dispatch. They do this by moving entire shelving units and are programmed to watch out for humans so they will not collide and cause accidents. While the existing robots are limited to working in certain designated areas, a newer model currently being trialed, nicknamed "Bert," will be able to safely navigate anywhere on the factory floor. Amazon says that since it introduced robots to its warehouses in 2012, it has also created over a million human jobs.
Robots are often used to work on farms to carry out jobs that are dangerous or just boring. Autonomous drones can be used to plant seeds, spread fertilizers and pesticides, and watch out for invasive species or trespassers. Humans will oversee their work and step in when manual decisions need to be made. US startup Burro creates “people scale” collaborative robots (or “cobots”) that use computer vision and GPS to follow agricultural workers and assist them with day-to-day work. The market for robots in agriculture is predicted to be worth $11.58 billion by 2025.
Moxi is a cobot created by Diligent Robotics designed to help nurses and other staff on their rounds in hospitals. It can make deliveries and carry out a number of non-clinical tasks proactively, such as restocking supplies and collecting samples. It can do this without needing to be told precisely what to do, by integrating with electronic healthcare records. The idea is that robots like Moxi will leave human workers free to carry out the parts of their job that can be best done by humans, like providing care and compassion to the sick.
Wellbeing and Therapy Robots
Robots are increasingly used to help patients recovering from injury or surgery. The collaborative robots created by Italian startup Heaxel train those in recovery to carry out repetitive movements, monitor their progress towards recovery, and pass data back to human therapists who are able to use it to fine-tune the recovery program. Other robots have been created that are designed to live alongside the elderly or disabled. As well as providing a form of companionship, they assist caregivers by monitoring their wellbeing and watching out for accidents and falls in the home.
A robot called RoMan has been used by the US Army to clear roads of obstacles that may be providing cover for enemies, or other hazards such as improvised explosive devices. It uses 3D sensor data to determine whether objects will pose an obstacle or a hazard, a pair of mantis-like arms originally designed by NASA's Jet Propulsion Laboratory and is powered by deep learning algorithms.
So, robots have now gone full-circle and ended up back where they started, in manufacturing. But today’s collaborative manufacturing robots are significantly more advanced than they were when General Motors installed its first robots at its New Jersey plant in 1962. Symbio Robotics creates robots that are used by car manufacturers, including Ford and Toyota, not just for welding and spray painting but for installing components, picking parts, testing systems, checking for faults or defects, and screwing and bolting. These processes constitute the part of the manufacturing cycle known as “final assembly," which traditionally is the most complex and difficult to automate. This is because they require a greater degree of precise control and manual dexterity, which hasn’t been available in robotic systems until recently.
Fast-food chains have been quick to adopt automation in their drive to increase service speed and bring down operating costs. Miso Robotics has created a kitchen cobot that has been trialed by companies including Caliburger and Walmart, as well as at Dodger Stadium. The robot, known as Flippy, assists human chefs by flipping burgers and frying chicken, and unlike human chefs it is capable of working for 100,000 hours without a break.
Earlier this year, hackers hijacked text messages containing two-factor authentication codes sent to German online banking customers, allowing them to siphon money from multiple bank accounts. Though undoubtedly a surprise to the users who were defrauded, this was hardly shocking to the security community. Experts have long warned of vulnerabilities in Signaling System 7 (aka SS7), a key protocol used by wireless networks.
Developed in the 1980s, this protocol has grown long in the tooth. SS7 contains multiple security holes which allow hackers to eavesdrop on calls, read or redirect SMS messages, and track a device’s location. All a criminal requires is access to a carrier’s network – from there, they’ve free rein to do whatever they please.
In this case, they hijacked a bank’s two-factor authentication system. But they could do much more.
Imagine a terrorist tracking the location of military personnel via GPS. Imagine an unscrupulous government listening in on calls made by foreign diplomats. Imagine an attacker gaining access to your business’s data center by hijacking your SMS authentication. Imagine an underhanded competitor eavesdropping to get wind of an upcoming acquisition, then going behind your back to steal the company from you.
SS7 vulnerabilities aside, SMS has its own security weaknesses. It’s not designed to be secure – it’s designed to be convenient. Using it without encryption is therefore just asking for trouble.
In short, voice calls and text messaging need to be treated with just as much care – and skepticism – as any other communication medium. So, why aren’t they?
Tackling the Challenges of Secure Voice
For SMS, I believe it’s mostly because people don’t realize how insecure it is – they’re either unaware or wilfully ignorant of its flaws. Secure voice, on the other hand? That’s a bit more challenging.
For call encryption to be useful, it cannot interfere in any way with a phone conversation. The call quality and latency must be exactly what they would be with no encryption at all. And employees must never be forced to do additional legwork to encrypt their calls – the only thing they should have to do is dial a number.
Not only that, an encrypted voice solution must integrate with a wide range of devices, systems, and carriers. If you can only encrypt calls between two people on the same mobile network, that means all your contractors, partners, and vendors must use the same mobile carrier. That’s hardly a reasonable expectation.
Lastly, there’s the challenge of installation. Many organizations might not have the budget to purchase and manage extensive hardware for secure calling. And configuring such systems to integrate with the rest of an organization’s infrastructure can be downright daunting.
Enter SecuSUITE. Part of our BlackBerry Secure approach to unified mobility, it’s designed to the strictest regulatory standards, and is the preferred solution for high-security agencies around the globe. Available for both government and enterprise, it’s simple to deploy and manage, and compatible with multiple devices and operating systems. Thanks to BlackBerry’s leading NOC infrastructure, it’s also highly reliable, and your calls over Wi-Fi will deliver the same performance as calls over carrier networks.
Because SecuSUITE is software-based, it’s easy to install, manage, and use, too. Deployment costs are minimal, and users can make calls and send text messages the same way they always would, with no interruptions. It also secures cross-network communication, so there’s no need to lock your organization or its partners to a single carrier.
Protect Your Conversations with SecuSUITE
There’s an old saying – loose lips sink ships. With all the talk about the need for email security and encrypted messaging, it’s easy to forget that those aren’t the only channels you need to protect. Voice and SMS are equally as important, especially given that the latter is frequently used to communicate in regulated fields like healthcare. And thanks to outdated carrier infrastructure and the wide availability of advanced eavesdropping tools, ignoring secure voice is akin to leaving your door wide open in a bad neighborhood.
That’s why you need a tool like SecuSUITE – because at the end of the day, you never know who might be listening in.
For more information about BlackBerry’s updated software portfolio, check out our overview blog. You can also read more about BlackBerry Workspaces, BlackBerry Dynamics, the BBM Enterprise SDK, BlackBerry UEM, or our application suite. | <urn:uuid:a7fafaba-c66b-4b1f-a664-256ed9b1b170> | CC-MAIN-2022-40 | https://blogs.blackberry.com/en/2017/06/keep-your-calls-and-texts-safe-from-ss7-and-other-surveillance-tactics-with-blackberry-secusuite | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00702.warc.gz | en | 0.923819 | 959 | 2.53125 | 3 |
SCADA stands for Supervisory Control and Data Acquisition. SCADA is a type of network system used to monitor and control remote equipment and processes.
A basic SCADA system consists of a remote terminal unit (RTU) or programmable logic controller (PLC), a human-machine interface (HMI), and a communication protocol (such as SNMP, DNP3, or Modbus RTU).
The RTU is responsible for collecting real-time data from your remote equipment or industrial processes. The RTUs send all this information to the SCADA HMI. The HMI is usually SCADA software that provides a centralized user interface, acting as a remote monitoring and control room for all your RTUs. The communication protocol is the language the devices speak, which allows for the exchange of information.
The simplest possible SCADA system would be a single circuit that notifies you of one event. Imagine a fabrication machine that produces widgets. Every time the machine finishes a widget, it activates a switch. The switch turns on a light on a panel, which tells a human operator that a widget has been completed.
Obviously, a real SCADA system does more than this simple model. But the principle is the same. A full-scale SCADA system just monitors more stuff over greater distances. Let's look at what is added to our simple model to create a full-scale SCADA system:
First, the systems you need to monitor are much more complex than just one machine with one output. So a real-life SCADA system needs to monitor hundreds or thousands of sensors. Some sensors measure inputs into the system (for example, water flowing into a reservoir), and some sensors measure outputs (like valve pressure as water is released from the reservoir).
Some of those sensors measure simple events that can be detected by a straightforward on/off switch, called a discrete input (or digital input). For example, in our simple model of the widget fabricator, the switch that turns on the light would be a discrete input. In real life, discrete inputs are used to measure simple states, like whether the equipment is on or off, or tripwire alarms, like a power failure at a critical facility.
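In software, a discrete input boils down to change-of-state detection: report whenever the point switches on or off. The Python sketch below simulates this; the list of sampled values is an illustrative stand-in for readings from real hardware.

```python
def detect_events(samples):
    """Report rising and falling edges on a discrete (on/off) input,
    e.g. equipment power or a tripwire alarm at a remote site.

    Returns (sample_index, message) pairs for each change of state."""
    events = []
    previous = samples[0]
    for i, state in enumerate(samples[1:], start=1):
        if state != previous:
            events.append((i, "ALARM SET" if state else "ALARM CLEARED"))
        previous = state
    return events

# Simulated poll of one discrete input: off, off, on, on, off.
print(detect_events([0, 0, 1, 1, 0]))
# [(2, 'ALARM SET'), (4, 'ALARM CLEARED')]
```

Real RTUs work the same way conceptually: rather than streaming every sample to the HMI, they report changes of state, which keeps traffic low on slow telemetry links.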
Some sensors measure more complex situations where exact measurement is important. These are analog sensors, which can detect continuous changes in a voltage or current input. Analog sensors are used to track fluid levels in tanks, voltage levels in batteries, temperature and other factors that can be measured in a continuous range of input.
For most analog factors, there is a normal range defined by a bottom and top level. For example, you may want the temperature in a server room to stay between 60 and 85 degrees Fahrenheit. If the temperature goes above or below this range, it will trigger a threshold alarm. In more advanced systems, there are four threshold alarms for analog sensors, defining Major Under, Minor Under, Minor Over, and Major Over alarms.
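The four-threshold scheme can be sketched in a few lines of Python. The 60-85 degree normal range comes from the server-room example above; the Major thresholds of 50 and 95 degrees are illustrative assumptions.

```python
def classify_reading(value, major_under, minor_under, minor_over, major_over):
    """Map an analog sensor reading to a threshold alarm state.

    Thresholds follow the four-level scheme described above:
    Major Under < Minor Under < normal range < Minor Over < Major Over."""
    if value < major_under:
        return "Major Under"
    if value < minor_under:
        return "Minor Under"
    if value > major_over:
        return "Major Over"
    if value > minor_over:
        return "Minor Over"
    return "Normal"

# Server-room temperature, degrees Fahrenheit.
print(classify_reading(72, 50, 60, 85, 95))  # Normal
print(classify_reading(58, 50, 60, 85, 95))  # Minor Under
print(classify_reading(99, 50, 60, 85, 95))  # Major Over
```

The two-tier structure lets operators treat Minor alarms as early warnings to investigate and Major alarms as conditions requiring immediate action.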
Information sharing continues to become more complex. We utilize information sharing every day to streamline operations in our organizations. We want our information to be accessible, secure, and sharable. However, some information is not for everyone’s eyes.
The solution to this complex problem is Access Controls.
Access controls secure, control, and manage information sharing with internal and external users. In a world connected to the internet, our information is more vulnerable than ever. By establishing access controls, your organization’s data is protected from unapproved users.
There are two core fundamental aspects to achieve effective Access Controls – authentication, and authorization to identify, verify, and categorize user access.
Authentication confirms a user’s identity, and it comes in multiple forms. The most common is password-based authentication, using usernames and passwords. Another popular form is two-factor authentication, which requires the user to provide more than one form of identification: most commonly, a user signs in with a username and password, then a second factor, such as a code sent to a mobile device, a fingerprint, or facial recognition, confirms the user’s identity. The most complex form is multi-factor authentication, which combines three or more authentication factors. Once a user's identity is authenticated, authorization determines their access.
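As a minimal illustration of the first factor, password-based authentication, the Python sketch below derives and verifies a salted password hash using only the standard library. The iteration count and example password are illustrative assumptions, not a recommendation for any particular system.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a password hash with PBKDF2-HMAC-SHA256.
    The 100,000-iteration count is an illustrative choice."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Check a login attempt using a constant-time comparison,
    so the check's timing leaks nothing about the stored hash."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

A second factor would be verified separately (for example, a time-based one-time code), and only the combination of passing checks authenticates the user.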
Authorization determines what each user can access or edit. Authorization can be organized in different ways. Each user can be given specialized permissions within the digital space. Most commonly, user groups are created to help streamline the authorization process to ensure team members have equal access.
Access Control Types
To further the effectiveness of an access control system, a model is chosen based on the organization's needs. Discretionary Access Control (DAC) relies on the data's owner or creator to determine user access. Mandatory Access Control (MAC) is a non-discretionary model in which access is determined by information clearances set by the organization. Role-Based Access Control (RBAC), the most common model in use today, grants access to the data necessary for each role. Attribute-Based Access Control (ABAC) determines access from the relationships among identifying attributes of the data, the organization, and the user.
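As a minimal sketch of how the RBAC model maps onto code (the roles, permissions, and users here are invented for illustration):

```python
# Each role carries a set of permissions; users are assigned roles, not
# individual permissions, which is what makes RBAC easy to administer.
ROLE_PERMISSIONS = {
    "hr_manager": {"personnel:read", "personnel:write"},
    "engineer":   {"code:read", "code:write"},
    "auditor":    {"personnel:read", "code:read"},
}

USER_ROLES = {
    "victoria": {"hr_manager"},
    "sam":      {"engineer", "auditor"},
}

def is_authorized(user, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("victoria", "personnel:write"))  # -> True
print(is_authorized("sam", "personnel:write"))       # -> False
```

Adding a new hire then reduces to assigning roles, and revoking a role removes every permission it carried in one step.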
The world has shifted toward a remote workforce, driving thousands of people online to share data and perform daily tasks. Access controls help meet the needs of every organization, not only protecting and managing data but also driving productivity and improving the user experience.
Author: Victoria Robinson | Marketing Manager
Due to the rapid development of wireless communications, customers have increasing requirements for system capacity and spectral efficiency.
Various solutions, such as expanding the system bandwidth and increasing the modulation order, have emerged. However, expanding the system bandwidth only increases system capacity without effectively increasing spectral efficiency, and increasing the modulation order improves spectral efficiency only to a limited extent, because the modulation order can hardly be doubled.
Multiple-input multiple-output (MIMO), in contrast, improves spectral efficiency several-fold. MIMO is an extension of single-input single-output (SISO).
MIMO uses multiple antennas at the transmitter or receiver in combination with several signal processing techniques. MIMO improves radio link reliability and signal quality, which in turn helps increase system capacity, coverage, and user data rates, ultimately improving the user experience.
Massive MIMO achieves beamforming and multi-layer, multi-subscriber resource multiplexing, greatly increasing system capacity and 3D coverage.
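A rough back-of-the-envelope illustration of why spatial multiplexing multiplies spectral efficiency: in an idealized channel carrying n independent streams that share the same total transmit power, Shannon capacity grows almost linearly in n. Real channels fall short of this ideal, so treat the numbers as an upper bound:

```python
import math

def parallel_capacity(n_streams, snr_linear):
    """Capacity in bits/s/Hz of n ideal parallel spatial streams, with the
    total transmit power split evenly: n * log2(1 + SNR/n)."""
    return n_streams * math.log2(1 + snr_linear / n_streams)

snr = 100.0  # 20 dB, on a linear scale
for n in (1, 2, 4, 8):  # SISO vs. 2x2, 4x4, 8x8 MIMO
    print(f"{n} stream(s): {parallel_capacity(n, snr):.1f} bits/s/Hz")
```

Doubling the modulation order adds roughly one bit per symbol, whereas adding streams scales capacity multiplicatively, which is the point the article makes.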
The Importance of Encryption: Some Things Never Change
We humans have long had the desire to protect our important information and have worked to improve how we do so using ciphers and cryptography. By the 1970s, the first standardized encryption algorithm, the Data Encryption Standard (DES), had been created. Pretty Good Privacy (PGP) followed in 1991, and the Advanced Encryption Standard (AES) was standardized in 2001. Modern encryption algorithms include:
- Triple DES
The Encryption Debates
The topic of encryption has been enjoying some mainstream fame in the last few years as law enforcement agencies and large tech companies debate access rights to data on personal devices. It has also come up frequently when discussing compliance regulations, such as the looming General Data Protection Regulation (GDPR), and the associated data protection measures.
Both discussions approach encryption from different angles. Law enforcement entities want to be able to access encrypted data when investigating crimes or trying to prevent them. Tech companies vacillate between wanting to protect their customers’ data and wanting to cooperate for the greater good.
When discussing compliance and general data security, encryption is steadily more about preventing breaches and leaks than just meeting regulatory requirements. According to a Global Encryption Trends study conducted by the Ponemon Institute, the need to protect specific data has displaced compliance as the primary reason for implementing encryption. That said, compliance is still a major motivator.
Data Protection at All Stages
While more companies realize encryption is a vital security component, it is still not applied consistently throughout the business world. The same study revealed that only 43% of respondents claim to have an encryption strategy applied consistently across their enterprise. Businesses are dropping the encryption ball in one place or another: they might protect data in transit but not data at rest, or vice versa, or perhaps they are not encrypting data in public cloud services.
It's important in today's cyber-risky world that all of a business's data is protected across its various locations and stages, including email, storage, and file transfers. Using the right secure email solution and managed file transfer platform can help ensure your data is encrypted throughout its journey. It's best if these tools offer multiple encryption options, including SFTP, PGP, and SSL/TLS.
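As a small, concrete example of one of those options, Python's standard library configures certificate validation and hostname checking by default when you create a TLS context (shown here without opening a real connection):

```python
import ssl

# A default context is configured for safe client-side TLS out of the box.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # -> True: peer cert must validate
print(ctx.check_hostname)                    # -> True: hostname must match cert
```

Any tool that lets you disable these checks for convenience is weakening exactly the in-transit protection this section describes, which is why solutions that disallow low-security options are worth seeking out.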
More than Just Encryption
Encryption is just one part of an arsenal of security measures enterprises should use to protect sensitive data from prying eyes or mishaps. Other measures include employing firewalls, authentication, virus scanners, DLP tools, monitoring, data sanitation and data wiping, among others. Security testing and auditing are also essential.
If you can find a tool that offers all of these capabilities in one, then data security will be far easier for you to manage. Better still, find a solution that helps enforce security measures by disallowing low-security options, capturing compensating controls, and generating compliance audit reports. By using a solution that facilitates security and data protection, you can efficiently and effectively reduce your risk of breaches.
Beer is a popular beverage around the world, but here in Grand Rapids, Michigan, it’s a bit more. There’s a reason Grand Rapids is known as “Beer City, USA.”
Fermentation is the name of the biochemical process where the sugars in a liquid, typically derived from grains or other organic sources, are converted to alcohol and fizz (carbon dioxide) by yeast. It’s the key to making wine, beer and hard cider. Needless to say, with 37 breweries (in 2018) and growing, there is a lot of fermentation going on in Grand Rapids. Because of fermentation’s central role in beer production, monitoring the fermentation process is important to breweries.
We’ve found many breweries use a combination of computer systems and manual samples to do this. But could these manual samples be automated and improved? That was the question we decided to look at closely.
One of the many indicators that quality assurance teams use to monitor beer is specific gravity. Specific gravity is a measurement of a liquid’s density in comparison to water. So, if a beer has a higher specific gravity than another, it means that it has a higher mass given a constant volume. As beer ferments and converts those sugars to alcohol and gas, the specific gravity falls and the beer gets less dense, eventually approaching a point where there are very few sugars left and the fermentation process slowly stops.
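The drop in specific gravity also yields a quick alcohol estimate via the common brewers' rule of thumb ABV ≈ (OG − FG) × 131.25, where OG and FG are the original and final gravities:

```python
def estimate_abv(original_gravity, final_gravity):
    """Approximate alcohol by volume (%) from the drop in specific gravity,
    using the common brewing rule of thumb ABV ~= (OG - FG) * 131.25."""
    return (original_gravity - final_gravity) * 131.25

# A typical ale: wort at 1.050 finishing at 1.010 yields roughly 5.25% ABV.
print(round(estimate_abv(1.050, 1.010), 2))  # -> 5.25
```

The constant is an approximation that assumes all the lost density became ethanol, which is why brewers treat the result as an estimate rather than a lab measurement.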
Monitoring Specific Gravity in Fermentation
Specific gravity indicates the stage of fermentation; it tells you when things are on track or when fermentation is largely complete. It also indicates when things are going wrong, such as when fermentation gets "stuck" and the yeast stops consuming the sugar. There are ways to rescue a batch and kickstart fermentation, but first you need to know there is a problem. Enter automated specific gravity measurements, a perfect application of IoT's benefits in the brewing industry.
Manual methods of measuring specific gravity have been around for ages. Our goal was to explore ways to make continuous measurements with internet-connected devices. Research uncovered some innovative options, but two specific approaches stood out as the most promising.
Monitoring Gas Flow Through the Airlock
First was monitoring the production of carbon dioxide in the fermentation process by monitoring gas flowing through the airlock. The idea is, if you know the quantity of liquid in a vat and can track exactly how much gas is being created in the fermentation process, it is possible to then approximate the specific gravity of a liquid. As sugar is turned into alcohol and gas, this process is quite consistent and proportional. That means a certain volume of gas equates to a certain amount of sugar being consumed, and this will equate to a certain change in the liquid’s density.
When it comes to technical solutions, this approach is appealing in terms of cost. Many microbreweries already have a flexible hose that comes off the top of their fermentation tanks to direct gas into a 5-gallon bucket of water (a sort of one-way valve, preventing contaminants from making their way back through the tube into the vat). Fitting that hose with a gas flow monitor would be quite simple and cost-efficient. This assumes that monitoring liquid flow into the fermenter or approximating the amount present would be fairly straightforward.
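The chemistry behind that proportionality can be sketched directly: fermentation converts each glucose molecule into two molecules each of ethanol and CO2, so measured gas volume maps to sugar consumed. The numbers below assume ideal-gas behavior at standard conditions and ignore CO2 that stays dissolved in the beer, so a real system would need calibration:

```python
GLUCOSE_MOLAR_MASS_G = 180.16   # g/mol
CO2_LITERS_PER_MOLE = 22.4      # at standard temperature and pressure

def sugar_consumed_grams(co2_liters):
    """Grams of glucose fermented, inferred from CO2 volume at the airlock.
    C6H12O6 -> 2 C2H5OH + 2 CO2, so two moles of gas per mole of sugar."""
    moles_co2 = co2_liters / CO2_LITERS_PER_MOLE
    return (moles_co2 / 2) * GLUCOSE_MOLAR_MASS_G

def gravity_drop(co2_liters, batch_liters):
    """Approximate fall in specific gravity, taking ~2.6 g of sugar per liter
    as one gravity point (0.001) -- an approximation for simple sugars."""
    grams_per_liter = sugar_consumed_grams(co2_liters) / batch_liters
    return (grams_per_liter / 2.6) * 0.001
```

With a flow meter on the airlock hose, integrating the flow over time gives co2_liters, and the running gravity estimate follows with no probe inside the tank.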
Taking a Pressure Reading
The second was taking a differential pressure reading, that is, taking a pressure reading in the liquid at two different vertical heights. When you know the vertical distance between the two readings and the change in pressure between the two, it’s straightforward to compute a liquid’s density and specific gravity.
This second solution is great because it can be accomplished with off-the-shelf pressure sensors and no other measurement is needed, such as the volume of liquid present. However, these sensors would need to be installed by a professional welder as most fermentation tanks are made from stainless steel, thus increasing the upfront costs of such an approach.
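The underlying hydrostatics are equally simple; this sketch assumes the two sensors are mounted a known vertical distance apart and report pressure in pascals:

```python
WATER_DENSITY_KG_M3 = 998.0  # at roughly 20 C
G_M_S2 = 9.81

def specific_gravity(p_upper_pa, p_lower_pa, vertical_gap_m):
    """Liquid density from the pressure difference over a known height
    (rho = dP / (g * dh)), expressed relative to water."""
    density = (p_lower_pa - p_upper_pa) / (G_M_S2 * vertical_gap_m)
    return density / WATER_DENSITY_KG_M3

# Unfermented wort is denser than water, so SG starts above 1.0 and falls
# toward 1.0 as sugars are converted to alcohol and gas.
print(round(specific_gravity(0.0, 10_300.0, 1.0), 3))  # -> 1.052
```

Because only the pressure difference matters, the reading is independent of how full the tank is, which is what makes this approach attractive despite the welding cost.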
We didn’t yet get the chance to put either solution to the test, but it’s not too early to highlight the benefits we believe brewers would see if they could embrace the IoT revolution in fermentation monitoring. Imagine automated QA, reduced manual labor, reduction in human errors, AI-powered algorithms detecting issues long before a human could, making early predictions on completion dates and even better—more consistent beer.
Well, is your beer connected?
Touch-less Natural User Interface
A natural user interface (NUI) lets users quickly immerse themselves in an application and master its controls with minimal learning. This is critically important for AR/VR applications and ambient intelligence systems. In burgeoning applications like autonomous drone control and in-car infotainment navigation, an NUI can greatly increase usability.
One key contributor to NUI is touch-less gesture control, which allows users to manipulate virtual objects much as they would physical ones. It completely removes the dependency on mechanical input devices like a keyboard or mouse.
Gesture Control Devices
In the 1970s, wired gloves were invented to capture hand gestures and motions. The gloves use tactile switches and optical or resistance sensors to measure the bending of joints. Their clumsy setup limited those early gloves to research purposes.
One of the earliest commercial wired gloves for consumer market is the Power Glove (Fig. 1), released in 1989. It was used as a control device for the Nintendo game console.
Through the years, more accurate and lightweight wired gloves were developed. One advantage of wired gloves is that they require less computing power. They are useful in cases where haptic feedback is important, like industrial robot control. However, requiring the user to put on a glove is a barrier to mass market adoption.
Vision Based Gesture Recognition
Vision based gesture recognition uses a generic camera and/or range camera to capture and derive the hand gesture. It requires higher processing power compared to a wired glove. There are multiple methods for camera based gesture recognition.
Using a conventional 2D camera, simple gesture recognition can be implemented with functions provided by commercial or open source computer vision libraries, such as OpenCV (Fig. 2). The pipeline uses skin tone detection to locate hands in a constrained area, then finds the convex hull and convexity defect points of the hand contour.
This simplistic approach can handle basic tasks like finger counting, but it is not suitable for more complex applications: its reliability is strongly affected by factors like lighting and skin tone.
Another algorithm is the appearance-based method, which directly uses one or more hand images to match with a set of gesture templates. It can deliver pretty robust gesture classifications with machine learning methodology. The method supports simple gestures like starting a program with an open palm, stopping a program with closed palm, changing pages with hand swipe, and more.
However, the kinematic parameters of the hand joints are not available, so this method is not suitable for applications that require a more detailed representation of hand interaction with virtual objects in 3D space.
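A toy sketch of the appearance-based idea, matching a flattened image vector against stored gesture templates. Real systems use learned features rather than raw pixel distance, and the vectors below are invented for illustration:

```python
import math

def l2(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(frame, templates):
    """Return the gesture label whose template is closest to the frame."""
    return min(templates, key=lambda label: l2(frame, templates[label]))

# Flattened grayscale "images" of each gesture, purely illustrative.
templates = {
    "open_palm":   [0.9, 0.8, 0.9, 0.7],
    "closed_fist": [0.2, 0.1, 0.3, 0.2],
}
print(classify([0.85, 0.75, 0.8, 0.7], templates))  # -> open_palm
```

Swapping the distance function for a trained classifier is what turns this template scheme into the robust machine-learning approach the article describes.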
3D cameras that can perceive depth have become much more broadly available and cheaper in recent years (Choosing a 3D Vision Camera). In 2010, Microsoft released the Kinect V1 motion controller, using technology from PrimeSense. It provides strong three-dimensional body and hand motion capture in real time, freeing game players from physical input devices like keyboards and joysticks. Kinect also supports multiple users in a small room setting, engaging non-gamers of all ages to easily participate in fun games like sports and dancing.
Besides gaming, the platform supports many interesting applications. In sports, companies like Swing Guru have developed professional coaching applications for golf and baseball. The alternative, placing motion trackers on the user's body, is relatively expensive and inconvenient.
While Kinect primarily focuses on capturing body pose, Leap Motion developed a short range gesture capture device using a stereo infrared camera.
Leap Motion software is able to track fine gestures of two hands at high frame rate. It enables applications like drawing and manipulating small objects in virtual space. Some PC vendors partnered with Leap Motion to provide the user NUI in desktop applications like Computer Aided Design (CAD).
As discussed in the article “Choosing a 3D Vision Camera”, there is a growing number of low cost cameras that can perceive three dimensional space. Here is a list of some examples that people are using to develop gesture recognition software:
As mobile and embedded devices become more powerful, some software vendors have also built strong gesture recognition stacks that run on typical smartphones. These are suitable for applications where hand motions are confined to a small, well-defined space, like menu clicking in VR applications or interaction with an automobile's infotainment and navigation system while driving.
A number of software vendors are also providing SDK or middleware for application developers to easily integrate gesture and pose recognition to their applications. Here is a list of examples:
Gesture Control Applications
Besides AR/VR/MR, touch-less gesture control has a broad range of applications.
Digital signage and display walls in retail will continue to grow rapidly over the next few years. Rather than just rotating predefined digital content, gesture control enables digital signage to engage customers throughout their shopping process.
In combination with face recognition technology, digital signage can effectively function as a digital sales representative and provide a bridge between the online and offline experience. This has a positive impact on sales conversion, which is particularly important for brick-and-mortar businesses today.
Driver distraction is becoming a huge problem for traffic safety. Automobile manufacturers are coming up with more natural ways to control the infotainment system that keep the driver's eyes on the road.
Voice control is one way but it may not be desirable in certain situations (e.g. when you have a bad sore throat one day). A touch-less hand gesture interface reduces the need for drivers to reach out to the dashboard control panel. BMW’s camera based gesture control system is one example.
Drone manufacturers like DJI are making photo-taking drones that can fly autonomously from the user’s hand and return without using remote control. Hand gestures are the viable way to guide drone operations outdoor, like summoning the drone back by waving hands (Fig. 8).
In the age of IoT, a touch-less natural user interface is critical for everyday users to engage with intelligent devices and environments. In designing smart buildings, carefully designed interfaces that recognize common user gestures will greatly enhance user experience, productivity, and safety.
What Is Prescriptive Analytics?
Prescriptive analytics answers the question “What should/can be done?” by using machine learning, modeling, simulation, heuristics, and other methods to predict outcomes and provide decision options. Building upon descriptive and predictive analytics, prescriptive analytics not only provides forecasting and predictions about future events, but what could make them happen. Using this information, analysts can test the impact of strategic decisions to optimize their decision-making processes.
Why Is Prescriptive Analytics Important?Building on the work of descriptive and predictive analytics, prescriptive analytics can benefit a business by helping them:
- Make informed, fact-based decisions by using real-time and forecasted data
- Understand the likelihood of certain outcomes and the impact of decisions on those outcomes, and use that knowledge to plan what to do and how
- Save resources and boost efficiency by allowing AI to curate and process data into actionable scenarios
- Create reproducible and scalable processes to make decisions using near-time data
- Answer the most complex business questions such as demand forecasting, risk assessment, and what-if scenarios
How Prescriptive Analytics Works
Prescriptive analytics is the final step in business analytics and leverages the outcomes of several statistical methods and the power of AI. While descriptive analytics answers, “What has happened?” and predictive analytics answers, “What could happen?” prescriptive analytics answers, “What should we do?” and “How will our decisions affect future performance?” It gives analysts and decision makers the power to positively and confidently impact future outcomes through optimization models and iterative machine learning.
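At its core, the prescriptive step is an optimization over candidate decisions. This toy example searches prices against an invented linear demand model and recommends the most profitable one:

```python
def profit(price, unit_cost=4.0):
    """Toy what-if model: demand falls linearly as price rises."""
    demand = max(0.0, 100.0 - 8.0 * price)
    return (price - unit_cost) * demand

# The prescriptive step: evaluate candidate decisions, recommend the best.
candidates = [p / 4 for p in range(16, 49)]  # prices from 4.00 to 12.00
best = max(candidates, key=profit)
print(best, round(profit(best), 2))  # -> 8.25 144.5
```

Production systems replace the invented demand function with a predictive model fit to real data, but the decision-search structure stays the same.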
Prescriptive analytics can benefit any data-driven business and is highly utilized in fields where data is constantly changing and decisions can have wide-ranging impact.
- In healthcare, prescriptive analytics can help in both administration and patient care. A pharmaceutical company can use prescriptive analytics to reduce the costs of testing by finding the best subjects for a clinical trial, while a hospital might use it to provide attention to patients who need it most by seeing who has the highest risk of re-admission.
- In transportation, an airline can automatically adjust pricing and availability based on various factors including weather, demand, and oil prices.
- In publishing, an outlet can decide what to publish and if a piece will be popular based on search and social data for similar topics.
- In human resources, online training can be adjusted in real time based on an employee’s performance on each lesson.
Getting Started With Prescriptive Analytics
The Alteryx ML platform provides automated machine learning (AutoML) and feature engineering, giving users the ability to test ML models in a fully guided user experience, without the need to code complex models.
Alteryx ML allows users to:
- Uncover hidden relationships within your data with Automated Insight Generation
- Use algorithms like XGBoost, LightGBM, and ElasticNet to uncover the features in your data that have the highest impact on model performance
- Create understandable and explainable models and dashboards that can communicate feature importance, impact analysis, and simulation exploration
- Quickly create trusted models using pre-defined feature libraries
- Integrate models into business processes through Alteryx’s end-to-end business platform
The most popular concept of sustainability revolves around energy use, and while I have no issues with energy as an issue, I think in business the idea goes further — all the way to products and customers. I will leave the last couple of ideas for another time and concentrate on energy today. Actually, energy is a huge topic, and the only thing I want to focus on is the data center, not whatever you have in the garage.
The data center might sound like a funny place to start, but it is both germane to CRM and a great place to kick this off. As it turns out, the U.S. Department of Energy (DOE) has already done the heavy lifting for data centers, but for some reason few people know about it. Just for fun, see if you can guess how much of the energy that is generated for your data center goes to useful computation. While you’re thinking, let me give you some DOE data.
Stats and Specs
First off, the majority of the energy generated goes up the smokestack. Did you know that? And we're not talking about some close election's 51 percent in this case. In the DOE's presentation, a full 65 percent of the energy in the source material, let's say coal, never leaves the power plant via the transmission cables; it's just waste heat.
That leaves 35 percent to do some useful computing. Of that, 2 percent is lost on the power lines, and 33 percent makes it to your building. Naturally, you would expect the answer to be that 33 percent, but we’ve set the bar a bit higher by asking for useful computation. As luck would have it, the majority of the power that makes it to the data center is used for cooling and to run the lights. According to DOE, 15 percent or less of the energy that started out as coal does something as important as running your screensaver.
DOE has developed a set of metrics that we can all use to gauge our efficiency, and the metrics are so easy anybody can play. So in this case, for instance, Energy Efficiency = Useful Computation / Total Source Energy. A slightly more formalized version of this equation gives us the metric DCiE, or Data Center Infrastructure Efficiency, which nets out as Energy for IT Equipment / Total Energy for Data Center. Armed with an electric bill and some manufacturers' specs for your IT gear, you should be able to do your own analysis very easily.
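The arithmetic is simple enough to run against your own numbers (the figures below are invented for illustration):

```python
def dcie(it_equipment_kwh, total_facility_kwh):
    """Data Center Infrastructure Efficiency: the fraction of the facility's
    energy that reaches IT equipment. DOE cites <0.5 as typical and ~0.85
    as achievable best practice."""
    return it_equipment_kwh / total_facility_kwh

# Hypothetical data center: 450 MWh to servers out of 1 GWh total.
print(round(dcie(450_000, 1_000_000), 2))  # -> 0.45, below best practice
```

Comparing your DCiE against the 0.85 best-practice figure tells you how much of your bill is going to cooling, lighting, and conversion losses rather than computing.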
Here's where things get interesting. DOE has identified ways to improve your data center's performance, and while it notes that a typical DCiE is less than 0.5, best-practice results can be as high as 0.85. We're talking real money here. In a case study presented by DOE, Lucasfilm, a movie studio that specializes in animation, was able to save US$343,000 per year after implementation costs of $429,500. That means a one-time investment of over $400,000 provides a recurring savings of over $300,000, for a payback time of about 1.2 years.
Most importantly, that means energy savings for Lucasfilm of over 3 million kWh. Did you know that each kilowatt-hour generated from coal puts more than two pounds of carbon dioxide into the air?
Is this a big deal? Yes it is. The DOE Industrial Technologies Program is trying to improve the energy efficiency of U.S industry because it consumes 32 quadrillion BTU per year, almost one-third of all energy used in the nation, according to the report. A trillion is 1 followed by 12 zeros; a quadrillion has 15 zeros. This is the only number I know of in general use that’s bigger than the national debt.
I don’t have any data on the efficiency of a single data center compared to a cloud data center, but my instincts tell me that if we’re going to get to the best-practice level of energy efficiency in the data center, one of the easier ways to do it will be to consolidate data centers into ultra-efficient clouds. Clouds can use advanced technologies to cool a data center and employ slack capacity, thus leveraging the efficiency of using less gear to serve more users.
I saw some Gartner data recently showing that server deployments peaked in 2008. That could indicate the growing popularity of cloud computing, but it could also simply be another reminder of the recession and the slack capacity that frequently accompanies a slowdown. If the answer tracks the former explanation, it would be an example of the invisible hand of the market adjusting demand in the face of increasing costs. If it’s the latter, it could be an example of that same invisible hand adjusting supply by decreasing demand. Don’t you just love economics?
My big takeaways from this exercise are several:
- Saving energy is yet another benefit of cloud computing.
- There’s more work to be done assessing the costs and benefits, but DOE has taken a leadership position in offering some commonsense ideas.
- We can all save money in our data centers, and the DOE has some practical advice.
Lastly, sustainability is about making and saving money in business. If we keep this in mind, we’ll have fewer theological debates about what’s good for the planet and whether global warming is real. It will also make it easier to see how products and customers need to be seen as sustainable resources. We’ll never agree on everything but we should be able to agree on the advisability of making money.
Denis Pombriant is the managing principal of the Beagle Research Group, a CRM market research firm and consultancy. Pombriant's research concentrates on evolving product ideas and emerging companies in the sales, marketing and call center disciplines. His research is freely distributed through a blog and Web site. He is the author of Hello, Ladies! Dispatches from the Social CRM Frontier and can be reached at email@example.com.
As a scientific discipline, analytics came into existence a few decades ago. It has its origins in the government sponsored research for military purposes starting in the 1940s, like many other computing technologies. By the 1950s, large corporations started seeing it as a technology that could be taken to the market. The applications of predictive analysis (PA) have found utility in many areas, notably: weather forecasting, air travel optimization and credit risk evaluation. The increasing utility and acceptance of PA as a means to understanding trends encouraged academic circles to take greater interest in it. This in turn led to more research in modeling techniques and operations research in the 60s.
What Is a Data Dump, and Should You Worry About Them?
- By Greg Brown
- Sep 14, 2022
Some words and phrases in the modern vernacular sound intimidating to the uninformed. One such phrase is the data dump, which has a short and targeted history. When hearing the phrase, many people think their credit card or bank information has been stolen.
A data dump is usually a large amount of data or files transferred between two systems over a connected network. For example, several thousand personnel files need to be analyzed by the corporate HR system. That data is dumped, or transferred en masse, onto the server that needs the files.
Professional data managers often dump databases using Structured Query Language (SQL). A SQL dump is formatted as a collection of SQL statements, and replaying those statements recreates the tables and records of the database. Most database systems include utilities for dumping their data so it can be loaded onto another server; for MySQL, that utility is mysqldump.
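The same idea is visible in miniature in Python's standard library, where SQLite's iterdump() serializes a database as a stream of replayable SQL statements (table and names below are invented):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE personnel (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO personnel (name) VALUES (?)",
                [("Ada",), ("Grace",)])
src.commit()

# The "dump": every table and row rendered as replayable SQL statements.
dump_sql = "\n".join(src.iterdump())
print("CREATE TABLE" in dump_sql)  # -> True

# Replaying the dump on a second connection transfers the data wholesale.
dst = sqlite3.connect(":memory:")
dst.executescript(dump_sql)
print(dst.execute("SELECT count(*) FROM personnel").fetchone()[0])  # -> 2
```

mysqldump works the same way at larger scale: the output file is just SQL text that any compatible server can replay.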
An Alternative to the Traditional Data Dump
Building on the definition above, the phrase has an alternative slant. Suppose a lawyer's office wants to hide its client's wrongdoing from the police. Rather than sending only the small portions of evidence requested, the firm dumps massive amounts of data on the police. The goal of this kind of data dump is to bury the actual evidence in an endless stream of detail.
The practice of hiding small extraneous pieces of data that cloak the factual information has been used for decades. It is essentially hiding a needle in a haystack of needles.
Analyzed and Utilized
The term data dump first started to appear in research journals in 1965. Since then, the term has been associated with various business operations, and data manipulation has become commonplace. Systems everywhere could now work with transferred data as if it had come from the original server.
Pitfalls quickly began to emerge from the free-flowing data exchange. First and foremost, the overall accuracy of the data on the receiving end began to suffer. Secondly, with the transfer of large amounts of personal data, privacy suffered. And after an onslaught of massive amounts of raw information, most people left meetings frustrated.
A more effective means of conveying information is an approach that combines raw data with critical insights: the data story. Data stories provide businesses with a targeted amount of data and a narrative for understanding its meaning. Analyzing a dataset is not enough; giving the data a voice is all-important.
Data stories must focus on a critical point and let the rest of the data dump provide a backdrop that emphasizes the primary focus. Anyone loses interest if they must sift through mounds of data. On the flip side, a data story offers a clear destination.
Bad Side of Data Dumps
Not all data is created equal, and not all of it gets to its correct destination. Enter the dark web, hackers, and ransomware. In early 2020, over 20,000 email addresses and passwords were exposed from a data dump of several government-based health organizations, including the WHO.
Most of us know data dumping only as something associated with stolen credit card numbers, driver's license numbers, and email addresses. From its early beginnings, the data dump was thought of as a hacker's way of getting at personal information to harm everyone. Businesses' lax security has dominated tech forums and chat boards worldwide.
A few years ago, the public heard only of the millions of personal and business records being stolen regularly. Month after month, no good news came out of the tech sector. Finally, IT administrators made the data more secure and pushed the breaches out of the headlines.
Is terrible data dumping still around? Probably so, but the people who manage the database may not be.
Singular Data Dumps Still Haunt Our Memories
No matter the efforts to cover up or delete bad press, the Yahoo data breach remains the largest in history. By most accounts, over three billion user accounts and their associated information were stolen by hackers and ransomware cybercriminals.
It took Yahoo months of research to determine just how significant the breach was. However, the flood of bad headlines was splashed worldwide, and the damage was done. In 2014 another massive data dump from Yahoo gave the dark web 500 million accounts.
These extreme cases highlight the capabilities of hackers who want to steal data quickly by dumping it onto another server system. Statistics on data breaches are astonishing: roughly 68 records are stolen every second, at an estimated cost of $150 per record. The list of data breach figures goes on and affects nearly every person globally.
Data dumping became so problematic for companies that new protocols, security measures, and tighter administrative control became necessary. Hackers found it easy to breach large databases mainly because of deficient, inconsistent protection from system to system. Businesses have begun to improve: today, their data is conveyed like a story rather than a line item, and the underlying data is encrypted and secured.
The appetite for more information has become insatiable, and with it, the need for large databases of small, meaningful details. Every organization felt the need to store more data. The problem that arose, however, was a lack of IT professionals to manage the systems and the data itself. Over time, some databases became so corrupt that they were unstable; hackers no longer needed data dumping.
How bad is the data? According to a research study in 2016 by IBM, bad and corrupt data costs America nearly $3.1 trillion yearly.
Should We, as Consumers, Worry About Data Dumps?
Everyone online, or even just using a smartphone, will deal with data dumps on some scale. For example, suppose you want to find historical financial information on a company of interest so you can invest. You request three years of balance sheet information to determine if the dollars make sense.
The total cash and investments line says the company has plenty of short-term cash to weather an upcoming problem. However, instead of checking the information before it was posted, the IT admin said, “who cares,” and published the wrong years and cash figures without double-checking.
The “who cares” era is upon us, so be careful! Data dumps could end up costing you far more than you think. All it takes is a single breach and your data could fall into the wrong hands. | <urn:uuid:b04b10b1-77c9-45e0-b3ae-a04661599cff> | CC-MAIN-2022-40 | https://www.idstrong.com/sentinel/what-is-a-data-dump/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00102.warc.gz | en | 0.950978 | 1,323 | 2.765625 | 3 |
My blog earlier in the week touched on standardizing security legislation for Internet of Things (IoT) devices, leading to a few conversations on what this could potentially look like. The massive Distributed Denial of Service (DDoS) attack on October 21, 2016 raised questions regarding the security risks these devices represent and what can be done to stop the threat in the future.
Parameters for IoT Security Legislation
IoT devices themselves are not subject to traditional recall laws when their security posture cannot be mitigated. In other words, you cannot recall a DVR used in a cyber-attack the way you can a defective cell phone or faulty toaster just because its default password cannot be changed. With these characteristics in mind, the following parameters should be part of any potential future cyber security legislation that governs IoT devices:
- Internet connected devices should not ship with common default passwords
- Default administrative passwords for each device should be randomized and unique per device
- Changing of the default password is required before the device can be activated
- The default password can only be restored by physically accessing the device
- The devices cannot have any administrative backdoors or hidden accounts and passwords
- The firmware (or the operating system) of the device must allow for updates
- Critical security vulnerabilities identified on the device, for at least three years after the last date of manufacture, must be patched within 90 days of public disclosure
- Devices that represent an unmitigated security risk or fail to meet the requirements above can be subject to a recall
While I fully expect the list to evolve, and that some of my peers may object to entries or have some of their own (please send your ideas if you do), we cannot continue to allow insecure devices to be connected to the Internet. This could jeopardize the infrastructure that we have all become so dependent upon.
While we must continue to invest in defensive technologies to stop threats already deployed, and threats from devices deployed abroad, if we do not act soon, the rapid adoption of insecure IoT devices could leave us with more potential attack vectors than actual legitimate devices. A simple combination of password management, vulnerability management and asset management can go a long way in starting this process.
Is it Time for the Government to Get Involved?
If you consider the potential address space of IPv6, and the potential adoption of Internet-connected devices from light bulbs to cameras, we need to adopt basic safe computing for all devices in order to mitigate potential botnet threats from IoT devices like the one we experienced last week. The sheer thought of our Internet infrastructure being disrupted by insecure devices sold en masse from a foreign nation raises more questions than answers, and no current trade or legal methods exist to stop them.
After all, it has now been proven these commercial devices can be weaponized with malware and target the largest companies in the United States and cause millions in financial losses. It is time for our government to step in and mandate the basics.
What are your thoughts? Let’s keep this important conversation going.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition where he served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook. | <urn:uuid:e53ceeeb-693d-456a-bfe9-21a368b8be71> | CC-MAIN-2022-40 | https://www.beyondtrust.com/blog/entry/iot-security-legislation-recommended-parameters | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00102.warc.gz | en | 0.939845 | 834 | 2.5625 | 3 |
New Photonic Chip for Isolating Light May Be Key to Miniaturizing Quantum Technology
(Phys.org) Light plays a critical role in enabling 21st century quantum information applications. For example, scientists use laser light to precisely control atoms, turning them into ultra-sensitive measures of time, acceleration, and even gravity. Currently, such early quantum technology is limited by size — state-of-the-art systems would not fit on a dining room table, let alone a chip. For practical use, scientists and engineers need to miniaturize quantum devices, which requires re-thinking certain components for harnessing light.
Now IQUIST member Gaurav Bahl and his research group have designed a simple, compact photonic circuit that uses sound waves to rein in light. The new study, published in the journal Nature Photonics, demonstrates a powerful way to isolate, or control the directionality of, light. The team’s measurements show that their approach to isolation currently outperforms all previous on-chip alternatives and is optimized for compatibility with atom-based sensors.
“Atoms are the perfect references anywhere in nature and provide a basis for many quantum applications,” said Bahl, a professor in Mechanical Science and Engineering (MechSe) at the University of Illinois at Urbana-Champaign. “The lasers that we use to control atoms need isolators that block undesirable reflections. But so far the isolators that work well in large-scale experiments have proved tough to miniaturize.”
Bahl’s team demonstrated a new non-magnetic isolator that turns out to be simple in design, uses common optical materials, and is easily adaptable for different wavelengths of light.
“We wanted to design a device that naturally avoids loss, and the best way to do that is to have light propagate through nothing. The simplest bit of ‘nothing’ that can still guide photons along a controlled path is a waveguide, which is a very basic component in photonic circuits,” said Bahl.
That is only the first half of the design because for isolation, the light must be simultaneously blocked in the opposite direction.
The team’s measurements revealed that nearly every photon moves through the waveguide in the forward direction, while having only a one-in-ten-thousand chance of making it through backwards. This means that the design reduced losses, or undesirable light absorption, to nearly zero, which has been a long-standing problem with previous on-chip isolators. The data show that the new devices exhibit record-breaking performance for on-chip isolation and operate as well as the larger magnet-based devices. In addition, the approach is flexible and can be used for multiple wavelengths without changing the starting material.
Difference : Web Application Firewall (WAF) vs Network Firewall
While deliberating on the type of security to be employed for web-facing applications or e-commerce servers, designers and administrators may find it challenging to decide whether a network firewall or a web application firewall addresses the security requirements of such a deployment.
While one school of thought may argue that perimeter security (provided by network firewalls) is the essential element for securing traffic flow, others may favor the web application firewall, considering its ability to protect against Layer 7 attacks.
Related – Firewall Security Level
So, let's first understand the basics of the WAF (Web Application Firewall) and the network firewall.
WAF or Web Application Firewall –
A Web Application Firewall is a network security firewall solution that protects web applications from HTTP/S and web application-based security vulnerabilities.
Some of the most common types of attacks which are targeted at web servers (Web Applications) include –
- SQL injection attacks
- cross-site scripting (XSS) attacks
- DDoS attacks.
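To make the first item in this list concrete, here is a minimal, self-contained sketch of SQL injection, the kind of Layer 7 payload a WAF is built to detect (the table and values here are hypothetical):

```python
import sqlite3

# Toy in-memory database to show why WAFs inspect Layer 7 payloads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: attacker input is concatenated into the SQL text, so the
# injected OR clause matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a parameterized query treats the input as a literal value, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(leaked)  # [('s3cret',)] -- the injection succeeded
print(safe)    # [] -- no user is literally named "' OR '1'='1"
```

A WAF can block such payloads at the HTTP layer, whereas a network firewall, inspecting only Layers 3-4, passes them through untouched.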
Know more about WAF
Network Firewall –
Network Firewall is a device which controls access to a secured LAN network to protect it from unauthorized access.
A firewall acts as a filter that blocks illegitimate incoming traffic from entering the LAN and causing attacks.
The main purpose of a firewall is to separate a secured area (Higher security Zone / Inside Network) from a less secure area (Low-security Zone / Outside Network etc.) and to control communications between the two.
A firewall also controls inbound and outbound communications across devices.
Now that we have clarity of fundamentals of WAF and Network Firewall, below table references on how both technologies differ from each other
WAF vs Network Firewall –
Detailed comparison table of web application firewall vs firewall –
| Parameter | Web Application Firewall (WAF) | Network Firewall |
|---|---|---|
| Philosophy | A network security solution that protects web applications from HTTP/S and web-application-based security vulnerabilities. | A device that controls access to a secured LAN to protect it from unauthorized access; it acts as a filter that blocks illegitimate incoming traffic from entering the LAN and causing attacks. |
| OSI layer coverage | Layer 7 | Layers 3-4 |
| Modes of operation | Active inspection; passive mode | Transparent mode; routed mode |
| DDoS protection | Application layer | Basic, at the network layer only |
| Target objects protected | HTTP/HTTPS-based servers and applications placed in Internet-facing zones of the network firewall | User and organizational IT assets, including applications, servers, and management |
| Placement in network | Close to web/Internet-facing applications | On the perimeter of the network (commonly the Internet edge) |
| Web application protection | All-encompassing, including complete coverage of the application layer | Minimal |
| Access control | Not possible | Possible |
| Algorithms | Signature-based; anomaly detection | Packet filtering; stateful/stateless inspection |
| Related attacks protected against | SQL injection attacks; cross-site scripting (XSS) attacks; DDoS attacks | Attacks from less secured zones; unauthorized users accessing private networks |
From the broadest perspective, zero-trust principles can be applied to the entire application development lifecycle, including design of the system, hardware platforms used, and procurement procedures.2 However, this paper discusses the operational aspects of implementing zero trust for defending applications and data in runtime.
Broadly speaking, zero trust security uses technologies to achieve one of three distinct goals:
The following graphic depicts this overall zero trust security transactional model, with the following sections diving deeper into each class of technologies.
The first two technologies—authentication and access control—are closely related and are directly motivated by the principles of “explicitly verify” and “least privilege,” since these technologies are at the core of enforcing “Who can do What.” More sophisticated implementations of authentication watch the ongoing behavior of an actor, capturing the mindset of “continuously assess.”
Authentication technologies are all about building confidence in an attested identity: Who is acting in a transaction. The authentication process has three components:
The most basic form of attestation is often referred to as a “user”—a human, or agent acting on behalf of a human, that wishes to perform a transaction. However, in the case of zero trust used within an application an actor might be a workload (such as a process, service, or container), so the generalized concept of identity should include such actors. In other cases, the notion of Who includes not just the human or workload, but additional considerations or dimensions of identity. From that perspective, additional dimensions of identity might include the device or platform of the user/workload, or the ecosystem being used for the interaction or the location of the agent. For example, a user “Alice” may be on a PC tagged as “ABC- 0001” using a specific, fingerprinted browser instance, sourced from IPv4 address 10.11.12.13.
Some systems allow unauthenticated users, sometimes referred to as “guests” or “anonymous” users, to perform a limited set of transactions. For such systems, the additional steps of proving identity and the system rendering a verdict is not relevant. However, for any specific attested identity, the following methods are commonly used to support that attestation:
Often, if a high degree of confidence is required, multiple methods are used. This is evidenced in the Google BeyondCorp model3, which requires multi-factor authentication (MFA) before allowing higher value transactions. The more sophisticated authentication solutions associate a “confidence” with each identity and specify a minimum confidence level for each type of transaction, based on the value and risk of the transaction.
Finally, note that some of these methods are not static, one-shot actions but can and should be ongoing as per the principle of “continuously assess.” In such cases, the confidence score assigned to the identity attestation can change up or down over time. For example, the browser fingerprint or IP address may change within a single user session, which could be viewed as suspicious, reducing confidence; or as more data is collected on the actor’s behavior in a session, the confidence score may either increase or decrease depending on how the current behavior compares to past observations.
Dynamic authentication can work hand in hand with access control in more advanced systems. As the first level of this interaction, the access control policy can specify a minimum confidence score for different classes of transactions, as mentioned earlier. The next level of the interaction allows the access control subsystem to provide feedback to the authentication subsystem, typically asking for additional authentication to increase the confidence score to the minimum threshold.
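As a hedged illustration of that interaction, the sketch below shows confidence-gated access control with a step-up path back to authentication. The action names and thresholds are invented for the example, not taken from any product:

```python
# Minimum identity-confidence required per transaction type (hypothetical).
TRANSACTION_MIN_CONFIDENCE = {
    "view_balance": 0.5,    # low-value transaction: modest confidence suffices
    "transfer_funds": 0.9,  # high-value transaction: demand near-certainty
}

def authorize(action: str, identity_confidence: float, can_step_up: bool) -> str:
    """Return the access-control verdict for one transaction."""
    required = TRANSACTION_MIN_CONFIDENCE[action]
    if identity_confidence >= required:
        return "allow"
    if can_step_up:
        # Feedback loop: ask the authentication subsystem for more proof
        # (e.g. MFA) to raise the confidence score above the threshold.
        return "challenge"
    return "deny"

print(authorize("view_balance", 0.6, can_step_up=False))    # allow
print(authorize("transfer_funds", 0.6, can_step_up=True))   # challenge
print(authorize("transfer_funds", 0.6, can_step_up=False))  # deny
```

In this sketch, a "challenge" verdict is the access control subsystem's request back to the authentication subsystem rather than a final denial.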
After using authentication techniques to ascertain Who is acting in a transaction, the next questions are: What is that actor allowed to do? And to Whom? This is the purview of access control technologies.
To take a physical security analogy, imagine you wanted to visit a military base. After the guards confidently determine whether you are a civilian, politician, or soldier, they would use that determination to decide which buildings you could enter and whether you could bring a camera into each building that you might be allowed to enter. The policy governing those choices might be very coarse and apply to all buildings (for example, “politicians can enter any building”) or might be more fine-grained (such as “politicians can only enter building <A> and <B> but can only bring cameras into <A>”).
Applied to the cybersecurity context, access control techniques should embody the zero trust principle of “least privilege.” In other words, the optimal access control policy would only allow exactly those privileges that the actor requires and disallow all other privileges. Additionally, an ideal robust policy would be conditional on a specific minimum level of confidence in the authenticity of the actor’s identity, with the confidence threshold specified at the granularity of each allowed privilege.
Therefore, value of an access control solution can be judged by how closely it aligns to these ideals. Specifically, a zero trust security solution must include access control and should evaluate the access control technology along the dimensions depicted below and described thereafter.
Noting the principle of “continuously assess (and reassess),” any belief in the authenticity of the actor should adjust over time. In a simple solution it may simply be a timeout; in more sophisticated systems the confidence could vary based on observations of the actor’s behavior over time.
If authentication and access control are implementations of the “always verify” and “least privilege” mindset, then visibility and contextual analysis are foundational to the “continuously assess” and “assume breach” principles.
Visibility is the necessary precursor to analysis—a system cannot mitigate what it cannot see. Thus, the efficacy of the zero trust security solution will be directly proportional to the depth and breadth of telemetry that can be gathered from system operations and outside context. However, a modern visibility infrastructure will be capable of providing much more potentially useful data, metadata, and context than any reasonable unassisted human will be able to deal with in a timely manner. As a result of desires for both more data and the ability to distill that data into insights more quicky, a key requirement is machine assistance for the human operators.
This assistance is typically implemented using automated algorithms that span the spectrum from rule-based analysis to statistical methods to advanced machine learning algorithms. These algorithms are responsible for translating the fire hose of raw data into consumable and operationalized situational awareness that can be used by the human operators to assess and, if necessary, to remediate. For this reason, ML-assisted analysis goes hand in hand with visibility.
The generalized pipeline from raw data (visibility) to action (remediation) is shown below:
Visibility is the implementation—the “how”—of the “continuously assess” zero trust principle. It includes keeping an inventory of available data inputs (Catalog) and real-time telemetry plus historical data retention (Collect).
The maturity of a zero trust visibility implementation should consider four factors:
The latency provides a lower bound to how quickly a potential threat can be responded to. A zero trust solution’s latency should be measured in seconds or less; otherwise, it is quite likely any analysis—no matter how accurate—will be too late to prevent the impact of the exploit, such as data exfiltration/encryption or unavailability due to resource exhaustion. More sophisticated systems may allow both synchronous and asynchronous mitigations. Synchronous mitigation would inhibit completion of the transaction until full visibility and analysis are completed. Because synchronous mitigation is likely to add latency to the transaction, this mode of operation would be reserved for particularly anomalous or risky transactions, while allowing all other transactions to send telemetry and be analyzed asynchronously.
This concern is relevant if data arrives from multiple sources or types of data sensors, which is a common scenario. This factor typically breaks down into two sub-concerns.
One key value derived from a high-quality visibility solution is the ability to discover suspicious activities as an indicator of possible breach. To do so effectively the solution must receive telemetry across all the relevant “layers” of application delivery: the application itself, of course, but also the application infrastructure, the network infrastructure, any services applied to or used by the application, and even the events on the client device. For example, identifying a user coming in from a new device, never seen before, may be slightly suspicious on its own; but when combined with network information (such as GeoIP mapping from a foreign country) the suspicion level goes up higher. This suspicion level is manifested as a lower confidence score in the identity of the user. In the context of a zero trust security policy, when this actor attempts a high-value transaction (such as transfer of funds to a foreign account), the access control solution can choose to block the transaction, based on the low confidence.
As it relates to zero trust mindset, the deeper and more complete the visibility solution is, the more effective the system can be in appropriately limiting transactions and detecting breaches
Finally, any collection of data must be compliant with statutory and licensing requirements relating to the security, retention, and use of data. Therefore, a robust visibility solution must address each of these needs. Understanding the constraints on data use implied by governance must be factored into a zero trust visibility solution. For example, if an IP is considered Personally Identifiable Information (PII), then the use and long-term retention of IP addresses for analysis must cater to permissible use of the IP addresses.
In addition to visibility, the other machinery required to implement “continuously assess” is the analytical tooling required to perform meaningful assessment; that is, to have assessment that can be operationalized by a zero trust solution.
One consideration for analysis is the scope and breadth of the input data. The inputs to the analysis algorithms can be limited to a single stream of data from a single source, or can look across multiple streams, including from various data sources and all layers of the infrastructure and application.
A second particularly relevant aspect of analysis in the zero trust framework is dealing with the volume and rate of data ingested, which will exceed the capability of any human to digest. Therefore, some sort of machine assistance to form human digestible insights is required. Once again, the sophistication of the assist can be described as a progression.
As with the rules-based approach, ML assistance can be for detection only or it can be tied to automatic remediation. Additionally, ML assistance can be used in conjunction with a rules-based system, where the ML “verdict” (or opinion or confidence) can be used as an input into a rule, such as “do action <X> if <ML evaluator [bot_detector_A] reports bot with confidence greater than 90%>.”
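A minimal sketch of such a rule, with hypothetical thresholds and action names, might look like this:

```python
# Hedged sketch of a rules-based remediation step that consumes an ML
# detector's verdict as one input. Thresholds and actions are hypothetical.
def remediation_action(ml_bot_confidence: float, transaction_value: float) -> str:
    if ml_bot_confidence > 0.90:
        return "block"         # high-confidence bot verdict: stop the transaction
    if ml_bot_confidence > 0.60 and transaction_value > 10_000:
        return "step_up_auth"  # risky and valuable: demand more proof first
    return "allow"             # default: let the transaction proceed

print(remediation_action(0.95, 100))     # block
print(remediation_action(0.70, 50_000))  # step_up_auth
print(remediation_action(0.70, 100))     # allow
```

The middle branch shows the risk-based idea: the same ML confidence can trigger different actions depending on the value of the transaction.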
The final tenet of the zero trust mindset is to “assume breach.” To be clear and provide perspective, properly implemented authentication and access control methods are effective at preventing the overwhelming majority of malicious transactions. However, one should, out of an abundance of paranoia, assume that the enforcement mechanisms of authentication and access control will be defeated by some sufficiently motivated or lucky adversary. Detection of breaches, necessary for responding to these escapes in a timely manner, requires visibility and machine-assisted analysis. Therefore, precisely because the other enforcement mechanisms will occasionally be defeated, visibility feeding ML-assisted contextual analysis is a critical input to the zero trust backstop of risk-based remediation.
For the “false negative” cases where an actual malicious transaction did defeat authentication and access control, the mechanism of automated risk-based remediation should be used as a backstop. But because this technology is applied as a backstop against transactions that passed the prior enforcement checks, there is a higher concern around incorrectly flagging what was, in truth, a “true negative” (a valid, desirable transaction) into a “false positive” (incorrectly flagged as malicious transaction). To mitigate this concern, any remediation actions triggered by a belief in possible maliciousness, that somehow was not caught by authentication or access control, should be based on the following three factors4:
Zero trust security is a more modern take on prior approaches to security such as defense in depth, extending the prior art by taking a transaction-centric view on security—Who is attempting to do What to Whom. This approach enables securing not only external access to an application but is applicable to protecting the application internals as well.5 Given this foundational transactional view, zero trust security is rooted in a set of core principles that are used to defend applications within today’s more complex and challenging environment, with the principles then mapped to a set of subsystem-level solutions, or methods, that embody those principles. The core principles and how they map to solution methods are summarized below.
These tools—the methods of authentication, access control, visibility, contextual analysis, and risk-aware remediation—are necessary and sufficient to prevent a wide variety of attack types.
2Zero trust can, and should, be applied even “to the left” of the CI/CD pipeline. Tools such as vulnerability assessment tools, static analysis, CVE databases, open-source code reputation databases, and supply chain integrity monitoring systems are consistent with the zero-trust mindset.
4Note that the line between contextual, risk-aware access control and the general topic of risk-aware remediation is a fuzzy one, and some overlap does exist.
5Often referred to as “East-West” intra-app protection, as opposed to “North-South” to-the-app protection. | <urn:uuid:9be938a8-3185-4881-aa91-a55f6cc27b33> | CC-MAIN-2022-40 | https://www.f5.com/es_es/services/resources/reports/office-of-the-cto-techniques-and-technologies-for-zero-trust-adoption | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00102.warc.gz | en | 0.922145 | 2,862 | 2.640625 | 3 |
Last month, the Russian tech giant Yandex was hit by the largest DDoS attack in history. The record-breaking attack was likely just a test drive.
The distributed-denial-of-service (DDoS) attack against Yandex that was carried out from August to September clocked in at a humongous 22 million requests per second (RPS).
"It's just as scary to think that tens of thousands of devices are fairly easily commandeered and can wreak this havoc with a very simple tactic of DDOS. Many of those hosts are in critical areas, no doubt, so other forms of attack are even scarier."-Stel Valavanis
What is Mēris?
The botnet behind the attacks was dubbed Mēris, which means 'plague' in Latvian. The name might have originated because the attack against Yandex employed mainly MikroTik network devices manufactured in Latvia.
The name is also reminiscent of the infamous Mirai ('future' in Japanese) botnet. First discovered in 2016, Mirai used malware that infected Linux-operated devices, then self-propagated via open Telnet ports to infect other machines.
Mēris, however, is potentially more potent and more dangerous than its well-known predecessor. For example, previous botnets were made of IoT devices such as IP cameras, with relatively limited processing power and networking capabilities.
Meanwhile, the Mēris botnet is made up of professional networking equipment. The make-up of the botnet means that perpetrators behind the botnet have access to a lot more processing power and high-speed ethernet, allowing for one record-breaking attack after another.
According to a MikroTik blog entry, in the recent attack against Yandex, the botnet abuses a patched vulnerability (CVE-2018-14847) that affected RouterOS, the operating system used by MikroTik devices.
A blog entry by MikroTik claims that 'the attacker is reconfiguring RouterOS devices for remote access, using commands and features of RouterOS itself.' The worst part is that patching up now won't undo the damage, as a password change and firewall update are also necessary to secure a device.
MikroTik also noted that a specific type of malware aims to reconfigure their devices from Windows computers from inside the network. The malware explicitly targets the aforementioned CVE-2018-14847 vulnerability.
So far, the patched vulnerability is the only confirmed way the botnet could infect new devices. However, it's not yet possible to rule out an unknown zero-day vulnerability or brute-force password attacks that allow the botnet to spread.
"This gives the attackers a much more diverse group of victims to target for DDoS extortion campaigns. Specifically, they can target larger organizations and demand significantly more in their extortion efforts."-Andrew Shoemaker
How big is it?
Our researchers estimate around 250,000 devices in the botnet, with another 40,000 devices still exposed to abuse via the CVE-2018-14847 vulnerability. For now, those devices appear to be uninfected.
With a quarter of a million devices, the maximum capacity of the botnet stands at 110 million requests per second. This means that the largest DDoS attack in history demonstrated only 20% of the Mēris botnet's capabilities.
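Those headline numbers are easy to sanity-check; the per-device rate computed below is just the implied average, not a measured figure:

```python
devices = 250_000                # estimated devices in the Meris botnet
max_capacity_rps = 110_000_000  # estimated maximum requests per second
yandex_attack_rps = 22_000_000  # the record attack against Yandex

# Implied average load per device and the share of capacity actually used
per_device_rps = max_capacity_rps / devices
share_used = yandex_attack_rps / max_capacity_rps

print(per_device_rps)  # 440.0 -> roughly 440 requests/second per device
print(share_used)      # 0.2   -> the "only 20% of capabilities" figure
```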
Worryingly, that implies that previous attacks were merely equipment-testing events, not meant to take down their targets. CyberNews researchers note that the attackers constantly rotated the devices employed in the assault. Moreover, the attacks themselves were usually short and terminated at the attackers' initiative.
Interestingly, compared to older botnets, Mēris uses a novel way of abusing the network stack to carry out DDoS attacks. Whereas attackers previously abused the Network Layer, the Mēris botnet takes on the Application Layer.
Even though this tactic makes it virtually impossible to DDoS a target using a spoofed IP, it also makes it extremely hard to mitigate since the requests are indistinguishable from ones a legitimate source would make.
To confuse the target's defenses even further, the domains used for the botnet have an HTTPS-proxy service running. These domains likely function as proxies for the real C2 servers used by the attackers. However, the HTTPS-proxy service further complicates the target's ability to recognize whether the request comes from a legitimate source or a botnet.
Crypto mining gone wild?
As for the botnet's origins, there's no definitive answer so far. However, after investigating the domains used for the Mēris botnet, CyberNews researchers found that the same domains were used to run the U6 botnet a couple of years ago.
The U6 was also targeting MikroTik devices, just for a different purpose – crypto mining.
Although it's impossible to know for sure, there's a chance that the operators of the U6 botnet either decided to change the course of their activities or sold the botnet to operators with different goals in mind.
That might explain why Mēris has used only a fraction of its potential and why the recent attacks resemble a test drive rather than a full-fledged offensive. It's also possible that while most of the botnet is still used to mine cryptocurrencies, parts of it were tested for DDoS attacks.
"Despite lots of awareness and cleanup, there still are lots of multipliers out there that can be exploited and of course lots of pwnable machines."-Stel Valavanis
Either way, the rise of Mēris spells trouble in an already intense period. Powerful, high-bandwidth devices can overwhelm major networks if those networks do not have advanced DDoS mitigation in place.
According to Stel Valavanis, founder and CEO of onShore Security, a cybersecurity company, we're all far from done on the mitigation front despite combined efforts to alleviate DDoS attacks.
"Despite lots of awareness and cleanup, there still are lots of multipliers out there that can be exploited and of course lots of pwnable machines," Valavanis told CyberNews.
The newly developed capabilities of the Mēris botnet open many opportunities for threat actors to abuse their recently found power. At the current scale, the botnet is powerful enough to imitate large attacks intended to serve only as a distraction from the real goals of the perpetrators.
"It's just as scary to think that tens of thousands of devices are fairly easily commandeered and can wreak this havoc with a very simple tactic of DDOS. Many of those hosts are in critical areas, no doubt, so other forms of attack are even scarier," Valavanis told CyberNews.
Size matters with DDoS attacks, which means the Mēris botnet can take down even massive networks, such as those of internet service providers (ISPs).
Andrew Shoemaker, the founder and CEO of NimbusDDoS, a DDoS attack simulation platform, says the size of new botnets makes absorbing such attacks particularly challenging.
"This gives the attackers a much more diverse group of victims to target for DDoS extortion campaigns. Specifically, they can target larger organizations and demand significantly more in their extortion efforts," Shoemaker told CyberNews.
A DDoS attack caused internet outages in New Zealand last month when the country's third-largest internet service provider was hit. The attack cut off around 15% of the country's broadband customers from the internet at one point.
Recent reports show that 2021 will be yet another record year for the number of DDoS attacks carried out. Threat actors launched approximately 2.9 million DDoS attacks in the first quarter of 2021, a 31% increase from the same time in 2020.
During DDoS attacks, vast numbers of "bots" attack target computers. Hence, many entities are attacking a target, which explains the "distributed" part. The bots are infected computers spread across multiple locations. There isn't a single host. You may be hosting a bot right now and not even know it.
When DDoS attackers direct their bots against a specific target, it has some pretty unpleasant effects. Most importantly, a DDoS attack aims to trigger a "denial of service" response for people using the target system. This takes the target network offline.
If you've repeatedly struggled to access a retail website, you may well have encountered a denial of service. And it can take hours or days to recover from.
More from CyberNews
Subscribe to our newsletter | <urn:uuid:6e742f1e-6c0d-4ce8-8768-301d6d3c1702> | CC-MAIN-2022-40 | https://cybernews.com/security/weve-seen-just-the-tip-of-the-meris-botnet-iceberg/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00102.warc.gz | en | 0.948916 | 1,766 | 2.703125 | 3 |
Thales is encouraging the use of digital identities to promote inclusion worldwide. The company noted that there are still around one billion people who do not have any form of identification, and that many of those people are located in South Asia and sub-Saharan Africa.
The problem, according to Thales, is that people without an ID tend to have more trouble gaining access to essential services like healthcare and education, and face more barriers in their everyday lives. For example, proof of identity is required to apply for a loan, sign up for a phone plan, or to register a new business. As a result, those without an ID are pushed further to the margins of society because they do not have legitimate ways to participate.
Undocumented individuals (and women and children in particular) are also more vulnerable to slavery, trafficking, and other forms of abuse. The official system does not know that they exist, and therefore does not allocate sufficient resources to protect them.
Thankfully, digital identity offers a potential solution to the problem, since it gives people a way to prove their identity without forcing them to carry a physical document that can be lost or stolen. In that regard, digital identities are more cost effective than their paper counterparts because they can be distributed and stored electronically.
However, Thales stressed that digital transformation is still a monumental task, especially since different countries are facing different challenges with regards to identification. Organizations like the World Bank are now supporting digital identity projects all over the world, and are doing so in accordance with Principles on Identification for Sustainable Development. Thales nevertheless argues that those organizations should work with the private sector, which can help build the infrastructure needed for a large-scale digital identity system.
“This will take years to build, and it is required to adapt implementation to the context and capacities of different groups,” said Thales Identity Systems Specialist Jaume Dubois. “Proactive registration is essential, and working with trusted partners in developing digital identities that leave nobody behind.”
The GSMA recently established an innovation fund to promote internet usage in Africa and Asia. MarketsandMarkets, meanwhile, has predicted that the market for digital identity solutions will reach $30.5 billion by 2024. For its part, Thales launched its own automated Identity Verification Suite in October of last year. | <urn:uuid:7067d9db-c506-417d-9dd4-19fb38aa32c6> | CC-MAIN-2022-40 | https://mobileidworld.com/thales-looks-raise-inclusion-digital-ids-020405/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00102.warc.gz | en | 0.94862 | 468 | 2.609375 | 3 |
Lives today are fast, over-scheduled, and busy, and over time that pace can become a drain no matter how organized you may think you are. One of the most underrated activities that can help you focus and bring context to everything is personal journaling. Journaling is not new, but how you journal is important. A personal journal is not note-taking, a daily task checklist, or anything meant to be structured; there are other productivity activities and tools for those parts of your life.
It could also become a new creative hobby; maybe you will unlock your writing potential.
A personal journal is private, never to be seen by anyone, a dump of raw thoughts and emotions around the aspects of your life. Positive, neutral, negative, planning, solving obscure problems, and all the other things you may talk to yourself or the little voice in your head whispers. Journaling helps bring those out of the corners of your mind, processed and written out. There have been many studies and reports showing the benefits of writing things down.
- Clarify your thoughts and feelings
- Get to know yourself better
- Reduces stress by seeing your thoughts on the screen or paper in words
- Solve problems more effectively
- Improve memory
- Feel calmer
- Be more creative
- Track patterns
- See trends toward life goals
- Let go of thoughts and ideas
- and more…
How do you get started?
You just do. There is no magic formula for what you should write, but there are aspects of journaling you should consider to make it worthwhile for you.
- Write every day for at least 5-10 minutes.
- Try to write twice a day, once as you start the day and once at the end and reflect.
- Write in a private, quiet area free from distractions.
- Write for you, no one else needs to see what you write.
Journaling has a negative connotation that it’s only for people dealing with negative issues in their life. This is not true. Journaling can also be great fun and nostalgic by chronicling your daily life in a very personal way.
Record the events of an exciting kids' sports game. Reflect on your job, the meetings you have, and the people you meet in a very personal, raw way that you would never say or record anywhere else. Find a journaling app where you can attach photos and audio files. Use tags to separate the private entries from the chronicle ones that you may want to print and share with others.
Tip – When you write about anyone else, record their full name and all the details you can. When you reflect back on years-old entries, this will be helpful.
I have been journaling on and off for years. Now I do it almost daily, covering personal life, work, and fun.
There are many apps and methods, from a paper notebook to full-blown software suites. I prefer something that lets me journal anywhere with any device, whether it's my PC, my phone, or a website. Mostly it's my phone.
I have switched apps and methods a few times over the years, but I have landed on Journey as my go-to. This is not a sponsored or affiliate post, just a recommendation based on what I use, which you may find useful for your journaling.
End of line.
Binary Blogger has spent 20 years in the Information Security space currently providing security solutions and evangelism to clients. From early web application programming, system administration, senior management to enterprise consulting I provide practical security analysis and solutions to help companies and individuals figure out HOW to be secure every day. | <urn:uuid:c1177ad1-a234-416d-86a4-632c8398d26c> | CC-MAIN-2022-40 | https://binaryblogger.com/2020/03/11/how-writing-a-journal-can-organize-your-life-and-mind/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00302.warc.gz | en | 0.947193 | 844 | 2.828125 | 3 |
Create a Value Type variable using direct assignment
Create a value type variable using direct assignment when the values are generally known and need to be retrieved repetitively for different commands in the task.
1. In the Workbench, click the Variable Manager icon at the top or on the tab on the right side. The Add Variable window is displayed, with the Create New Variable option selected.
2. Select type Value. Generally, this is the default selection.
3. Enter a name for the variable. The name must begin with an alphabetic character and cannot contain spaces.
4. Select the Value radio button. This is the default.
5. Specify an initial value. This value can be character or numeric.

After the variable is saved, it is displayed in the Local Variables section of the Variable Manager.
Pop quiz: which of the following statements about decisions do you agree with?
- You need at least thirty data points to get a statistically significant result.
- One data point tells you nothing.
- In a business decision, the monetary value of data is more important than its statistical significance.
- If you know almost nothing, almost anything will tell you something.
Believing the first two statements will limit your effectiveness in using statistics in a business decision. The second two statements capture one of the important points in Applied Information Economics: small data is often very useful in decision making when there is great uncertainty. This article presents three examples of how a sample of just five data points can tip the scales in a business decision.
Example 1: length of employees’ commutes.
Decision: management is deciding on a proposal and wants to measure the benefits of the proposed organizational transformation.
In their business case, the variable “time spent commuting” has come back with a high information value. If the average time spent commuting is more than 20 minutes, then the decision has an acceptable ROI profile. They randomly select five people and ask them their commute times.
Using our “rule of five” the 90% confidence interval for the median of our population of employees is 20-55 minutes. Our 90% confidence interval for the mean of the population is 21.2 to 46.8 minutes. This was calculated using our Small Sample calculator found here.
[Wonk alert!] In the small sample calculator, we are using a simplifying assumption that the distribution is normally distributed, which obviously is not always the case. Even in the example given, it is unlikely that the distribution of drive times is normally distributed, but this still provides a reasonable approximation for a 90% range estimate for mean drive time.
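The "rule of five" itself is simple probability: the sample's min-max range misses the population median only if all five random draws land on the same side of the median. A short sketch (the function name is ours):

```python
# Probability that the population median lies between the minimum and
# maximum of a random sample of n values: each independent draw falls
# below the median with probability 1/2, so the interval fails only when
# all n draws land below it or all n land above it.
def rule_of_n(n: int) -> float:
    return 1 - 2 * 0.5 ** n

print(rule_of_n(5))  # 0.9375
```

So the min-max of five samples (the 20-55 minute range above) is actually a 93.75% interval, comfortably above the 90% target.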
Example 2: minor league, major decision
Decision: a baseball team manager needs to decide if he should send a player back to the minor leagues.
The manager has brought a player up from the minor leagues, and the player has had 5 at bats and zero hits. The manager has a minimum required batting average of .215 for players in their first year in the majors. Are five at bats without a hit enough data to be 90% confident the player should be sent back to the minor leagues?
For this type of data we would use an inverse beta distribution to calculate the 90th percentile of the distribution of batting averages. [Nerd panic! Note this isn’t quite the same as a 90% confidence interval which would be the range from the 5th percentile to the 95th percentile] Entering an alpha of 1 (no hits) and a beta of 6 (5 misses) returns a 90th percentile of .319. The manager can be 90% confident that the player’s batting average is below .319 but cannot be 90% confident that the player’s batting average will be less than .215. However, to get there requires just 4 more at bats with no hits. No pressure young man!
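Because alpha = 1 here (zero hits), the inverse beta has a closed form, so the manager's numbers can be checked without a stats library. This sketch is specific to the no-hits case, and the function name is ours:

```python
# For Beta(alpha=1, beta=misses+1) the CDF is F(x) = 1 - (1 - x)**beta,
# so the p-th percentile is simply 1 - (1 - p)**(1 / beta).
def batting_percentile_no_hits(misses: int, p: float = 0.90) -> float:
    beta = misses + 1
    return 1 - (1 - p) ** (1 / beta)

print(round(batting_percentile_no_hits(5), 3))  # 0.319 after 5 hitless at bats
print(round(batting_percentile_no_hits(9), 3))  # 0.206 after 4 more misses
```

With four more hitless at bats (nine misses in total), the 90th percentile drops to about .206, below the .215 cutoff, matching the article's conclusion.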
Example 3: Big Dig on a small scale
Decision: The Executive Team wants to improve project management by being better able to assess a 90% confidence range of development time based on engineers’ initial estimates.
The company has carefully tracked original estimates for five projects and can now compare them to actual duration:
Software Development Time

| Project | Initial Estimate | Actual Duration |
|-----------|------------------|-----------------|
| Project 1 | 8 weeks | 17 weeks |
| Project 2 | 22 weeks | 42 weeks |
| Project 3 | 4 weeks | 5 weeks |
| Project 4 | 3 weeks | 9 weeks |
| Project 5 | 11 weeks | 11 weeks |
If we want to get a 90% confidence interval for actual development time based on our data, how would we do that? We can start by plotting the 5 points on a scatter chart.
Based on a linear regression of these five points, the actual time to completion is 177% of the initial estimate. Next, we estimate a 90% confidence interval for the ratio of actual to estimated duration. The ratios between the actual and predicted durations are: 213%, 191%, 125%, 300%, and 100%. Entering these values in the small sample calculator, we get a 90% confidence interval for the average of 110% to 261%. So if the initial project estimate is 10 weeks, our best estimate would be 18 weeks and our 90% range would be 11 to 26 weeks.
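Both the 177% slope and the 110%-261% interval can be reproduced with a few lines of standard-library Python. The 2.132 constant is the t critical value for 90% confidence with 4 degrees of freedom; small differences from the article's figures are rounding:

```python
import math

initial = [8, 22, 4, 3, 11]   # estimated weeks
actual = [17, 42, 5, 9, 11]   # actual weeks

# Linear regression through the origin: slope = sum(x*y) / sum(x*x)
slope = sum(x * y for x, y in zip(initial, actual)) / sum(x * x for x in initial)
print(f"{slope:.0%}")  # 177%

# 90% confidence interval for the mean actual/initial ratio
ratios = [2.13, 1.91, 1.25, 3.00, 1.00]
n = len(ratios)
mean = sum(ratios) / n
sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
margin = 2.132 * sd / math.sqrt(n)   # t(0.95, df=4) ~ 2.132
print(f"{mean - margin:.2f} to {mean + margin:.2f}")  # roughly 1.11 to 2.61
```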
Collecting data is all about resolving uncertainty. And in our busy work environment, we’re often expected to make the best conclusions in a limited amount of time. However, if we target the right variable we can improve our judgment with just a few data points.
So get out there and do some measurements! And reward yourself with better decisions. | <urn:uuid:ca373fb6-8fe1-4974-802a-6c85965da65e> | CC-MAIN-2022-40 | https://hubbardresearch.com/five-data-points-can-guide-decision/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00302.warc.gz | en | 0.917487 | 1,025 | 3.109375 | 3 |
Uninstall or Disable PGP Tools, Security Researchers WarnExploitable Vulnerabilities Could Reveal Plaintext of Encrypted Emails
European computer security researchers say they have discovered vulnerabilities that relate to two techniques used to encrypt emails and data: PGP and S/MIME.
The vulnerabilities "might reveal the plaintext of encrypted emails, including encrypted emails sent in the past," the researchers warn. And until the flaws get resolved, they recommend that everyone disable any tools that decrypt PGP emails by default.
There is not yet a full fix for the problem, says Sebastian Schinzel, a professor of computer security at Germany's Münster University of Applied Sciences, who's part of the research team - together with researchers from Ruhr-University Bochum in Germany and KU Leuven University in Belgium - that has found the flaws. The researchers have dubbed the flaws efail.
"There are currently no reliable fixes for the vulnerability," Schinzel says via Twitter. "If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now." In particular, he recommends temporarily disabling PGP/GPG in Outlook, Apple Mail and Thunderbird.
We'll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. #efail 1/4— Sebastian Schinzel (@seecurity) May 14, 2018
PGP is short for Pretty Good Privacy, which was first released by Phil Zimmermann in 1991. He later created OpenPGP, an open source approach that is based on PGP and available via free software such as GPG, short for GNU Privacy Guard. Users can employ PGP-compatible email clients themselves, and many secure webmail clients also make use of PGP. Numerous email clients also support S/MIME - Secure/Multipurpose Internet Mail Extensions - for sending encrypted communications and digitally signing messages.
At Risk: S/MIME and OpenPGP Email
Full details of the implementation flaws were published on Monday in a research paper titled "Efail: Breaking S/MIME and OpenPGP Email Encryption using Exfiltration Channels."
The researchers say their proof-of-concept attacks "for both OpenPGP and S/MIME encryption" could allow attackers to exfiltrate data "for 23 of the 35 tested S/MIME email clients and 10 of the 28 tested OpenPGP email clients."
Vulnerable mail clients include the iOS mail app, native mail clients on Android, Outlook and IBM Notes running on Windows systems, Thunderbird on Linux, as well as online Exchange, according to the researchers. And affected webmail providers include FastMail, Gmail, GMX, Hushmail, iCloud Mail, Mail.ru, Mailbox.org, Outlook.com, Yahoo Mail, and Zoho Mail.
One secure email service, ProtonMail, which is named in the report, is not vulnerable to the Efail vulnerability.
"We would like to confirm that ProtonMail is not impacted by the Efail PGP vulnerability; the researchers themselves confirm this in their research paper" on page 11, spokeswoman Irina Marcopol tells Information Security Media Group.
"We also maintain openPGPjs, one of the world's most popular encryption libraries, which powers a large fraction of the PGP clients in existence today," she says. "Any service that uses our openPGPjs library is also safe as long as the default settings aren't changed."
Encrypted email service provider Mailfence also says it is not vulnerable to the Efail flaws. "Mailfence is not impacted by the OpenPGP Efail vulnerability," the company says in a blog post. "Also, based on the mentioned issues in the technical paper, the OpenPGP protocol itself is safe to use, if you are not using it with a buggy email client."
'Take Action Now'
Security experts said the vulnerabilities would likely soon be targeted, and they recommended users follow Schinzel's advice immediately. Indeed, after any bug reports get published, attackers often begin exploiting the new flaws within hours.
"You need to take action now," says Alan Woodward, a professor of computer science at the University of Surrey.
PGP is awkward to use & to mess up but if you do rely upon it for your privacy & confidentiality you need to take action now https://t.co/siSbs1RjSp— Alan Woodward (@ProfWoodward) May 14, 2018
Mikko Hypponen, chief research officer at F-Secure, has called out researchers' warning that the flaws could be used to decrypt past messages.
This vulnerability might be used to decrypt the contents of encrypted emails sent in the past. Having used PGP since 1993, this sounds baaad. #efail— Mikko Hypponen (@mikko) May 14, 2018
Full details of the PGP and S/MIME implementation flaws were due to be released on Tuesday, when the researchers appear to have negotiated a coordinated vulnerability announcement with makers of vulnerable software.
But on Monday, Munich newspaper Süddeutsche Zeitung appeared to break that embargo. Shortly thereafter, the full research paper was released.
Attackers could automatically exploit the flaws by tricking victims' email clients. "In a nutshell, efail abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago," the researchers write.
"The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim's email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker."
Matthew Green, a professor of cryptography at Johns Hopkins University in Baltimore, has reviewed the researchers' work. "The result is really elegant," he tells Süddeutsche Zeitung.
Green already recommended not using PGP. In a 2014 blog post, Green wrote that "it's time for PGP to die," noting that it was time to build something much better. "Poking through an OpenPGP implementation is like visiting a museum of 1990s crypto," he warned.
In the wake of the new research, Green tells Süddeutsche Zeitung: "This is another bullet hole in an already perforated car."
Stop Sending/Reading PGP Emails
Süddeutsche Zeitung reports that although many of the affected vendors and software teams have had months to patch the flaws, they've run into challenges.
In the meantime, digital privacy rights group Electronic Frontier Foundation, which has reviewed the researchers' findings, confirmed that the bugs pose a risk to anyone using PGP and S/MIME and as a "temporary, conservative stopgap" recommends disabling any email plug-ins that automatically decrypt such messages.
"EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages," the organization says in a blog post.
"Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email," EFF says. "Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email."
Is Alert Overblown?
But some think the vulnerability warning is overblown. Werner Koch, a core components maintainer for GnuPG - a complete and free implementation of the OpenPGP standard - says he has seen a copy of the researchers' paper, with the names of all but one vulnerable mail user agent (MUA) redacted, and notes that the flaws involve some HTML email clients' implementation of PGP.
Koch says the researchers found that HTML can be "used as a back channel to create an oracle for modified encrypted mails." In computer security, an oracle attack refers to an attacker being able to exploit a vulnerability to extract information from a target.
Koch says some MUAs' failure to block hidden HTML links are the problem.
"There are two ways to mitigate this attack," Koch writes in a Monday post to the GnuPG mailing list. "Don't use HTML mails. Or if you really need to read them use a proper MIME parser and disallow any access to external links." In addition, he writes, "use authenticated encryption."
But while that advice might be easier to implement for anyone who uses and configures their own PGP tools, it fails to address how the many different webmail providers, for starters, might handle these problems.
Some services that implement PGP, however, have emphasized that the problem isn't with the standard, but rather some implementations of it. "As the world's largest encrypted email service based on PGP, we are concerned that some organizations and publications have contributed to a narrative that suggests PGP is broken or that people should stop using PGP," ProtonMail's Marcopol says. "This is not a safe recommendation."
She adds that the Efail flaw has been a known PGP and S/MIME problem since 2001. "The vulnerability exists in implementation errors in various PGP clients and not the protocol itself. What is newsworthy is that some clients that support PGP were not aware of this for 17 years and did not perform the appropriate mitigation."
This story has been updated with comment from ProtonMail and Mailfence. | <urn:uuid:ea7bb92d-ada1-406a-be7d-8b8369525f3c> | CC-MAIN-2022-40 | https://www.govinfosecurity.com/uninstall-or-disable-pgp-tools-security-researchers-warn-a-11005 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00302.warc.gz | en | 0.937479 | 2,084 | 2.671875 | 3 |
Hybrid Programming Architecture
In the hybrid programming architecture, your code runs on both your traditional computing environment and the Pathfinder platform. Pathfinder and your existing architecture work as co-processors. You can continue working in your existing development environment while, in the background, Pathfinder accelerates data-intensive workloads and allows you to process vastly larger datasets. Pathfinder provides massive benefits when analyzing graphs, performing sparse linear algebra, or processing large, sparse datasets.
The Hybrid Lucata Programming Architecture
Programming for Pathfinder is similar to programming for a GPU. Your program runs on the stationary cores in your existing traditional computing architecture and issues calls to run data-intensive processes on Pathfinder. Calls are documented in the Lucata SC-LCE API library. An automated data loader transfers data from your existing database and formats it for processing on Pathfinder.
You will modify your program to take advantage of Pathfinder by:
- Identifying the main data structures and distributing them in Pathfinder using the provided memory allocation functions
- Identifying time-consuming computational parts and parallelizing them using Cilk functions and the C/C++ helpers. If you use OpenMP, these pieces should be identified already.
- Using atomic operations, remote atomics, and intrinsics, as needed, to ensure the results are correct and efficient | <urn:uuid:3b81b8c4-60be-4e02-82ab-80166e9e379b> | CC-MAIN-2022-40 | https://lucata.com/solutions/pathfinder/hybrid-programming-architecture/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00302.warc.gz | en | 0.859519 | 261 | 2.65625 | 3 |
Flu Season Is Tapering but Cyber Viruses Are In Full Swing – Know the Strands
Flu season is currently in full swing, and hopefully your firm has taken precautions to protect employees from illness and has a Business Continuity Plan in place to ensure operations run smoothly in the event of an outbreak. In addition to being prepared for flu season, it is crucial that your firm be prepared for other types of viruses - those that infect your IT infrastructure! With cyber-attacks on the rise and becoming increasingly sophisticated, firms need to take extra precautions to protect themselves against viruses and malware that can potentially harm their business.
A virus is a program that can infect a computer system and replicate itself, allowing it to spread from one PC to another over a network. Typically, a virus will replicate itself by attaching to an executable file that is part of a legitimate application. When the user attempts to launch that program, this activates the virus, which enables it to corrupt or alter files on that computer and spread to other applications on the network. Viruses can also be spread via removable media, including USB drives, DVDs, and CDs.
Viruses and Malware to be Aware of:
Trojan: A Trojan horse is a malicious program that disguises itself as a legitimate application. The user initiates the program, believing it to be performing a desirable function, but it instead allows the invader to gain unauthorized access to the user’s PC and the information that is stored there.
Botnets: Norton defines a botnet as a string of connected computers coordinated to perform a task. Botnets are not inherently malicious technology, but they can be used to harm your firm. A botnet gains access to your computer either through a manual hack or through software that scans the Internet looking for holes in security, and often uses a Trojan horse to get in. Once inside, attackers can steal important financial or confidential information to cause your organization harm.
Scareware: Scareware is a type of malware that uses scare tactics to coerce users into installing malicious software disguised as legitimate tools, such as fake antivirus protection or fake virus-removal programs.
Ensure that all anti-virus programs are up to date. Malware creators are regularly working to find ways to penetrate a firm's environment. At the same time, anti-virus companies have their teams working to identify the next malware code and update their software to protect against it.
Ensure that all patches are deployed in a timely manner. Malware creators are becoming more sophisticated, so it is important to have the most up-to-date versions of all security programs. Software providers, such as Microsoft, release new patches on a regular basis so make sure your systems stay up to date.
Deploy a program that constantly scans the network for malware and removes threats. Again, a number of vendors exist, so be sure to check with your IT resources to choose one that fits your firm's specific needs.
For more in-depth information on cybersecurity gaps and how to avoid them, download our FREE eBook, 10 Common Cybersecurity Gaps and How to Avoid Them.
For more information on how Eze Castle Integration can help your firm prevent cybersecurity attacks, contact us here. | <urn:uuid:a1c89d29-473a-4b49-9e3c-811080599fa4> | CC-MAIN-2022-40 | https://www.eci.com/blog/16015-flu-season-is-tapering-but-cyber-viruses-are-in-full-swing--know-the-strands.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00302.warc.gz | en | 0.939759 | 674 | 2.546875 | 3 |