High-profile cybersecurity attacks indicate that offensive capabilities are overwhelming defensive measures. Even with management’s attention on system penetrations and data loss, security incidents remain costly to enterprise balance sheets.
Cybersecurity, privacy, and compliance people are asking, “How do we practically protect and defend our information and systems? How do we understand security frameworks and controls?”
This course provides students with an overview of the security controls and cybersecurity hygiene defined in the CIS Critical Controls.
What You Will Learn
Introduction to Critical Security Controls
Cybersecurity attacks are increasing and evolving so rapidly that it is more difficult than ever to prevent and defend against them. Does your organization have an effective method in place to detect, thwart, and monitor external and internal threats to prevent security breaches? Does your organization need an on-ramp to implementing a prioritized list of technical protections?
In February 2016, then-California Attorney General Kamala Harris (now Vice President) recommended that “The 20 controls in the Center for Internet Security’s Critical Security Controls identify a minimum level of information security that all organizations that collect or maintain personal information should meet. The failure to implement all the Controls that apply to an organization’s environment constitutes a lack of reasonable security.”
SANS has designed SEC440 as an introduction to the CIS Critical Controls, to provide students with an understanding of the underpinnings of a prioritized, risk-based approach to security. The technical and procedural controls explained in the CIS Controls were proposed, debated, and consolidated by private and public sector experts from around the world. Previous versions of the CIS Controls prioritized the first six CIS Critical Controls as “cyber hygiene”; the CIS Controls are now organized into Implementation Groups for prioritization purposes.
The Controls are an effective security framework because they are based on actual attacks launched regularly against networks. Priority is given to Controls that (1) mitigate known attacks, (2) address a wide variety of attacks, and (3) identify and stop attackers early in the compromise cycle.
The course introduces security and compliance professionals to approaches for implementing the controls in an existing network through cost-effective automation. For auditors, CIOs, and risk officers, the course is the best way to understand how you will measure whether the Controls are effectively implemented.
This Course Will Prepare You to:
Understand a security framework and its controls based on recent and evolving threats facing organizations
Interpret a security framework based on data from publicly known attacks, breach reports, and large-scale data analytics from the Verizon Data Breach Investigations Report (DBIR), along with data from the Multi-State Information Sharing and Analysis Center® (MS-ISAC®)
Understand the importance of each control, how it is compromised if ignored, and explain the defensive goals accomplished with each control
Identify tools that implement controls through automation
Learn how to create a scoring tool for measuring the effectiveness of each control
Identify specific metrics to establish a baseline and measure the effectiveness of security controls
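The scoring idea above can be sketched in a few lines. The control names, rating scale, and weights below are invented for illustration and are not part of the CIS Controls themselves:

```python
# Hypothetical per-control scoring: rate each control 0-5 for how fully it
# is implemented and how far it is automated, then combine the ratings into
# a 0-100 effectiveness score. Control names and weights are invented examples.

def control_score(implementation: int, automation: int,
                  w_impl: float = 0.7, w_auto: float = 0.3) -> float:
    """Weighted 0-100 effectiveness score for one control."""
    for rating in (implementation, automation):
        if not 0 <= rating <= 5:
            raise ValueError("ratings must be on a 0-5 scale")
    return 100.0 * (w_impl * implementation + w_auto * automation) / 5.0

controls = {
    "Inventory of Enterprise Assets": (4, 3),
    "Secure Configuration":           (3, 2),
    "Account Management":             (5, 4),
}

scores = {name: control_score(i, a) for name, (i, a) in controls.items()}
baseline = sum(scores.values()) / len(scores)  # overall baseline to track over time
```

Tracking the per-control scores and the overall baseline over time gives exactly the kind of metric the course description points to.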
Microsoft has been awarded a high pressure data center patent that would allow for far more efficient heat transfer than traditional approaches.
"Some data centers employ heat sinks and electric fans (which can consume a substantial amount of energy)," the filing (Patent US10426062) states. "In cases of higher power levels liquid cooling circuits and liquid immersion baths (which can be expensive and error-prone), more expensive heat sink solutions, etc. are required.
"What is needed is a system to efficiently improve data center cooling without needing expensive additional hardware."
That system, the patent believes, is a hermetically sealed data center full of high pressure gas.
Microsoft declined several requests for comment, but the patent filing gives a lot of detail. Higher pressure makes air denser, and increases its heat capacity - and therefore the amount of heat it can remove from IT systems. The patent envisions a choice of gases under different pressures. "The gas might be pressurized, according to some embodiments, from at least substantially 2 times standard pressure to substantially 5 times standard pressure," the patent notes.
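The density claim follows from the ideal gas law: at constant temperature, the density of a gas, and with it the heat a given airflow volume can carry, scales roughly linearly with absolute pressure. A rough back-of-the-envelope sketch (gas properties are approximate textbook values, not taken from the patent):

```python
# Back-of-the-envelope: heat carried per cubic metre of air at elevated
# pressure, using the ideal gas law. Values are approximate and illustrative.

R_SPECIFIC_AIR = 287.05  # J/(kg*K), specific gas constant of dry air
CP_AIR = 1005.0          # J/(kg*K), roughly constant near room temperature

def air_density(pressure_pa: float, temp_k: float = 293.15) -> float:
    """Ideal-gas density of dry air."""
    return pressure_pa / (R_SPECIFIC_AIR * temp_k)

def heat_per_m3(pressure_pa: float, delta_t_k: float = 10.0) -> float:
    """Joules removed per cubic metre of airflow for a given temperature rise."""
    return air_density(pressure_pa) * CP_AIR * delta_t_k

P_STD = 101_325.0  # Pa, one standard atmosphere
ratio = heat_per_m3(5 * P_STD) / heat_per_m3(P_STD)  # exactly 5.0 for an ideal gas
```

At five atmospheres the same fan airflow carries about five times the heat, which is the intuition behind the 2x-5x pressurization range in the filing.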
Gases mentioned include normal air (containing nitrogen, oxygen, argon and carbon dioxide), or inert gases such as pure nitrogen (N2), carbon dioxide (CO2), sulfur hexafluoride (SF6), and combinations of the above.
In an attached diagram, Microsoft showed the potential benefits of different gases at different pressures on heat transport and fan power.
Sulfur hexafluoride is an inert gas with a high molecular mass, already used as a dielectric medium for high-voltage circuit breakers, switchgear, and other electrical equipment. Microsoft's patent says that, in a hermetically sealed data center filled with SF6, fans can be powered at 25 percent of standard levels, while heat transport is nearly seven times as effective.
However, although SF6 is non-flammable and non-toxic, it is the most potent greenhouse gas that the Intergovernmental Panel on Climate Change has evaluated, with a global warming potential of 23,900 times that of CO2 over a 100-year period. It has an estimated atmospheric lifetime of 800 - 3,200 years.
Any unit, even a hermetically sealed data center, would inevitably leak a small amount of the gas. Leaks at industrial sites have happened, with atmospheric SF6 levels increasing every year. According to electrical company Eaton, which manufactures SF6-free switchgear, leaks can be as high as 15 percent over a switchgear unit's lifetime.
One of the largest single users of the gas, the Department of Energy, discovered that it too was leaking huge quantities of SF6 into the atmosphere, but managed to reduce the amount after extensive work.
Microsoft's patent does not delve into the risk of leaks, but details a gas management system that "might, according to some embodiments, automatically perform pressure sensing (is the pressure too low or too high?), gas composition sensing (how pure is the gas?), pressure control (to increase or decrease the pressure), gas composition control (to adjust the composition), and/or control of the human access safety door (e.g. to ensure operator safety)."
Increased pressure or changing the gases in the data center suggests this may be intended for a lights-out facility with minimal need for maintenance. However, humans will need to access the data center for servicing and upgrades, requiring a system of "human access safety doors," pressure control units and a gas management system, to make the server room safe for a human.
It is worth noting that Microsoft already has an advanced project involving a low-maintenance data center in a pressure vessel: the Project Natick research initiative.
The company has submerged a shipping container-sized prototype data center off the coast of the Orkney Islands, Scotland. A 12-rack cylinder, Natick is filled with nitrogen gas. However, the project's website notes that the system is operated at one atmosphere.
The "high pressure, energy efficient data center" patent does not discuss the possibility that the data center is submerged, and gives the impression that the human access safety door separates the facility from outside breathable air.
The patent is credited to Winston Saunders, manager of advanced data center development at Microsoft. Previously, he spent more than 20 years at Intel across various roles, including as the director of data center power initiatives.
To speed and increase the accuracy of data input, the Food and Drug Administration developed a machine-learning-as-a-service (MLaaS) platform.
Machine Learning as a Service Platform
Food and Drug Administration
The platform is a collection of cloud-delivered ML solutions, tools and technologies that accelerate the delivery of solutions that help FDA deliver on its regulatory mission. It includes pre-built models, algorithms and robotic process automation (RPA) that address many FDA use cases using computer vision, image classification and natural language processing (NLP).
The platform came about as a way to address the manual, labor-intensive data input process FDA has used for the large amount of handwritten documents and forms it receives, some of which are in different languages. They arrive in many forms, including PDFs and Microsoft Word and Excel documents, or as pictures from smart devices or scanned images, which may be blurry or at low resolution, and can contain tables and reports.
“One of the challenges that we faced was the amount of time that it takes to get some sort of structured data out of those forms and then really do something with that data in terms of our public health mission,” FDA’s acting CTO Sohail Chaudhry said.
The platform is an iterative solution that can automatically recognize a form and its type, identify the handwritten content and then digitize it. What’s more, it can translate foreign languages into English, extract key information and apply it to a downstream application.
The MLaaS solution is application-agnostic, meaning end users don’t have to have a specific program or solution to use it. “It’s built on a microservice architecture. The platform uses a serverless container, so it’s extremely lightweight; you don’t even need a server to start using it,” Chaudhry added.
Rather than have various FDA components develop and use their own ML and AI tools, they can tap into the MLaaS engine that has been matured over time, which not only provides standardization, but controls costs.
The MLaaS also has a scoring function that indicates how confident the platform is about its correctness. “There’s an impact based on the decisions made off of it, so we not only have the machine give us these predictive analyses, but we also have a confidence score that we allocate,” Chaudhry said. “When a human looks at it, they can tell that it’s done by a machine, not a human.”
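FDA has not published its implementation, but a common pattern behind such confidence scoring is to auto-accept high-confidence extractions and route the rest to human review. A minimal sketch, with the threshold and record layout invented for illustration:

```python
# Hypothetical human-in-the-loop gating on a model confidence score.
# The 0.90 threshold and the record layout are invented for illustration;
# they are not FDA's actual design.

AUTO_ACCEPT_THRESHOLD = 0.90

def route(extraction: dict) -> str:
    """Return 'auto_accept' or 'human_review' for one extracted field."""
    if extraction["confidence"] >= AUTO_ACCEPT_THRESHOLD:
        return "auto_accept"
    return "human_review"

batch = [
    {"field": "drug_name", "value": "COVID", "confidence": 0.97},
    {"field": "dose_note", "value": "could", "confidence": 0.41},
]
decisions = {rec["field"]: route(rec) for rec in batch}
# {'drug_name': 'auto_accept', 'dose_note': 'human_review'}
```

The design choice here is that the model never silently overrules a human: low-confidence fields are flagged rather than committed.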
Work on the MLaaS platform, which is hosted in FDA’s cloud, began about a year ago. To use it, employees select what they want from the Office of Information Management and Technology’s service catalog, and it gets published to the user through an application programming interface.
“The fact that it’s offered in our pre-authorized cloud, the solution itself is low in cost, [and] it increases our deployment flexibility because most of our next-gen services and applications are being deployed in the cloud,” Chaudhry said.
One lesson learned from the effort is the importance of making changes as they arise. For instance, if an algorithm interprets a physician’s handwritten “COVID” as “could,” an adjustment is necessary. “As we are finding out things that are not right, we rectify it, and once we rectify it in one area, it has a downstream effect and fixes itself all across,” Chaudhry said.
MLaaS is part of a larger effort at FDA to find standardized, enhanced ways of implementing purpose-built technologies – “not just doing technological changes for the sake of doing a change, but bringing AI, ML, RPA and NLP into our ecosystem with a desire to make it purpose-fit,” he said.
The next iteration of MLaaS involves plugging in FDA’s existing low- to no-code workflow automation tools. After that, Chaudhry said he will turn his attention to blockchain-as-a-service.
With any as-a-service tool, the goal is to make it generic enough to be used for many use cases regardless of the office that wants to use it.
“The concept of X as a service really gets operationalized in an agency like FDA because the need is not limited to one specific business office or center,” he said. “The benefits of using these capabilities and solutions, they just go off the spectrum.”
People often associate hacking or hackers with a crime. However, the field has different variations, each having an individual goal and purpose in the hacking world. The people involved in stealing information and hacking into apps are known as black hat hackers, while white hat hackers are the ones that protect systems and people. There is a third category, which acts as a middle ground known as gray hat hackers.
So what are black hat hackers? What is the difference between black hat and white hat hackers? Why is being a gray hat hacker more complicated? If you have these questions in mind, you have landed on the right page. This article will answer all your burning questions and help you understand the different hacker types.
So without further ado, let us begin with our conquest.
Black Hat Hackers: The criminals
Black hat hackers are the ones who steal information and hack into various systems to gain access to classified data and information. This type of hacking consists of criminals with high levels of coding expertise. They use their hacking skills to commit several crimes, which can vary from hacking into personal accounts or gaining access to secret government information or sensitive data.
Many black hat hackers are professionals and highly skilled coders with years of expertise in the field. Others, however, are opportunists who specialize in convincing consumers to share sensitive data through social engineering.
There are many scenarios where you may find the involvement of black hat hackers. Some of the typical motives behind black hat hacking include:
- Money – The ultimate goal of a black hat hacker is to steal money or financial secrets. Information about your credit cards, bank accounts, and financial transactions is a primary target of black hat hackers.
- Accessing private information – Accessing confidential information is one of the most common scenarios involving black hat hackers. These hackers gain access to private data for personal vendetta or to seek information for financial gain.
- Hacking into finance information – Finance information of individuals, corporations, or government departments are the most lucrative assets for any black hat hacker. They will target anyone with big monetary pockets, primarily corporate and people dealing with government finance.
- Gaining access to property secrets – Alongside money, corporate properties’ financial information is a valuable target for black hat hackers.
White Hat Hackers: The protectors
Unlike black hat hackers, where everything starts and ends with stealing information and secrets, white hat hackers use their hacking skills to prevent these threats. White hat hackers work for different organizations, including government and business entities. They secure the infrastructure by regularly updating security protocols and sealing loopholes in the system.
White hat hackers mostly receive training from military or authorized hacking institutes, making them an excellent choice for any organization. Employing a white hat hacker will ensure that your information stays protected without breaking the law.
Here is the list of roles that a white hat hacker offers:
- White hat hackers monitor inbound and outbound traffic to check for signs of hacking and loopholes
- They develop and test patches to ensure safety and prevent security holes
- Identifying vulnerabilities and problems in the security system
- Hacking into your app or server to patch loopholes
- Testing and maintaining up-to-date knowledge of the latest hacking tools and software
- Monitoring rival business apps for potential breach of data or information
- Performing tasks without breaking the law
White hat hackers work by the rules and keep everything intact for their employers. However, they are still hackers, and since they have all the access to your network, they can use this information to gain financial benefits. It would be best to do a background check before hiring someone.
Gray hat hackers: The tricky ground
Gray hat hackers are experts in camouflage. They can shift from white hat to black hat behavior, or vice versa, according to their needs. Many consider gray hat hackers to be black hat hackers, since most of their work involves stealing information and data. However, if a white hat hacker strikes another fellow hacker, we can consider it an example of gray hat hacking.
Many organizations and businesses hire gray hat hackers to hack into their competitors’ networks and steal valuable information. This information helps an entity stay ahead of its competition and manipulate the market. Gray hat hackers act as a gold mine for these organizations, providing them with insight into their competitors’ plans.
There are several versions or scenarios where gray hat hacking occurs. These versions include:
- Hacking into another’s server to improve security by illegally downloading code
- Transforming from a white hat hacker to a black hat hacker according to requirements
- Breaching another company’s data to strengthen your organization’s security and patch loopholes
- Intelligence agencies hacking into each other’s government data and information is also a part of gray hat hacking
- Hacking into government servers to access criminal records for identifying criminals
Gray hat hacking sits in between black hat and white hat hacking. Some gray hat hacking scenarios may seem related to black hat hacking, while others involve white hat hackers striking another white hat hacker.
Hackers use many tools and techniques to steal your information. Rootkits, keyloggers, and vulnerability scanners are among the standard tools, while SQL injection and distributed denial-of-service (DDoS) attacks are among the most common hacking techniques.
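On the defensive side, the standard mitigation for SQL injection is to pass user input as bound parameters rather than concatenating it into the query text. A minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

# Bound parameters keep attacker-controlled input out of the SQL text:
# the driver treats the value as a literal, never as query syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "alice' OR '1'='1"  # classic injection payload

# With string concatenation this payload would match every row;
# as a bound parameter it is compared as a plain (non-matching) string.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
# rows == []  -> the payload matched nothing
```

The same pattern (placeholders plus a parameter tuple) applies across database drivers, which is why it is the first control white hat hackers look for in code reviews.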
SAP number ranges are used to enable assignment of unique numbers to identify different master data and transactional data objects in SAP. Number ranges is a simple but important concept in SAP that is central to how SAP software works.
In SAP, there are many objects defined for each module. For example, in the Production Planning module, planned orders and production orders are key objects. In Plant Maintenance, equipment, notifications, and maintenance plans are some of the objects. In the Materials Management module, material masters and purchase orders are among the key objects. On the sales side, sales orders and the sold-to party are examples of objects with number range assignments. These objects are uniquely identified using a number.
This number is selected from an SAP number range. We define number ranges for each object and a unique number from this number range will be assigned for the object. When transactions are carried out in SAP for each of these objects, system allocates a number for the data.
Types of SAP Number Ranges
There are two types of number ranges:
- Internal SAP number ranges – With internal number ranges, the system automatically selects the next available number from the number range allocated for the object. Objects such as production orders and purchase orders typically use this type of number range.
- External SAP number ranges – With external number ranges, the person creating the object must enter the number. The number entered must be within the number range allocated for the object; the system will display a message if the number is invalid. Objects such as equipment and material numbers usually use external number ranges. Deciding whether to use an internal or external number range depends solely on the business requirements and the practicality of each approach.
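The two assignment modes can be modeled conceptually in a few lines of plain Python (this is an illustration of the behavior, not SAP/ABAP code; real number assignment in SAP goes through the buffered number range object):

```python
# Conceptual model of an SAP-style number range interval.
# Internal mode: the system hands out the next free number.
# External mode: the caller supplies a number, validated against the range.

class NumberRange:
    def __init__(self, start: int, end: int, external: bool = False):
        self.start, self.end = start, end
        self.external = external
        self.current = start - 1  # "NR status": last number assigned

    def next_internal(self) -> int:
        if self.external:
            raise RuntimeError("external range: the user must supply the number")
        if self.current >= self.end:
            raise RuntimeError("number range exhausted")
        self.current += 1
        return self.current

    def assign_external(self, number: int) -> int:
        if not self.external:
            raise RuntimeError("internal range: the system chooses the number")
        if not self.start <= number <= self.end:
            raise ValueError(f"{number} is outside the allocated interval")
        return number

orders = NumberRange(1_000_000, 1_999_999)          # internal, e.g. production orders
equipment = NumberRange(5000, 5999, external=True)  # external, e.g. equipment numbers
```

The `current` attribute plays the role of the number range status described below: the next internal assignment always continues from it.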
SAP Number Range Maintenance
We will use production order number ranges as the example for the rest of this document. To create production order related number ranges, follow the SPRO path below or call transaction CO82 directly.
SAP IMG -> Production -> Shopfloor Control -> Master Data -> Order -> Define number ranges for orders
When we go into this screen, we can see the number range object which is relevant for production orders. It is AUFTRAG. Number range is always bound to the number range object. In this screen we have three options.
- Display Intervals
- Edit Intervals
- Set NR Status
By selecting the “Display intervals” option we can see all the number ranges which are created under the AUFTRAG object. All the number ranges defined here can be assigned against different production order types which we have defined.
From the “Edit intervals” option we can edit the number ranges which are already defined. If the number range is already in use, it cannot be edited, because that would lead to inconsistencies. The third option is “Set NR status”. This field represents the last number that has been assigned from the number range. After defining the number range, we can set the NR status manually. When this is set, the system will start assigning numbers from the number maintained in the number range status. Maintaining this is not mandatory; it depends on the business requirements.
How to Create SAP Number Range
To create a new number range, click on Goto -> Groups -> Change. This will direct to a new screen where all the number range groups defined are listed with the order types assigned for each group. On the top we can see the unassigned order types.
SAP Number Range Group
To create a new number range group, click on the “Create group” button. This will open a screen where you need to enter the group name and the number range that needs to be linked to the group. A number range consists of a from-number and a to-number. The range should be defined so that no inconsistencies are generated. Once the range is given, we can save the number range group. If the number range is external, there is a checkbox with the label “External”. When this is selected, the number range will be treated as an external number range. As explained earlier, if the number range is external, the system will not assign the number automatically; it must be given by the user.
Once we save the group, next we can assign the order types to the group. For this we need to click on the “assign elements to group” button. After assigning the element to the group we can save the element. After doing this we can create a production order and check if the correct number is assigned from the number range group.
Transaction SNRO for SAP Number Ranges
Transaction SNRO can be used to maintain SAP number range objects. The advantage of this transaction is that we can maintain number ranges for any object, irrespective of the module that object belongs to. When we execute SNRO, we see a field to enter the object name. As discussed earlier, if we know the object name, we can input it here directly. If we enter AUFTRAG, we can see all the production order related number ranges defined.
Custom Number Ranges
In addition to this, we can also define a custom number range object. Custom number range objects are mainly used for customizations done in SAP. This reduces the complexity of assigning numbers in the customized programs.
To create a custom number range object, give the object name and click on the “Create” button. This will open a screen where you need to enter the object name and the length. We can enter a buffer value also. Sometimes we see that certain numbers in the number range are skipped. This is due to the buffer value maintained in the number range.
We can also specify a warning percentage. The objective is that, when the number range is nearing its point of exhaustion, the system checks the quantity of available numbers against the warning percentage. If the quantity of available numbers falls below the percentage, the system issues a warning message. This early warning method is especially useful for number ranges linked with material requirements planning.
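The early-warning check amounts to simple arithmetic: compare the share of still-available numbers against the configured warning percentage. An illustrative sketch (not SAP code):

```python
# Illustrative number-range early warning: warn once the share of
# still-available numbers drops below the configured warning percentage.

def range_warning(start: int, end: int, last_assigned: int,
                  warning_pct: float) -> bool:
    total = end - start + 1
    remaining = end - last_assigned
    return 100.0 * remaining / total < warning_pct

# Interval 1..1000 with 960 numbers already used and a 10% threshold:
# 40/1000 = 4% remaining, so a warning would be issued.
```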
We have discussed the objective of maintaining number ranges and how to use the number ranges in production orders. This concludes the article on SAP number ranges.
Power generation using wind energy has been on the rise for the past few years, particularly in offshore locations of North America, Europe, and Asia Pacific. The rise of offshore wind energy generation is attributed to better yields due to high offshore wind speeds. A turbine in a 15-mph wind can generate roughly twice as much energy as a turbine in a 12-mph wind. Offshore wind speeds are also steadier than wind speeds on land because of the large open spaces at sea.
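That roughly two-to-one ratio follows from the cube law for wind power, since available power scales with the cube of wind speed; a one-line check:

```python
# Wind power scales with the cube of wind speed (P = 0.5 * rho * A * Cp * v**3),
# so the power ratio of two sites reduces to the cube of the speed ratio.

def power_ratio(v_high_mph: float, v_low_mph: float) -> float:
    return (v_high_mph / v_low_mph) ** 3

ratio = power_ratio(15, 12)  # 1.953125, i.e. close to double
```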
As the number of offshore wind energy installations rises year on year, the size of offshore wind assets has also been increasing due to advances in offshore wind energy equipment and technologies. Current wind turbine hub heights have reached 164 meters.
Figure 1: Top 15 Countries with Deepwater Offshore Wind Resource Potential (GW) in 2018
The offshore wind industry is trying to move towards a whole system approach, where various stakeholders like the governments and consumers of wind energy will work closely together. Even the oil and gas majors are making serious efforts in power generation using renewables, especially wind energy, to reduce carbon footprint and manage energy transition. In 2017, ExxonMobil signed power purchase agreements (PPAs) with Ørsted, a company offering development, construction, and operation of offshore wind farms, for two 250 MW Permian Solar and Sage Draw Wind projects, which are expected to be completed in Q2 2021 and Q1 2020, respectively. The purchased power, a combination of solar and wind power supply, will be designated to maximize round-the-clock supply of power for oil and gas operations in Permian Basin and in Northern Texas, US.
MarketsandMarkets™ View Point:
Ajay Talyan, Analyst – Energy and Power at MarketsandMarkets™, shares his point of view below:
Wind energy offers clean and commercially viable energy.
- Large offshore open space can be effectively utilized
- New industry for job creation
- Low cost, reliable power for offshore regions
- Opportunity for countries to export surplus power to neighboring regions
- Can be easily integrated with energy storage technologies such as power to gas
In accordance with the rise in offshore wind energy projects, MarketsandMarkets™ has already done deep dive studies dedicated to offshore wind energy ecosystem, such as the global offshore wind market, the global offshore support vessel market, and the submarine cable system market, among others. The global offshore wind market is projected to grow at a CAGR of 15.3%, from 2017 to 2022, to reach a market size of USD 55.1 Billion by 2022. The turbine segment is projected to dominate the market. This is mainly because it contains the most important components such as nacelle, rotor and blades, and tower, which help to generate electricity.
The global offshore support vessel market is expected to grow at a CAGR of 5.0%, from 2018 to 2023, to reach a market size of USD 25.7 Billion by 2023. North America is a fast-growing market for offshore support vessels during the forecast period. The growth in the deployment of offshore wind farms in countries such as China and the US would drive the offshore support vessel market, for installation, maintenance, and the replacement of offshore wind turbines.
The submarine cable system market is expected to grow from USD 11.7 billion in 2018 to USD 20.9 billion by 2023, at a CAGR of 12.3%. New offshore wind capacity additions and high demand for inter-country and island connections are the key factors driving the growth of the submarine power cable market. Europe is expected to lead the global submarine power cable market by 2023. The market size in this region can be attributed to the booming offshore wind industry, of which submarine power cables are one of the critical components.
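As a quick sanity check, compounding the 2018 base value at the stated CAGR reproduces the projected figure (rounding aside); for example, for the submarine cable system market:

```python
# Compound a base market size at a constant annual growth rate (CAGR):
# value_end = value_start * (1 + rate) ** years

def compound(start_usd_bn: float, cagr_pct: float, years: int) -> float:
    return start_usd_bn * (1 + cagr_pct / 100) ** years

# USD 11.7 billion in 2018 grown at 12.3% per year for five years:
submarine_2023 = compound(11.7, 12.3, 5)  # ~20.9, matching the projection
```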
There have been new capacity additions in offshore wind power generation in 2018. According to the Global Wind Energy Council (GWEC), China, the UK, Germany, Belgium, and Denmark are the top five countries in new offshore installations.
Table 1. Top Countries with New Offshore Wind Power Installations in 2018
Source: GWEC and MarketsandMarkets™ analysis
These counties have also planned for more capacity additions in the near future.
The Jiangsu province government in China announced plans to invest USD 23.5 Billion in an offshore wind power project with an installed capacity of more than 10 GW. China had a total installed wind capacity of around 208 GW in 2018 and has plans to increase this capacity to 210 GW by the end of 2020. The country is also expected to implement auctions by the end of 2019. Thus, the share of installations originating from the market-based mechanisms is expected to rise after 2020, when the first of the auction-based volumes will be installed in China. The Chinese companies have continued focus on research and development activities. For instance, Goldwind introduced an 8-MW turbine for projects on the southeast coast of China in 2018.
The UK, with the second highest number of offshore wind installations in 2018, has further plans to achieve a target of 30 GW of installed wind capacity in its waters by 2030, from under 8 GW in 2018. The Department for Business, Energy and Industrial Strategy (BEIS) envisages that offshore wind would be supplying around a third of British electricity by 2030. The wind power sector is expected to contribute to low carbon power generation sources producing 70% of the demand by the end of the next decade. For the first time, renewables would be supplying more electricity than fossil fuels in the country. The UK government has plans for biennial tenders from 2021 to award further offshore wind farms.
Auctions in Germany continue to fetch ultra-competitive prices. The second tender for offshore wind farms in Germany once again included a project bidding for 0.0 EUR/MWh of support, repeating the zero-priced bids of the first auction round in 2017; the project will receive only the wholesale price of electricity with no further support. This indicates how far offshore costs have fallen. Germany's offshore wind target is expected to increase to 20 GW by 2030, from a current installed base of less than 6.4 GW. In 2019, RWE announced plans to add 2–3 GW of new clean energy capacity each year as it launched its new renewables division; in Europe, the company owns active wind power projects in the UK, Germany, the Netherlands, Belgium, Austria, the Czech Republic, and Spain.
In early 2019, new legislation was introduced in Belgium for offshore wind energy auctions. Under the law, a competitive bidding procedure will be used to award domain concessions for new offshore wind farms. The law further reduces the subsidies granted to offshore wind electricity production while recognizing that new wind farms are essential to achieving Belgium's renewable energy targets under its EU and international commitments. Its key objective is to organize tenders by 2022 so that new offshore wind installations can become operational by 2025. Concessions awarded under the law can last a maximum of 30 years, covering the construction, exploitation, and decommissioning phases, and any support granted to wind energy producers is limited to 15 years. The proposed offshore wind zone in the North Sea is expected to have a capacity of about 1,750 MW.
The Danish Energy Agency introduced the country's first technology-neutral renewable energy tenders in 2018. The agency plans to increase the total share of renewables (RES) to 43.6% by 2021, from 40.0% in 2018, a target expected to be met through the deployment of onshore wind, offshore wind, and biomass energy. The projected growth in wind power consumption reflects net offshore and onshore wind deployments of 1,950 MW by 2021–22, of which offshore wind farms will account for 1,366 MW (Kriegers Flak, Horns Rev 3, Vesterhav Nord/Syd) and onshore wind farms for 584 MW (net). In line with this outlook, in 2018 wind developer Ørsted lifted its 2025 target for offshore wind capacity from 11–12 GW to 15 GW as part of a DKK 200 billion (€26.8 billion) investment in renewable projects. The Danish energy ministry also launched an investigation into potential areas for the next three offshore wind projects by 2030, and in 2019 the Centre for Electric Power and Energy at the Technical University of Denmark (DTU) began leading a research project to determine the technology needed for an artificial island connecting Denmark's North Sea wind farms to the surrounding countries.
The major utilities will play a key role in the development of these wind energy projects.
Table 2. Key Utilities in Offshore Wind Farm Operations
Source: MarketsandMarkets™ analysis
The cost of offshore wind energy has fallen considerably in recent years because of advances in technology, the incentives provided by national governments during wind auctions, and the creation of new favorable laws. In East Asia, South Korea, Taiwan, and Japan are the key countries for offshore wind growth because of large investments in wind projects and the development of a maturing supply chain. In Vietnam, a feed-in tariff (FIT) of 98 USD/MWh is expected to boost development of the offshore wind market. The Japanese government passed a new offshore wind law in 2018 that mandates the designation of several areas for offshore wind development. The World Bank has also taken up the initiative of creating a financing stream to de-risk wind power projects in developing countries. OEMs such as GE and Vestas have developed higher-capacity turbines for offshore power generation: GE's Haliade-X 12 MW offshore turbine is expected to launch commercially in 2024–25, and Vestas upgraded its wind turbine to 10 MW in 2018. The combined efforts of OEMs, utilities, and governments will play a key role in the strong development of offshore wind energy in the future.
We live in a rapidly developing world, and change can happen at any time. Over the last decade, a lot of significant developments have come to our attention, and technology is one area that is constantly growing. In a way, you could say we are entering a technological era, with tech just about everywhere: there are many things we simply could not do without the technology we have now. At the same time, there are other aspects of this shift that have to be considered. Much of what we do today happens online, which brings up the subject of cyber crime.
The subject of cyber crime isn't often discussed, which is surprising considering that most of the things we do daily happen over the internet. Because it's not often talked about, many people lack a clear understanding of the importance of cyber security.
The issue of cyber security only seems to come up when something happens at a national level, but cyber crime goes much deeper than many may think. Nearly every day, cyber criminals victimize people who handle many aspects of daily life online. Over the last few years, there have been repeated headlines about the lack of cyber security. Hackers steal social security numbers and other records from computer systems, companies, and even well-known corporations. Without the right protection, a hacker can steal any number of things online (numbers, passwords, accounts, etc.), and that is dangerous. The need to keep online information safe is a growing concern, even for small businesses.
A lack of cyber security awareness can make just about anyone an easy target for cyber crime, which is why it's important that we all have a clear understanding of why the right protection matters in any situation.
Why is it important?
Now, this is a question a lot of people find themselves asking when the subject is presented. Cyber security keeps electronic data and computer systems safe, essentially through a combination of practices and technology. The world is full of online businesses and social lives, so this is a big deal. The field of cyber security continues to grow, which is good news in the fight against cyber crime. That leads to the next frequently asked question: why is it so important to this era? There are a few standout consequences of being hacked, and they go deeper than the threat to confidential data. Many business relationships, with customers and partners alike, have been destroyed by hacking. On top of that, it can even cause problems from a legal standpoint.
If we are not careful, the dangers of cyber crime will become more and more critical, especially with the technology we have now, and with new technology still to come. Everyone can be affected by these problems because viruses can spread quickly: a single hacked device can corrupt a whole system, and that system could then be compromised and taken over.
Being vulnerable to cyber criminals isn’t something that anyone should want, so changes have to be made in order to prevent it. Having the right protection online can benefit you, businesses, and major corporations too. No matter the case, everyone should look to make sure they are protected as much as possible. | <urn:uuid:7cf36d6d-7c9b-443a-a5da-aee31bea52a5> | CC-MAIN-2022-40 | https://www.crayondata.com/the-age-of-digital-why-cyber-security-should-no-longer-be-ignored/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00589.warc.gz | en | 0.967198 | 719 | 2.875 | 3 |
Ohio State Researchers Develop Smart Wind Sensor for Drones
A team of researchers at the Ohio State University has designed a smart wind sensor that could one day be used on drones and other small, autonomous aircraft, which are poised to become fixtures of daily life.
As these unmanned aerial vehicles (UAV) become increasingly prevalent, keeping the air space they use safe will become a priority, and wind sensors — anemometers — play a critical role in ensuring the safety of the simultaneous takeoffs and landings that are expected.
Current anemometers like the pitot tube are mostly unsuitable for UAVs — smaller ones in particular — because of high power consumption, aerodynamic drag, complex signal processing and expense.
The Ohio State anemometer seeks to fill this technology gap by using smart materials and an aerodynamic shape, according to a research paper, “Airfoil Anemometer With Integrated Flexible Piezo-Capacitive Pressure Sensor,” which was recently published in the journal Frontiers in Materials.
The device is airfoil-shaped, like an airplane wing, and contains integrated sensors to detect wind speed and wind direction through the movement of the sensor, which operates much like a wind vane, automatically orienting itself in the direction of the wind.
The current model airfoil is designed to operate in a smart-tether system and is suitable for tethered kites, balloons, drones and other aerial vehicles. It is shaped like a sleeve and fits over the tether, integrating sensing, data processing, wireless communications and energy harvesting for fully autonomous operations, the researchers wrote.
Wind speed is detected via a dual-layer, capacitive pressure sensor with a polyvinylidene fluoride (PVDF) diaphragm, while wind direction is measured by a 3D digital magnetometer that senses the orientation of the airfoil relative to the earth’s magnetic field.
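As an illustrative sketch of the direction-sensing idea (not the team's actual code, and ignoring tilt compensation and sensor-specific sign conventions), a heading can be derived from the horizontal components of a magnetometer reading:

```python
import math

def heading_from_magnetometer(mx, my):
    """Return a heading in degrees [0, 360) from the horizontal components
    of a magnetometer reading (assumes the sensor is level; real systems
    add tilt compensation from an accelerometer)."""
    return math.degrees(math.atan2(my, mx)) % 360.0

# Hypothetical readings: the field lies entirely along the sensor's y axis.
print(heading_from_magnetometer(0.0, 25.0))   # -> 90.0
print(heading_from_magnetometer(0.0, -25.0))  # -> 270.0
```

In a full airfoil orientation solution, the third magnetometer axis and an attitude estimate would be folded in before computing the angle.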
The smart material, PVDF (trade name Kynar), is a thermoplastic fluoropolymer that is often used in lithium-ion batteries and high-end paint for buildings, including the Petronas Towers in Malaysia. It has piezoelectric properties, meaning it can produce electricity when under pressure.
The PVDF film used in the diaphragm harnesses these piezoelectric properties by reacting to changes in air pressure, and the sensor, in turn, uses the voltage changes generated by the film’s reactions to measure wind speed. So far, the sensor has successfully been tested in a pressure chamber and a wind tunnel.
One of the study’s authors, Marcelo Dapino, told TechXplore that as his team’s research moves from the lab to the real world, they hope to see the anemometer used in applications besides aircraft, like wind turbines.
“These are very advanced materials, and they can be used in many applications,” Dapino told the publication. “We would like to build on those applications to bring compact wind energy generation to the home.”
Ohio State’s Arun Ramanathan and Leon Headings are the study’s co-authors. | <urn:uuid:16d46282-9f79-4944-adec-9c94e31b9e2e> | CC-MAIN-2022-40 | https://www.iotworldtoday.com/2022/08/22/ohio-state-researchers-develop-smart-wind-sensor-for-drones/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00789.warc.gz | en | 0.924408 | 660 | 3.046875 | 3 |
NASA has selected John Tomsick of the University of California, Berkeley, to lead a $145 million space telescope mission that will study star birth and death, as well as chemical formations in the Milky Way.
The Compton Spectrometer and Imager (COSI) is a gamma-ray telescope that will launch in 2025 through NASA’s Astrophysics Explorers Program, the space agency said Tuesday.
COSI will examine the gamma rays of radioactive atoms made from large star explosions. The mission seeks to locate where in the Milky Way chemical elements formed.
NASA also wants the mission to uncover the origin of positrons, subatomic particles whose mass is equal to that of electrons but have a positive charge.
The agency selected COSI out of the Astrophysics Explorers Program’s four finalists, which performed mission concept studies before the final selection. The program, which supports scientific investigations, gathered a total of 18 proposals in 2019.
NASA’s Maryland-based Goddard Space Flight Center performs managerial duties for the program. | <urn:uuid:305cfc00-5405-4d77-bf09-88f354266333> | CC-MAIN-2022-40 | https://executivegov.com/2021/10/nasa-names-selection-for-new-145m-astrophysics-mission/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00789.warc.gz | en | 0.906801 | 216 | 2.640625 | 3 |
A hacker was able to infiltrate the water system of a Florida city using a remote access software platform that hadn’t been used in months, according to news reports.
The hacker accessed the City of Oldsmar’s water treatment system twice last Friday – once in the morning and once in the afternoon – via remote access software TeamViewer.
According to news reports, officials still aren’t clear on how the malicious actor first gained access to the system, but it could be through compromised credentials, as the system requires a password to be controlled remotely.
CNN, quoting local officials, reported that the hacker attempted to essentially poison the city’s water supply.
Once inside, the hacker adjusted the level of sodium hydroxide, or lye, to more than 100 times its normal level, Pinellas County Sheriff Bob Gualtieri said. The system's operator noticed the intrusion and immediately reduced the level back. At no time was there a significant adverse effect on the city's water supply, and the public was never in danger, he said.
According to the Tampa Bay Times, sodium hydroxide is used to regulate acidity levels, but it can be dangerous to humans at high levels.
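Beyond network-level controls, control software itself can refuse remotely requested setpoints that fall outside a safe operating band. The sketch below is hypothetical; the bounds are invented for illustration and are not Oldsmar's actual parameters:

```python
# Illustrative safe band for a chemical dosing setpoint; real plant limits
# would come from process engineers, and this check is hypothetical.
SAFE_RANGE_PPM = (50.0, 200.0)

def validate_setpoint(requested_ppm, safe_range=SAFE_RANGE_PPM):
    """Reject any remotely requested setpoint outside the safe operating band."""
    lo, hi = safe_range
    if not lo <= requested_ppm <= hi:
        raise ValueError(
            f"setpoint {requested_ppm} ppm outside safe range {safe_range}"
        )
    return requested_ppm

validate_setpoint(100)  # a normal adjustment passes
try:
    validate_setpoint(11_100)  # a 100x jump, like the one reported, is blocked
except ValueError as err:
    print("ALARM:", err)
```

A guard like this is a last line of defense, not a substitute for securing remote access in the first place.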
This kind of attack is one that keeps cybersecurity experts up at night. Attacks on infrastructure like water systems can impact millions or citizens as opposed to nation-state attacks that target corporate and government networks.
According to the Associated Press, there are 151,000 separate water systems in the U.S., many of which operate in cities with small IT staffs, and some have no dedicated security staff at all. Water utilities – especially when publicly owned – are prone to funding issues that make them a soft target for cyber attacks.
As the computer networks of vital infrastructure become easier to reach via the internet — and with remote access multiplying dizzily during the COVID-19 pandemic — security measures often get sacrificed.
“It’s a hard problem, but one that we need to start addressing,” said Joe Slowik, senior security researcher at DomainTools. He said the hack illustrates “a systemic weakness in this sector.”
The hack wasn’t all that sophisticated, cybersecurity experts say. The AP reports that a supervisor monitoring a plant console saw a cursor move across the screen to change settings, and the hacker was inside the system for all of five minutes.
Utilizing remote access software like TeamViewer is a common tactic for hackers seeking the path of least resistance. Simply compromising a user’s credentials to access these platforms gives hackers the keys they need to wreak havoc in an organization’s internal systems.
Despite the apparent lack of sophistication, the intruder came dangerously close to affecting the drinking water of an entire city. It's time for municipalities – which control so many critical public systems – to adequately invest in cybersecurity defenses.
For a variety of reasons, a consumer’s personal data is extremely valuable to organizations. Data is, at its essence, a resource. Information has always been valuable throughout history. From covert meetings to strategic placement, the side with the most information, the best understanding of the playing field, and the ability to alter their tactics in response to that knowledge will win. The way your data is used and valued is largely determined by the organization's goals. Various platforms, businesses, and even criminals make use of that resource in a variety of ways.
The way you use your customers’ personal data matters. Just as well, the way you protect that personal data also matters. Organizations are subject to compliance with the GDPR, a European-based but globally relevant law that notes how personal data should be used and protected by an organization.
In this guide, we’ll explore what personal data is, why it is valuable, its relationship with the GDPR, and how to protect your customers, or your own, personal data from criminals. Let’s start by defining personal data.
The best-known definition of personal data was originally written into the EU's General Data Protection Regulation, or GDPR. The GDPR's definition of personal data is the one most relevant to business and organization leaders who deal with personal data. According to the GDPR, personal data is any information relating to an identified or identifiable person. This is the simplest way to define personal data, but in practice it can be a lot more complex than that.
The owners of personal data are considered “identifiable” if they can be identified directly or indirectly by some piece of information, for example, by a name, identification number, location data, online identifier, or one of several special characteristics that express the physical, physiological, genetic, mental, commercial, cultural, or social identity of these natural persons. In practice, this includes all data that is or may be associated with a person in any way. Personal data includes things like a person's phone number, credit card number, or personnel number, as well as account data, license plate number, appearance descriptions, customer number, and address.
Because "any information" is included in the definition, one must infer that the word "personal data" should be construed as broadly as feasible. This is also implied by European Court of Justice case law, which recognizes less explicit information as personal data, such as work time recordings that contain information about the time when an employee clocks in and clocks out of work, as well as breaks or periods that do not fall within work time.
IP addresses can also be considered personal data if such addresses are shared with an organization. This is also personal data if the controller has the legal option of requiring the provider to supply extra information that allows them to identify the individual behind the IP address. It's also worth noting that personal data does not have to be objective. Personal data might include subjective information such as views, judgments, or estimations. As a result, an evaluation of a person's creditworthiness or an employer's appraisal of work performance falls within this category.
Last but not least, the legislation stipulates that material for a personnel reference must pertain to a living individual. In other words, information on legal entities such as businesses, foundations, and institutions is not protected by data protection laws. Protection for natural individuals, on the other hand, begins with legal competence and ends with it. In essence, a person gains this privilege at birth and maintains it until death. To be deemed personal, data must be assignable to named or identifiable living people.
The appropriate use of personal data enables us to detect patterns of misuse, such as discriminatory pricing for health insurance or commodities, and to take steps to avoid such activities, allowing citizens to benefit from their data.
From an organization’s standpoint, personal data can be used for many different things. Personal data allows organizational leaders to understand more about the behaviors and needs of their customers. Personal data can be used to stay ahead of the competition and to ensure that the products and services offered align with the needs of consumers.
There are clearly many reasons why personal data is important. In that same vein, personal data privacy is also important. Bad things may happen when material that should be kept secret and safe falls into the wrong (criminal) hands. A data breach at a federal or government organization, for example, may give hostile actors access to top-secret material that could put citizens in danger. A data breach at a company might put confidential information in the hands of a rival. A school security breach might put kids' personal information in the hands of criminals who could utilize it for identity theft. PHI (protected health information under HIPAA) can also end up in the wrong hands if a hospital or physician's office suffers a data breach.
There are a number of things organizations can do to protect personal data from criminals. Specifically, aligning your data privacy strategy with the GDPR is an excellent way to protect sensitive data.
To begin, promote awareness within your organization. Key employees and decision-makers at new firms and startups should be informed of the legislation so they understand its possible impact and can identify areas that need to be addressed for compliance. Conducting and mandating security awareness training for all company employees is a great way to ensure that each person has been briefed on data protection best practices.
After that, conduct security and data audits. Accountable HQ can work with you to make this complicated process a whole lot easier. Keep track of what personal information you have, where it originated from, and with whom you share it. Another strategy to decrease instances of misused or at-risk data is to keep your privacy notice up to date. When you collect personal data, you'll almost certainly utilize a privacy notice that includes information like your identity and how you plan to use their data.
On top of all of this, your ultimate objective should be to keep your company safe as a whole. To keep cybercriminals out of your client's personal information, use firewalls, security protocols, and malware detection software.
Lastly, investing in the aid of a risk and compliance software company like Accountable HQ can make the process of protecting personal data much easier.
The recent growth in compromised records is mostly attributable to a succession of unsecured cloud databases, rather than to deliberate attacks alone. In 2021, the total number of cyber attack-related data compromises was up 27% compared to 2020. Phishing and ransomware remain, by far, the most common threat vectors. To avoid becoming a victim, it's critical to keep your data privacy plan up to date.
How was our guide to the value of personal data? Don’t forget to get in touch with Accountable HQ today to learn more about how our tools and team can help you achieve data compliance in your industry. | <urn:uuid:76f0f747-a6f2-4696-9286-3bd1bc4560b2> | CC-MAIN-2022-40 | https://www.accountablehq.com/page/why-is-personal-data-valuable | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00789.warc.gz | en | 0.942201 | 1,407 | 2.625 | 3 |
How Is Machine Learning And AI Used In Healthcare – Some Practical Examples
Machine learning and big data have tremendous potential in the healthcare field. Not only is this new technology improving diagnosis and treatment options; it also has the potential to help empower individuals to take control of their own health.
Some of the most exciting advances in healthcare today are coming about with the help of machine learning, AI, and advanced analytics. Advances in diagnostics, predictive healthcare, personalised medicine, and AI interfaces to help patients access healthcare all come down to the application of machine learning.
One team of doctors used advanced machine learning to analyse search queries online and discovered that they could identify people with pancreatic cancer — even before they received a diagnosis. The study focused on search queries that indicated someone had been diagnosed with pancreatic cancer, and then worked backwards to see if earlier queries could predict the diagnosis. While the study did not result in a practical application yet, there is the possibility that in the future, systems could be set up to warn a user to go get tested if search queries suggest a particular disease — especially one in which early detection is vital.
A Brazilian hospital, Estadual Getúlio Vargas, has only 22 ICU beds for a nearly unending stream of the city’s poorest of the poor. The hospital is using analytics insights to shorten length of stays for ICU patients to just over three days and reduce mortality rates for them by 21 percent. This means that the hospital can free up beds more quickly and serve nearly two more patients per ICU bed each month, improving efficiency and outcomes. Another hospital in São João is using a program called HVITAL, combining advanced analytics and machine learning to predict (and potentially prevent) up to 30 percent of ICU admissions, as much as seven days in advance.
One problem doctors face, especially with cancer patients looking at long treatment protocols, is keeping patients motivated and proactive during recovery. A new app called RehApp Coach has recently been developed to help solve that problem. The bot offers a conversational approach through machine learning and AI to engage patients during their rehab and hopefully keep them more motivated to continue.
Another important advancement is being made in matching children in the foster care system with the best potential foster families. The ECAP system (which stands for Every Child A Priority) uses a sophisticated matching algorithm to predict the best match between a child and a foster family, reducing the number of moves a child has to make and improving the potential for permanent placement. I include this under the healthcare banner, because the system has to adhere to the strict privacy regulations involved with health and other personal records. It’s saved the government agencies millions of dollars, but more importantly, improved outcomes for the most vulnerable children in their care.
Other companies are using machine learning to help predict and expose fraudulent healthcare claims, which costs providers millions of dollars a year and drives up the cost of healthcare for everyone. A company called KenSci was able to use machine learning to immediately identify more than a million dollars in fraudulent claims in a single dataset that had already been analysed and reviewed by 20 human claims specialists.
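KenSci's models are proprietary, but the underlying idea can be sketched in a much-simplified, hypothetical form: flag claims whose amounts deviate far from the statistical norm. (Production systems use richer features and supervised learning rather than a single z-score.)

```python
from statistics import mean, stdev

def flag_outlier_claims(amounts, z_threshold=2.0):
    """Flag claim amounts more than z_threshold standard deviations from
    the mean. A toy stand-in for the supervised models a production
    fraud-detection system would actually use."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

claims = [120, 135, 110, 128, 140, 125, 9_800]  # hypothetical amounts in USD
print(flag_outlier_claims(claims))  # -> [9800]
```

Flagged claims would then go to human specialists for review, narrowing their workload rather than replacing them.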
These are just a few of the most exciting advances I’ve seen reported recently using machine learning in the healthcare field, but I’d love to hear of other examples if you’re familiar with any. Please share them in the comments below. | <urn:uuid:fd7f8ab4-ce70-41af-a9a0-8881ce864ce7> | CC-MAIN-2022-40 | https://bernardmarr.com/how-is-machine-learning-and-ai-used-in-healthcare-some-practical-examples/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00789.warc.gz | en | 0.960966 | 694 | 2.921875 | 3 |
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a solution to a problem faced by marine biologists around the world.
Getting a closer look at ocean life can be a challenge. Conventional methods require boats, divers, and camera rigs. Together, these tend to disturb both sea creatures and their sensitive habitats, such as coral reefs.
The observer effect also applies: the creatures’ behaviour changes as a result of them being watched.
The solution is obvious: blend in, which is why MIT has developed a robot fish, SoFi, which moves just like a real one.
SoFi is made of silicone rubber. It has an undulating tail and can control its own buoyancy, swim in a straight line, turn, and dive up or down, all controlled via a waterproof Super Nintendo controller.
“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” writes CSAIL PhD candidate Robert Katzschmann, lead author of a new article about the project published in Science Robotics.
“We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”
Exploring coral reefs without disturbing them
Swimming untethered has been a challenge for robots until now. In part, this is because using standard radio frequencies to communicate underwater is practically impossible. Instead, the SoFi system uses acoustic signals that allow divers to take control using a modified Nintendo remote from up to 70 feet away.
SoFi has had successful test dives at Fiji’s Rainbow Reef, where the robot managed depths of more than 50 feet for 40 minutes at a time. The robot fish was able to record high-res photos and videos using – appropriately enough – a fisheye lens.
“The authors show a number of technical achievements in fabrication, powering, and water resistance that allow the robot to move underwater without a tether,” says Cecilia Laschi, a professor of biorobotics at the Sant’Anna School of Advanced Studies in Pisa, Italy.
“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef, and because it can be better accepted by the marine species.”
Katzschmann has said that plans are already in the pipeline to improve SoFi. For example, the team wants to increase the fish’s speed by improving its pump system and improving the overall design.
They also want to add tracking algorithms to allow SoFi to follow real fish automatically using its onboard camera.
“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” says CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”
Internet of Business says
With the media’s coverage of robotics tending to focus on humanoid, industrial, transport, or aerial drone applications, marine robots are often overlooked, but in fact are a major area of development worldwide. For example, robots that move on or below the ocean waves play an important role in environmental, climate, or disaster monitoring, and have applications in offshore installation maintenance too. | <urn:uuid:89a10656-ac91-48d9-ad81-b4882dbaf60c> | CC-MAIN-2022-40 | https://internetofbusiness.com/mits-csail-aquatic-life-robot-fish/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00789.warc.gz | en | 0.940987 | 700 | 3.296875 | 3 |
How to prepare for a breach and protect accounts
Preparing for a breach is relevant to regular users, not just business owners. A breach does not necessarily mean a massive leak from a company’s database; it can happen on a more personal level, such as having your internet traffic exposed over an unsecured connection.
Furthermore, users can establish go-to remedies for when a service they use suffers a hack. Thus, learning how to prepare for a breach means both protecting data on a personal level and dealing with company-wide leaks.
What is a data breach, and how does it happen?
A data breach is a security incident in which an organization loses control of the data it holds. In 2021, data breaches affected nearly 6 billion accounts, and social media was the source of 41% of incidents.
A data breach can happen at any company with which you deliberately share your information, such as streaming services, banks, retailers, or other businesses.
Hackers who break into corporate resources can steal data, either for personal use or with the intent to sell it. A hack might remain undetected if the perpetrators never put the data up for sale; in other cases, attackers publish the information on underground forums for anyone to see.
Most people therefore associate data breaches with direct attacks on services. However, a data breach can also refer to situations in which a company fails to protect its clients’ data, for example by leaving a database publicly accessible.
What information can a data breach expose?
Data breaches have been incredibly harmful to businesses and users for years. Over time, perpetrators have devised new strategies to harvest data and to circumvent improved defenses.
However, the main reason for their severity is the information they leak: highly sensitive data is often compromised because of basic security oversights.
Data breaches can reveal the following details about users and companies:
- First and last names.
- Email addresses.
- Social Security numbers.
- Birth dates.
- Banking information.
- Passport numbers.
- Home addresses.
- Credit card numbers.
- Phone numbers.
- Medical data.
- Driver’s license details.
Of course, businesses can significantly reduce the severity of data breaches if they use appropriate data security measures. Data hashing and encryption are common practices organizations use to secure data.
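To make the hashing side concrete, here is a minimal sketch of how a service might store passwords with a per-user salt and a deliberately slow hash, using only Python's standard library. The parameter values are illustrative, not a vetted production configuration.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; the plain password is never stored."""
    salt = salt or os.urandom(16)  # unique random salt per user
    # scrypt is deliberately memory- and CPU-hard to slow down brute force
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because each user gets a random salt, identical passwords produce different digests, so a leaked database cannot be attacked with a single precomputed table.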
How can you prepare for a breach?
Most data breaches occur beyond users’ control. However, learning how to prepare for a breach means reducing the chances of financial losses and identity theft.
Take caution with the signup process
Think twice before disclosing too much personal information when signing up for a service. The more information companies have, the bigger fallout is possible in case of data breaches. So, reveal personal information only when necessary, like your home address for receiving ordered goods.
Additionally, you should know how apps or services deal with protecting your information. You can usually find such information in help centers or contact customer support to find out more.
For instance, Bolt, a popular ride-hailing app, does not use encryption to deal with clients’ credit card details. Instead, it generates tokens: random codes representing financial information.
Know what to do
We have listed some of the most common details that data breaches compromise. You should know the organization or entity to contact in each case.
The most time-sensitive details are financial information and passwords. So, call your bank hotline to work out your options as soon as possible. Of course, it might take time for an organization to detect a breach and inform its clients. Therefore, pay attention to your transaction history and look for any fraudulent charges.
When it comes to passwords, the first rule is changing the credentials of the affected service. However, it is also important to remember if you have reused the leaked password anywhere else. Password managers are helpful as you can see all credentials in one convenient place.
So, here is a brief summary:
- Remember to contact your bank as soon as you learn of a possible data leak.
- Look for fraudulent activity on your bank account.
- Keep credentials in password managers to have easy access to all combinations.
- Do not reuse the same password across services.
React to attempts to log into your accounts
After a data breach, outsiders could try the leaked email-password combination on other services the victim uses. Since many perpetrators dump all stolen data online, anyone can try to gain unauthorized access.
Luckily, many digital services send alerts about unusual login attempts. Once you receive these security notifications, change the password to close all active sessions.
Use two-factor authentication
You should enable two-factor authentication (2FA) whenever the option is available. It adds a step to the login process: even if hackers have the correct username and password, they cannot complete the authentication, because only you can receive the temporary 2FA tokens.
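Those temporary tokens are typically generated with the TOTP scheme standardized in RFC 6238. As a sketch of the mechanism, the following implementation uses only Python's standard library; the secret shown is the RFC's published test key, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: this key at T=59 seconds yields "94287082" (8 digits)
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # 94287082
```

Both your authenticator app and the server derive the same short-lived code from the shared secret and the current time, which is why a stolen password alone is not enough.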
Close old accounts
Old accounts can be a liability. You might register for a service, make one order, and forget about it until you receive a breach notification. If you reuse passwords, a data breach from an old login could expose credentials of more recent accounts. Plus, you might have linked old profiles to calendars, personal notes, or contacts.
You might also have used the Sign in with Google option (or a similar one from another provider). It could be that you no longer use most of the accounts linked to your Google profile.
Users can find these Google-linked profiles by opening Account -> Security -> Signing in with Google. Then, you could find old accounts you no longer use (and should delete).
Use account monitoring
Breach monitoring refers to services that scan the web to find whether your information has been compromised. For instance, Atlas VPN offers a Data Breach Monitor, continuously checking if your email address has been exposed.
It looks at publicly leaked databases and sends alerts if it detects new risks. So, you can quickly change your credentials or perform other actions to mitigate a data breach.
Encrypt internet traffic to protect data
Unsecured networks can facilitate data breaches for connected users. If a network is vulnerable, it could expose your actions to others.
Such data leaks could occur if you log into an unencrypted site. Then, snoopers could see the information exiting and reaching your device. In the worst case, they could hijack sessions and log in to services as you. Depending on the hacking method, perpetrators could obtain various types of data.
To prevent such network data breaches, we highly recommend installing a VPN. A Virtual Private Network fixes lack-of-encryption problems by scrambling all internet traffic. Thus, you can safely and confidently connect to any network! | <urn:uuid:01992989-eb9d-4ce5-ae2d-05419f127c55> | CC-MAIN-2022-40 | https://atlasvpn.com/blog/how-to-prepare-for-a-breach-and-protect-accounts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00789.warc.gz | en | 0.917739 | 1,455 | 2.625 | 3 |
Efforts to harmonize international copyright laws have been in progress since the late 19th century, but the rise of the digital medium has thrust the movement back into the limelight recently. More than any other medium, the Internet has taken the copyright and intellectual property environment and turned it on its head. Since the Web’s emergence in 1995, companies, governments and content creators have slowly adjusted, through lessons learned, innovation and legislation.
An understanding of copyright law may be the last thing business professionals have on their minds, yet it is crucial that purveyors of content and media know the basics, at the very least. To help provide a starting point from which one can explore the complex arena of copyright law, TechNewsWorld spoke to Rob Kasunic, principal legal advisor to the United States Copyright Office.
TechNewsWorld: How does the U.S. Copyright office educate the public on copyright and the online world?
Rob Kasunic: The Copyright Office spends a great deal of time on public information and education. In many ways, the centerpiece of our public outreach is the Copyright Office’s Web site. The office posts all its new developments on the site, such as testimony by the register before Congress, new proposed legislation affecting copyright and regulatory information. Even some of the rule makings that the Office conducts are largely administered through electronic submission of comments over the Internet.
The Copyright Office Web site also contains vast amounts of information relating to copyright, including digital copyright issues, in its circulars, brochures, FAQs, fact sheets and reports. In addition, all copyright registrations and records since 1978 may be searched online through our site.
The Public Information Office (PIO) of the Copyright Office is another key component of the dissemination of information. Last year, the PIO answered over 118,000 telephone inquiries, 42,000 e-mail inquiries and 13,000 letters. In addition to these direct inquiries, the register and her senior staff spoke at more than 40 symposia, conferences and workshops on various aspects of copyright law. A significant portion of these speeches addressed copyright issues posed by digital content, the Internet and current technology. To increase our ability to reach more areas of the country, we have also begun to use video conferencing to interact with university and library groups around the country.
TNW: Are there government statistics on piracy, from a content or literature and audio/video perspective?
Kasunic: The primary role of the Copyright Office is to administer and sustain an effective national copyright system, including registration of copyrights, deposit of copies of works published in the U.S. and recording of documents concerning copyrighted works. The Copyright Office does not have statutory authority to police or enforce copyright infringement and does not independently assess statistics relating to infringement.
The Copyright Office also assists the United States Trade Representative in its annual “Special 301” review of copyright piracy and market-access problems around the world. This process assesses the piracy levels of U.S. works in foreign countries and, in the most serious situations, may result in trade sanctions being imposed. Further information on international piracy from the USTR is available online.
TNW: Does enforcement reassure those who create works and therefore promote creativity? Do artists and writers, knowing their works are protected, produce more frequently?
Kasunic: The entire purpose of copyright protection in the United States is to provide exclusive rights to authors in order to encourage them to create new works. The encouragement of creativity benefits the public through the dissemination of a rich array of copyrighted works. Anything that undermines the incentive for authors, artists and creators to create and widely disseminate works to the public adversely affects creators and ultimately the public.
While there are some criminal and international trade sanctions for copyright infringement that can be brought by the U.S. government in appropriate circumstances, most enforcement of copyright infringement is the responsibility of copyright owners themselves. Enforcement of copyright owners’ rights is, at times, necessary to maintain the economic incentive to create, yet enforcement is often a long and expensive process.
Without question, civil or criminal enforcement of copyright infringement serves to reassure creators to some extent, but in the digital environment, where massive infringement is so easy to accomplish with the click of a mouse, enforcement alone is seldom enough to reassure creators. Adequate legal and technological protection for copyrighted works is an important component to reassure creators in the digital environment.
TNW: What is the rule of thumb for using content from other copyright owners when republishing online? Does the Copyright office have basic recommendations?
Kasunic: Unless there is a particular exemption in the Copyright Act for the particular use intended, get permission.
While there are many discussions these days about the allegedly diminishing nature of fair use in the digital environment, many use the term “fair use” very loosely. Fair use is a case-by-case determination of whether a particular use is reasonable under the circumstances, such that a reasonable copyright owner would not require permission for the use.
In making a fair use determination for each particular use of a copyrighted work, there are four factors that must be considered (see section 107 in the Copyright Act).
One of the most important factors to consider is the effect of the use on the potential market for the copyrighted work. In the digital environment, the potential effect of posting a work on the Internet can be to completely destroy the market for the work. Therefore, aside from certain very limited uses, such as a short quotation, a criticism, comment or parody of a work, or some other very limited, reasonable use, obtaining permission is the only safe course of action.
The Internet actually makes obtaining permission easier in certain circumstances — through, for example, the Copyright Clearance Center, ASCAP, BMI, SESAC or the Harry Fox Agency. It should also be understood that even if a person believes his or her intended use is a fair use, if the copyright owner disagrees with the assessment, copyright infringement litigation is one way the dispute might be resolved. It generally makes sense to obtain permission before using or republishing someone else’s work online.
TNW: Copyright laws are being harmonized to a certain extent in Europe based on EU directives. Will there be an effort in the future to align copyright laws more broadly on an international level for easier enforcement and to encourage the presentation of work across borders by content producers?
Kasunic: Efforts to harmonize copyright laws internationally have been going on for decades, starting with the Berne Convention in 1886 and culminating in three important international agreements in the 1990s — the World Trade Organization’s TRIPS Agreement in 1994, and the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty in 1996, together known as the WIPO “Internet” Treaties.
The EU directives are a good example of regional harmonization of copyright, as are recent U.S. Free Trade Agreements that contain copyright provisions. The WIPO treaties are designed to address the Internet and new digital technologies, and are based on the recognition that, in the new environment, harmonization is more important given the ease with which copies can be disseminated across borders and throughout the world.
Copyright remains, however, territorial protection whereby each country’s laws apply to conduct within its borders, which preserves notions of national sovereignty and some flexibility for countries to address local conditions in their national laws, consistent with minimum international standards. Marketplace solutions, such as reciprocal collective licensing, can also “harmonize” different copyright systems and help copyright owners enforce and exploit their copyrights globally.
TNW: Can you identify courses of action professionals could take if they believe works have been used in violation of their copyright?
Kasunic: Regarding online protection and enforcement, as an initial matter, since copying and distribution is so much easier online, creators should take reasonable precautions before infringement occurs. In the U.S., registration of copyright provides some important advantages if accomplished before an infringement (or within three months of publication). Registration offers the copyright owner the availability of statutory damages and the potential for recovery of attorneys’ fees and costs. Both statutory damages and potential recovery of attorneys’ fees can be important advantages when considering private enforcement of rights.
Creators may also want to consider the use of technological protection measures or the inclusion of copyright management information in works distributed online. Many off-the-shelf products provide the ability to, for example, limit copying or to watermark copies of a work. Such technological protections impede unauthorized reproduction and distribution. The use of such measures provides additional legal protections to copyright owners under U.S. law if those measures are circumvented. Taking adequate precautions before infringement may avoid the necessity for enforcement or may make remedial action after an infringement occurs somewhat easier.
After infringement has been identified, enforcement of rights is generally accomplished through private action by or on behalf of the copyright owner. While some organizations or government agencies may assist in enforcement in certain circumstances, the vast majority of enforcement responsibilities fall to the copyright owner. In some situations, the infringing activity can be stopped by contacting the infringer with a cease and desist letter, or by notifying the designated agent of the online service provider of the infringer that particular material is infringing and should be removed from the Internet. If these actions fail to stop the infringing activity, generally, private enforcement through litigation is the principal means of relief.
NEWSBYTE With hurricane season fast approaching in the US, having access to smartphone or mobile services following a disaster could mean the difference between life and death, both for citizens and first responders.
According to a report by the Federal Communications Commission (FCC) in the US, 90 percent of all cellphone stations and masts in Puerto Rico were knocked out after Hurricane Maria hit the region in September last year, leaving millions of people without communications. In November, nearly 48 percent of all connections were still down.
Verizon and AT&T are two telecoms providers developing the concept of ‘cell-drones as service’, with the aim of getting temporary cell coverage into the sky after natural disasters, opening their networks to all users.
Shortly after Hurricane Maria, the FCC granted AT&T permission to use its Cell On Wings (aka Flying COW) drones to restore cellular service to the area.
Last year, each Flying COW provided wireless connectivity to customers across an area of up to 40-square miles, flying 200 feet above the ground and extending coverage further than other temporary cell installations.
“We would provide our Flying COW to the first responders, to, say the fire department, and we would pilot it for them,” said AT&T drone programme director, Art Pregler. “All it takes is for them to place a phone call, email, or contact us and we’ll provide that service.”
AT&T also offers land-based COWs: in this case, cell on wheels technology.
Verizon has now added its weight to the concept, with the testing of a new 200-pound fuelled drone in Cape May County, New Jersey. As with AT&T’s Flying COWs, Verizon’s drones act as airborne cell sites, with each providing a 4G LTE signal over a one-mile range.
“The ability to bring coverage to an area that had none really quickly is something that emergency responders are all over,” said Verizon network VP, Michael Haberman.
The Verizon drones will be available to use in the case of a natural disaster later this year, he added.
Internet of Business says
A transformative interim solution for communities that find themselves cut off from the world in the wake of natural or other disasters. | <urn:uuid:52467936-13a1-4c8e-b53a-091f79aeee35> | CC-MAIN-2022-40 | https://internetofbusiness.com/from-cellphone-to-cell-drone-verizon-att-networks-take-flight/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00189.warc.gz | en | 0.955507 | 483 | 2.9375 | 3 |
Reverse DNS lookups identify what IP address is associated with a particular domain or hostname. This works as the name implies—opposite of forward DNS lookups. Rather than a domain name being returned from a standard DNS query, the IP address is resolved.
Common Use Cases for Reverse DNS Lookups
Reverse DNS lookups are most commonly used by email servers in order to verify the validity of servers they are receiving messages from. The PTR records that are necessary for reverse DNS tell other mail servers that the IP of your mail server is authoritative for sending and receiving mail for your domain.
In this process, the IP owner will provide you with a zone for your mail server’s IP address, which is a special reverse DNS domain that ends in “in-addr.arpa”. The numbers that precede the “in-addr.arpa” are your IP block with the octets reversed.
Example: The reverse DNS for the 192.168.1 class C would be “1.168.192.in-addr.arpa”. In this example, this reverse DNS zone would handle the reverse DNS for IPs 192.168.1.0 to 192.168.1.255.
If the IP block is smaller than a class C, the zone might use a notation such as “27/1.168.192.in-addr.arpa” or “0-127.1.168.192.in-addr.arpa”. The difference between providers is only the syntax used for these classless (smaller than /24) delegations.
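The reversed naming can be derived mechanically. As a small illustration (assuming Python's standard `ipaddress` module), this sketch builds the PTR record name for one address and the reverse zone for a full class C:

```python
import ipaddress

# PTR record name for a single address: octets reversed, under in-addr.arpa
addr = ipaddress.ip_address("192.168.1.25")
print(addr.reverse_pointer)  # 25.1.168.192.in-addr.arpa

# Reverse zone for a full class C (/24): drop the host octet, reverse the rest
net = ipaddress.ip_network("192.168.1.0/24")
octets = str(net.network_address).split(".")[:3]
zone = ".".join(reversed(octets)) + ".in-addr.arpa"
print(zone)  # 1.168.192.in-addr.arpa
```

Classless delegations only change the leftmost label's syntax; the remaining octets are still reversed in the same way.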
- Your Constellix account has been created
- You have obtained your IP’s reverse DNS zone from your IP provider
Note: This is usually your ISP or hosting provider, but you can utilize the Whois Lookup tool to determine the owner of an IP address.
- You have requested delegation of your reverse DNS to Constellix nameservers from your ISP and were provided with the necessary reverse DNS domain information
Note: Typically, ISP or hosting companies will require 254 IPs (a full class C) or more to delegate the reverse DNS. If for some reason your provider cannot delegate the reverse DNS to Constellix, request that they establish reverse DNS for your domain and host the related PTR records.
Create a Reverse DNS Domain
Once logged into your Constellix DNS dashboard, the following steps will guide you through the process of creating a reverse DNS domain with the information previously provided by the ISP.
1. Select Add Domain Option
From the upper right section of the Constellix DNS dashboard, click on the Add Domain button.
2. Enter Zone Information
Enter the zone that was provided by the ISP (or owner of the mail server’s IP block) and click on the Save button.
Important: When creating the domain in Constellix, use the same syntax the ISP or hosting provider used to delegate it.
For full class C IP blocks, the syntax of delegation that should be entered is 147.94.208.in-addr.arpa.
Note: If your reverse DNS domain is not yet configured within Constellix, the nameservers you provide for delegation may be different.
3. Note the Assigned Nameservers
After adding your reverse DNS domain into the Constellix system you are provided with a list of nameservers that your reverse zone is assigned. Click on the Close button.
Note: These must match the Constellix nameservers for which you requested delegation.
Add a PTR Record for the Reverse DNS Domain
Once the reverse DNS domain is set up, a PTR record will need to be created. For assistance with this step, see our Create a PTR Record tutorial.
After the PTR Record has been created, the reverse DNS setup should be complete.
Note: Most mail servers don’t verify where the PTR points. Their purpose is to simply check that the ISP has delegated the reverse DNS to your provider and that you have a PTR record for your delegated zone with the name of your IP address.
When a mail server performs a reverse DNS lookup, it will look for the following three requirements:
- The forward DNS must match the reverse DNS.
- The reverse DNS must resolve to the mail server’s IP address.
- The reverse DNS must match the fully qualified domain name (FQDN) of the email header.
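Mail servers combine these checks into what is often called forward-confirmed reverse DNS (FCrDNS). The sketch below separates the logic from the lookups so it can be demonstrated with stub records; in real use the injected functions would call a live resolver (for example via Python's `socket` module), and the hostnames and IPs shown are made up for illustration.

```python
import socket

def fcrdns_ok(ip, reverse_lookup, forward_lookup):
    """Forward-confirmed reverse DNS: the PTR name for `ip` must itself
    resolve back to `ip`. Lookups are injected so the logic can be tested
    without a live resolver."""
    hostname = reverse_lookup(ip)
    if not hostname:
        return False
    return ip in forward_lookup(hostname)

# Live usage (requires network/DNS) might look like:
#   fcrdns_ok("203.0.113.10",
#             lambda ip: socket.gethostbyaddr(ip)[0],
#             lambda host: {ai[4][0] for ai in socket.getaddrinfo(host, None)})

# Stub records for demonstration:
ptr = {"203.0.113.10": "mail.example.com"}
a = {"mail.example.com": {"203.0.113.10"}}
print(fcrdns_ok("203.0.113.10", ptr.get, lambda h: a.get(h, set())))  # True
print(fcrdns_ok("203.0.113.99", ptr.get, lambda h: a.get(h, set())))  # False
```

Injecting the lookup functions keeps the check deterministic in tests while leaving the production path free to use any resolver.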
Visit our website for more information on our services and features. | <urn:uuid:17e62638-6609-40aa-8388-13fe31a66844> | CC-MAIN-2022-40 | https://support.constellix.com/support/solutions/articles/47000947914-reverse-dns | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00189.warc.gz | en | 0.884256 | 932 | 2.9375 | 3 |
If there is one area directly influenced by the rise of data science, it's statistics. The relationship between the two fields is hard to describe: the former lacks a complete and clear definition, while the latter covers a very broad range of topics. At the same time, the two play off one another in distinctive ways, so much so that some experts, such as a former president of the American Statistical Association, consider the two terms to mean the same thing. However, as data science continues to evolve thanks to predictive modeling software, it's helping statistics transform in unique and exciting ways.
Old concepts get new life
The foundation of statistics is probability, underpinned by concepts such as confidence intervals, variance, and framed models for indexing and sorting. Data science's recent advances reshape many of these foundations, according to Data Science Central. Consider random number generation, one of the primary mechanisms for simulating probability. Its uses span a multitude of industries, from selecting jurors in law to securing encrypted materials to driving a video game's artificial intelligence. Data science has made great strides in improving RNG, notably in generating highly accurate approximations of irrational numbers such as pi or the square root of two.
Other statistical concepts have also gained strength from data science. For example, confidence intervals, the bedrock tool for gauging the accuracy of statistics, can now be computed with model-free analytics techniques, mitigating the need for p-values and asymptotic analysis. Metatags help create clusters for assessment far faster than standard statistical indexing methods, with a higher degree of scalability. Finally, better data visualization techniques are available to provide an understanding of current events.
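The model-free confidence intervals mentioned above are usually obtained by bootstrap resampling: instead of assuming a distribution, the data is resampled many times and the interval is read off the empirical percentiles. A minimal sketch with Python's standard library (the sample data is invented for illustration):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic.
    No parametric model or asymptotic approximation is required."""
    rng = random.Random(seed)
    estimates = sorted(
        stat(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7, 10.4, 12.3]
lo, hi = bootstrap_ci(sample)
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```

No normality assumption or closed-form formula is involved, and the same function works for medians, variances, or any other statistic passed in as `stat`.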
The thin barrier
With these major changes in how statistics function, some figures see data science as the successor to, if not the replacement for, statistics. However, this is a little misleading, if only because of the permeable line that separates the two fields. As data scientist Tommy Jones noted in Amstat News, data science is distinct from statistics in part because the former has multiple aspects to it, primarily data management and advanced visualization to supply an understandable narrative. There are also computer science principles in place, such as an understanding of tools like Python and Hadoop. This isn't to say statistics doesn't have an important role in the field of data science, nor that it won't benefit from the latter's continued development. However, the relationship between the two will remain influential, not symbiotic, for the time being.
History of Leros
Leros has had a complicated history.
The Dodecanese islands were captured and controlled
by Persia in the early 5th century BCE,
absorbed into the Roman Empire which divided and evolved
into the Byzantine Empire,
seized by the city states of Venice and Genoa,
then captured and absorbed into the Ottoman Empire,
captured by the Kingdom of Italy which itself then came to be
controlled by the Fascist regime in 1922,
taken over by Nazi Germany in 1943,
and finally unified with the Republic of Greece in 1948.
I was on my way to see sites from the 1930s and 1940s, surrounded by signs of the earlier stages of history. I would walk through most of the main settlement on Leros, from the tourist area of Alinda to the east-coast ferry port of Agia Marina, uphill to the administrative center of Platanos on a saddle between two bays, along a ridge above the Pandeli beach area, and then down to Lakki, the freshly designed port town built on the west coast by the Italian occupiers in the 1930s. The war museum is in Italian tunnels beyond the ferry pier on the north shore of Lakki bay.
Homer's Iliad and Odyssey were written down in the 8th century BCE. They tell of the Trojan War, now dated to the period of about 1260–1180 BCE. The Iliad includes Leros in its list of Aegean islands that participated with naval forces.
The Dodecanese islands were captured by Persia around 499 BCE. The Athenians defeated Persia in 478 and the islands joined the Delian League, based on Delos at the center of the Cyclades.
Leros supported Athens in the Peloponnesian War against Sparta in 431–404 BCE.
After Alexander the Great died in 323 BCE, the Dodecanese islands were divided among Alexander's generals. That led to them becoming part of the Roman Empire, and then the Eastern or Byzantine Empire.
After the western European armies besieged and looted Constantinople during the Fourth Crusade in 1204 CE, Venice and Genoa seized control of the region.
The Order of the Knights of the Hospital of Saint John of Jerusalem captured Leros in 1309. They built fortifications including expansion of the castle on the peak overlooking the main settlement.
The Ottomans captured Constantinople in 1453, renaming it İstanbul. The Ottoman admiral Kemal Reis sent a fleet of seventeen warships to Leros in 1505 but failed to capture it. In 1508 he tried again with a larger fleet, but that also failed.
The Ottomans captured Rhodes in 1522. Sultan Suleiman and the Grand Lord of the Knights of Saint John signed a treaty in which all of the Order's possessions in the Aegean were turned over to the Ottoman Empire. That began several centuries of Turkish rule. Leros, like several other islands, had a privileged status of partial autonomy.
Italy went to war with the Ottoman Empire in 1911–1912 and occupied all of the Dodecanese.
Things were changing rapidly. The Italo-Turkish War of 1911–1912 saw Italy carry out the world's first aerial reconnaissance mission and the first aerial bomb drop. The Turks used rifles to become the first military to shoot down an aircraft. Guglielmo Marconi joined the Italian Corps of Engineers in Libya to establish a network of wireless telegraphy stations.
Italy had agreed to return the Dodecanese to the Ottoman Empire in the 1912 Treaty of Ouchy. However, the treaty was vague, and then the Italo-Turkish War was quickly followed by the Balkan Wars of 1912–1913, and then World War I started in 1914.
The British used Leros as a naval base during World War I, but then Italy resumed control.
World War I was immediately followed by the Greco-Turkish War of 1919–1922. The western Allies, especially the British, had promised large territorial gains to Greece. But they didn't provide support, and the Greek attempt to capture Thrace and western Anatolia failed. The Allies backed out of the 1920 Treaty of Sèvres, and set up the 1923 Treaty of Lausanne. It divided up the Ottoman Empire, although not as harshly for the Republic of Turkey as the 1920 treaty did. It also gave the Dodecanese to Italy.
The Fascist regime that had taken control of the Kingdom of Italy in 1922 set out to force the Dodecanese to become culturally Italian, with limited success.
Italy built a new town and naval base that they named Porto Lago. That bay in southwestern Leros, with a lighthouse marked at the southern point of its opening, is one of the largest deepwater harbors in the Aegean.
By 1936 about 7,500 Italians lived on Leros, making it the only island of the Italian-occupied Dodecanese where the Italian population outnumbered the natives. Mussolini called Leros "the Corregidor of the Mediterranean." He saw Leros as the critical base for Italian control of the eastern Mediterranean, and had a holiday villa built for him near the naval base.
Italy entered World War II on the side of Nazi Germany in 1940. In September 1943 Italy surrendered to the Allies, and Germany took over the Italian possessions in the Aegean. The Allies had captured all of Italy's territory in Africa by that time, including today's Libya, Eritrea, Ethiopia, and Somalia.
Many young men from Leros and other Dodecanese islands escaped by boat to the nearby Turkish coast and then made their way overland to British-controlled Palestine and Egypt to join various Allied units.
The British administered many of the Aegean islands after the war, hoping that no one would notice if they just held on to them and made them part of an expanded British Empire. But in 1948 the Dodecanese islands joined the Republic of Greece.
Starting in Alinda
Here is the morning view off my balcony to the 250–300 meter peaks to the north. I was staying at Aparthotel Papafotis in Alinda.
Parisis Belleni built a vacation house in the form of a castle along the waterfront road running around the bay. It's a two-story building with two three-story towers, now called the Belleni Tower.
He only finished building it in 1925, the year that he died. It was transformed into a hospital, and later into a local history museum. Unfortunately the COVID-19 pandemic had it shut down during my visit in early October 2021.
The Leros War Cemetery
The Leros War Cemetery is along the waterfront road between Alinda and Agia Marina.
British Commonwealth troops killed during World War II are buried here.
Many of the markers are for men who were in the Long Range Desert Group. That was a British-organized unit active between 1940 and 1943 in the Western Desert, the Sahara in Egypt and Libya. All the members, never more than 350, were volunteers. The majority of the men were from New Zealand, with volunteers from Southern Rhodesia and British forces joining the group later.
The last Axis forces in North Africa surrendered in Tunisia in May 1943. The British military command then reassigned the L.R.D.G. to operations in the Dodecanese island chain, in the Balkans, and in Italy. They were immediately sent to Lebanon to retrain in mountain warfare.
Then the L.R.D.G. was sent to Leros.
Leros, World Wars, and "Churchill's Follies"
During the First World War the Allies wanted to take control of the combined Bosphorus and Dardanelles, two straits joined by the Sea of Marmara to connect the Black Sea to the Mediterranean. Winston Churchill was First Lord of the Admiralty, and he proposed a naval attack on the Dardanelles.
Allied attempts to force their way into and through the Dardanelles strait failed. The Ottomans had mined the straits, and had mobile artillery shelling the Allied mine-sweeping vessels. The Allies sent their fleet of obsolete ships unsuited to face the main German fleet on the open sea. They struck mines and were hit by Ottoman shelling, and the Allies lost several ships.
The Allied plan shifted to capturing the land, especially the narrow peninsula forming the western side of the strait, and then eliminating the Ottoman mobile artillery.
The Allies were overconfident. Who dares challenge the mighty British Empire? On top of that, planning was slipshod. Some of the details were based on Egyptian travel guides.
I've been to what came to be called "ANZAC Cove" at Gallipoli where the troops from Australia and New Zealand landed. I initially had a hard time figuring out what I was seeing, because it was an absurd place for a landing. It's a very small beach at the base of a high steep slope. I was asking "But where is the beach that the army landed on?" The ANZAC troops took heavy losses.
Meanwhile, Churchill had long-range plans to turn the eastern Mediterranean, Aegean, and Black Sea into British lakes. If the Gallipoli campaign was successful, the British Empire could end up in control of the Dardanelles and Bosphorus after the war. It would be like another Suez Canal but without the hassle of all that digging.
Yes, they really used Egyptian travel brochures:
Churchill's dream didn't come true. Australia and New Zealand now see the episode as when they truly became independent countries. As for Turkey, the war led to the rise of one of its generals who adopted the name Mustafa Kemal Atatürk and went on to form the Turkish Republic.
Jumping from the First to the Second World War...
During the 1930s the Italians installed guns on the mountain peaks and near sea level at the mouths of bays. Many of these were surplus ex-British naval guns, including the 6-inch or 152 mm Armstrong Model 1891, a 152 L/40, with a range of almost six kilometers. "152 L/40" means a 152 mm bore diameter and 40 calibers, giving a barrel length 40 times the bore diameter, or just over six meters.
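The "L/40" arithmetic is easy to check; here is a minimal sketch, using only the 152 mm bore and 40-caliber figures from the text:

```python
# Naval gun "L/xx" designation: barrel length = bore diameter x caliber number.
bore_mm = 152      # 6-inch Armstrong Model 1891 bore
caliber = 40       # the "L/40" figure

barrel_length_m = bore_mm * caliber / 1000   # millimeters to meters
print(f"Barrel length: {barrel_length_m:.2f} m")   # 6.08 m, "just over six meters"
```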
So, when Italy joined Nazi Germany in 1940, the British Royal Air Force began bombing Leros. These were long-range missions, as the nearest Allied air bases were on Cyprus and in North Africa, 330 to 360 nautical miles away.
Then when Italy surrendered in September 1943, British forces arrived in the Dodecanese. Churchill's plan from the First World War had returned and grown. He hoped to make the Aegean, starting with the Dodecanese islands along its eastern edge, a British lake protecting the approach to what would become the British waterway of the Dardanelles and Bosphorus straits. Then Russia could be supplied via that route instead of the Arctic convoys to Murmansk and Arkhangelsk. Then Britain would retain control of all those waterways after the war. Or so Churchill hoped.
However, the Germans rushed forces from mainland Greece to several of the islands, including Rhodes, the main objective for both sides given its size and three military airfields. They forced the Italian forces on Rhodes to surrender to the Germans and not the British on 11 September 1943.
Over the next week the Allies took control of several of the Dodecanese islands from Kos through Samos.
Kos was the only airfield held by the Allies in the islands. The Germans bombed it and other Allied positions on Kos. After about two weeks, German amphibious and airborne landings on 3 October led to a British surrender the next day. In the Massacre of Kos that same day, German troops killed the captured Italian commander of Kos and about 100 of his officers.
Now Germany had a nearby airfield and stepped up their bombing of Leros. Leros quickly became the second-most bombed island in the European theatre, after the enormously larger and more target-rich Crete.
On 26 September, just two and a half weeks after Italy's surrender, German Ju 87 and Ju 88 dive-bombers sank the Greek destroyer Vasilissa Olga and the British destroyer HMS Intrepid in the Italian-built harbor at Porto Lago, now Lakki. The Ju 87 and Ju 88 raids on Leros continued for the next 52 days. They destroyed many of the coastal defense gun positions and port installations.
This was the autumn of 1943. Eisenhower, Marshall, and Roosevelt weren't interested in assisting what they saw as mostly setting up post-war imperial benefits for Britain. Besides, they were in the middle of the Italian campaign, having invaded Sicily in July 1943 and in September advancing onto the Italian peninsula. The Italian Fascist regime had collapsed and Mussolini had been deposed and arrested in late July, but German forces had taken control of central and northern Italy. Churchill was on his own in the Dodecanese.
The British had no air defense as Leros was way beyond Allied fighter range. The long-range P-38 Lightning fighters had been shifted to where they were needed in the Italian theatre. In order to assault German ships or land positions, British attack or bomber aircraft would have to get through or around the German defensive fighters based on Kos.
The Long Range Desert Group was on the islands, especially Leros, where British command was using them as normal infantry.
In September through November 1943, Germany took control of Leros and then held it through the end of the war.
The nominally British forces suffered about 600 killed and 100 wounded, with 3,200 taken prisoner. The Royal Navy lost several ships.
Britain hadn't had a large reverse like this since the summer of 1942. This was the last time that German forces captured and occupied foreign territory. Even so, Germany treated the Dodecanese as a backwater, quickly pulling troops out and replacing them with a garrison of older troops plus former concentration camp and prison guards.
The British assignment of the L.R.D.G. to the Dodecanese Campaign was a breach of agreement with the New Zealand government. Gallipoli had been a formative experience for both New Zealand and Australia, leading to the emergence of unique national identities. Now the New Zealand government informed Britain that it would no longer allow Britain to command its military forces.
The book The Long Range Desert Group in the Aegean details the breach of agreement and the following government confrontation. The book is good, one of the few histories of the Dodecanese Campaign, and exhaustively detailed. How many rounds of .45 ammunition and how many of .303 were taken on each mission, and so on.
It also describes how Churchill ordered the suppression of all discussion of the campaign in Parliament. From page 153 and following:
A few days after the fall of Leros, the P.M. "recommended that the Foreign Secretary adopt an evasive policy. It was not advisable to reflect in detail on such questions as to why the lessons of Crete in 1941 had not been learned."
Coffee in Agia Marina
One of the local fishermen still had fish to sell when I arrived at the ferry pier area in Agia Marina. I got coffee and koulouri, a sesame seed bread ring, with a view of the harbor.
A sign near the pier shows underwater relics of World War II around the Leros coast.
The Βασίλισσα Όλγα or Queen Olga was the Greek destroyer the German air attacks sank in September 1943, with the loss of 72 men. She had been built in Great Britain for the Royal Hellenic Navy before the war, and had assisted in evacuating the government from the mainland to Crete in April 1941, and from Crete to Egypt the following month.
Another of the four marked sunken ships is the British HMS Intrepid.
The Italian naval and seaplane base was headquartered near Lepida, a short distance along the south shore from Porto Lago, now called Lakki.
Over the Saddle to Lakki
Platanos is the administrative center of Leros. It's on the saddle, the high point between the east and west shores of the main settlement.
The Pandeli castle is on what probably was the main akropolis or high settlement of the island before history was recorded here. It could have been a fortified refuge, and there could have been a sacred site here.
We know that a fortification was built there during Byzantine rule, probably in the 10th or 11th century CE. That small castle lives on as the innermost part of the fortifications.
A second perimeter of defensive walls was added later, greatly extending the enclosed area but not improving much on defensive architecture. That expansion was mentioned in documents of 1087 and 1088 associated with Byzantine Emperor Alexios I granting parts of Leros to the newly-founded Monastery of Saint John the Theologian on Patmos.
After the Fourth Crusade in 1204 broke up the Byzantine Empire, Leros and the castle first fell into the hands of the city-state of Genoa, and then the city-state of Venice.
The Order of the Knights of the Hospital of Saint John of Jerusalem captured Leros in 1309. In the 15th century they further extended the castle with a third layer of walls, with a design more suited to the military technology of the time. By this time there were large underground food storage areas and water tanks to withstand a siege. The greater enclosed area was probably intended to provide refuge for the island's entire population.
The Ottoman admiral Kemal Reis besieged Leros with three galleys and seventeen warships in 1505, but was unable to capture any territory. He returned in 1508 with more ships but was similarly unsuccessful.
Leros stood but Rhodes fell. In 1522 Sultan Suleiman the Magnificent had besieged Rhodes and then forced Philippe Villiers de l'Isle-Adam, the Grand Lord of the Order of the Knights of the Hospital of Saint John of Jerusalem, to sign a treaty surrendering all the Order's possessions in the Aegean to the Ottomans.
Leros and some other islands had partial autonomy under Ottoman rule. For a brief period during the Cretan War between Venice and the Ottoman Empire in 1645–1669, and for three years after the Greek Revolution of 1821, Leros was temporarily liberated from Ottoman rule.
In 1912 Italy took control of the Dodecanese islands.
The Fascist regime that took control of Italy in 1922 decided to "Italianize" the Dodecanese. The Italian language would be taught in schools, and there would be incentives for acquiring Italian citizenship.
The area at the head of the bay had been a swamp. In the 1930s Italy built the town of Porto Lago there. It was renamed Lakki when Leros and the rest of the Dodecanese were transferred to Greece in 1948.
The architectural craze of 1930s Fascism was Rationalism, and Porto Lago's public and residential buildings followed that overall design. There never were many examples of Rationalist towns, and Lakki is the best surviving example.
The Italian population of Leros reached about 7,500 in 1936, making it the only majority-Italian island in the Dodecanese.
The town wasn't laid out on a rectangular grid. Streets radiate out from the waterfront, intersecting curving cross streets.
The bay is deep, 3.5 km long, and 1 km wide, making it an excellent location for a naval base. Italy made it the home port for two destroyers, two torpedo boats, and four submarines. It was also a seaplane base.
Submarine nets could close the opening. Anti-ship artillery positions were on the peaks around the bay's opening.
Below is the view from the waterfront at the center of town out through the bay's opening into the Aegean.
Rationalist public buildings line the waterfront.
Military use today is limited to Hellenic Coast Guard vessels like this one.
Onward to the tunnels and bunkers!
There are bunkers and tunnels underneath many areas of Leros. They are especially numerous around the former naval base. The Merikia area on the north shore of the bay includes a museum in a restored underground complex.
Tunnel openings look like the one below, although most aren't painted in high-visibility white. The general design includes two opposing half-width walls: to enter you must go to the left of one and then to the right of the other.
I later asked a girl working at the place where I stayed: Are local people interested in these? Does anyone explore them?
Oh yes. Tunnel exploration is probably a standard rite of passage for high school kids on Leros. As it seems to have been for her.
As she explained, of course you wouldn't build an underground bunker with only one entrance. People would be trapped if the single entrance was covered or destroyed.
So, any tunnel going into a hill connected to at least one tunnel exiting at a distant point.
Ruined administration buildings demonstrate why the Italian operations centers were in underground bunkers.
Merikia War Museum
The war museum is back in one of these tunnels closed by a heavy metal door. It's only €3 to visit, and there is a lot to see.
An Italian map is near the start. It shows where a submarine net closed the Alinda bay.
This map indicates gun sites as:
Square = anti-ship battery
Triangle = anti-aircraft battery
Circle = anti-ship and anti-aircraft battery
Today's maps just show what remains, marking the artillery installations as "ruins", with no indication of their specific purpose or other details.
A typical coastal defense battery had three 152mm L/40 and one 102mm L/35 gun.
A typical anti-aircraft battery had six 76mm L/40 guns.
The total for the thirteen coastal batteries was nineteen 152 mm, five 102 mm, and twenty 76 mm guns. The twelve anti-aircraft and dual-purpose batteries had fourteen 102 mm, six 90 mm, and twenty-eight 76 mm guns.
They have a flyer dropped by German planes onto Leros, and its translation into Greek and English.
To all the Italian Officers, Warrant Officers, and soldiers.
For the last time we invite you to surrender to the German armed forces.
After the 12th of October 1943 all the Commanders and Officers who have not given the orders to the troops to surrender and deliver the weapons, will be shot as soon as they are taken prisoner.
The soldier who surrenders will be immediately taken elsewhere.
All the others will be attacked by the German armed forces and will be annihilated.
The German Command
The museum contains a wide range of objects. Some were from the tunnels, possibly these specific tunnels now occupied by the museum. Others came from around the island and the nearby water. This cabinet contains medical instruments.
There are plenty of canteens and mess kits.
Local people donated objects like this typewriter from Nikolao Frantzi.
Many of the objects aren't locked in glass cases but are simply arranged on tables, like these helmets and knapsacks.
Some of the objects were donated by the Greek military.
The above telephone set may have been of World War II vintage, but some of the nearby radio gear clearly isn't.
The RT-323A / VRC-24 was an early 1960s U.S. system for vehicle-based ground-to-air communication. It operates AM voice at 225 to 400 MHz. Its 29-vacuum-tube design required 250 watts of 24 VDC power when receiving, 300 when transmitting.
The Hughes-built AN/PRC-74 was first delivered for U.S. Army use in 1966. It operates both CW and SSB on 2 through 12 MHz with 12–18 watts of output power. It was initially issued to MACV-SOG, then widely used by U.S. Special Forces in Vietnam.
The alarming part was the collection of blocks of corroding ammunition that had been pulled out of the sea. It's probably stable. But I wasn't going to kick it.
And fuel cans and more.
"The Guns of Navarone"
Not exactly. "Inspired by" and "set in the context of" the Dodecanese Campaign are reasonable descriptions.
The novel and movie are set in 1943 with 2,000 British soldiers marooned on the island of Kheros, about to be assaulted by an overwhelming German force. The Royal Navy can't evacuate the British troops because of the need to traverse a strait protected by radar-directed large guns on the island of Navarone.
There is an island of Κέρος or Keros, about 60 nautical miles to the west, just southeast of Naxos. But it isn't "just off the Turkish coast", a point crucial to the story, and there is no island of Navarone. There's a brief shot of a map showing the fictional Navarone early in the movie:
The fictional "Navarone" seems to be about the size of Rhodes, located where Nisiros and Tilos are.
The movie came out in 1961, during the glory days of matte painting effects in movies.
There is at least an "inspired by" resemblance to Agia Marina and the peak with the castle.
Returning to Agia Marina
After the museum and another pass through the Rationalist architecture of Lakki, I headed back toward Agia Marina and Alinda.
This is the main street heading down to Agia Marina and the Alinda bay.
It was an 8.2 kilometer hike in each direction. So I stopped along the waterfront in Agia Marina and had a drink before finishing the return trip.
I got dinner that evening at a taverna along the waterfront road near the place I was staying. Chicken souvlaki with salad, pita, and french-fried potatoes, with a half-liter of red wine.
Or, Continue Through Greece: | <urn:uuid:10e49ac9-16f5-46f3-8e67-808bfbe3ecce> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/greece/leros/world-war-ii.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00189.warc.gz | en | 0.972221 | 5,554 | 3.625 | 4 |
The Pope Relocates to Avignon
We're visiting Avignon, and we just saw how corrupt the medieval Papacy had become. The Popes had become involved in politics, and national leaders had been manipulating the Papacy. It had been strictly Italian, and specifically Roman, for centuries. Now the King of France wanted to take control.
Pope Benedict XI, an Italian, was in office for just eight months. The College of Cardinals had wanted a Pope who wouldn't be overly hostile toward the King of France. Benedict had un-excommunicated the King, but within a year he had excommunicated many powerful figures close to the King. Benedict died suddenly and suspiciously at Perugia. The French King's minister was suspected of having poisoned Benedict.
There was a Popeless gap of a year while the French and Italian cardinals quarreled. They finally selected Raymond Bertrand de Got, born in Villandraut, Aquitaine. For a Papal candidate he was, unusually, neither Italian nor a cardinal. But he was a friend of the King of France.
He took the name Pope Clement V. He was at Bordeaux when he was formally notified of his election to the Papacy, and told to travel to Italy. But he selected Lyon for his coronation in 1305. He immediately created nine new French cardinals, providing a numerical advantage for France in the next conclave.
In 1306 and 1307 he withdrew Boniface's Papal bulls that asserted Papal authority over all national leadership.
Philip IV had been charging the Knights Templar with usury, credit inflation, fraud, heresy, sodomy, immorality, and other abuses. So, later in 1307 Clement had hundreds of them arrested in France.
The Move to Avignon
Clement's Papal court had been at Poitiers, in west-central France to the north of Clement's home. In March 1309 he moved his court to Carpentras, near Avignon. The territory was officially part of the Kingdom of Arles within the Holy Roman Empire. However, the French King controlled the land on the opposite bank of the Rhône and had stronger influence over the Vaucluse than the remote Emperor in German-speaking territory far to the north.
King Philip IV leaned hard on Clement, pressuring him into subjecting Boniface VIII, dead for six years now, to a posthumous trial for heresy, later expanded to include sodomy. Clement pardoned Guillaume Nogaret for offenses against Boniface and the church, de-excommunicating Nogaret on the condition that he go to the Holy Land and serve with the next wave of soldiers. Clement also formally excused the King for everything he had said and done against Boniface, and he later officially disbanded the Order of the Knights Templar.
Clement's successor moved the Papal court into Avignon itself. Below we see the medieval walls of the city surrounding the opulent Palace of the Popes.
Beginning the "Babylonian Exile"
Jacques Duèze had studied medicine at Montpellier in southern France and law in Paris, and taught both canon and civil law. He became Bishop of Fréjus in 1300, and transferred to Avignon in 1310.
Pope Clement V died in 1314. Although he had stacked the deck with several added French cardinals, there was another Popeless interregnum for two years. King Philip finally set up a conclave of 23 cardinals in Lyon. They elected and crowned Duèze, who took the name Pope John XXII.
John XXII felt that the Spirituals movements had too much enthusiasm for the concepts of the Absolute Poverty of Christ and Apostolic Poverty. He officially condemned the pro-poverty group known as the Fraticelli in 1317. In 1322 he convened a group of experts to study (meaning "reject") the idea that Christ and the apostles owned nothing. There was some disagreement; apparently not everyone had gotten the message that the Pope had a definite outcome in mind. But the majority rejected the idea, as it would condemn the church's right to own property and treasures. The followers of Francis of Assisi objected, but the Pope issued a bull declaring the doctrine that Christ and his apostles had no possessions to be "erroneous and heretical". He later issued other bulls declaring that he was correct, and that earlier bulls asserting poverty were actually wrong.
Today's TV fund-raising preachers are on the side of Pope John XXII.
Some came to refer to the Avignon Papacy as the "Babylonian Exile" of the church. Those in Avignon could play along; after all, didn't they have it rough, stuck in a city that had never been the capital of an empire? But the term was intended as criticism: the "exile" was in a city rapidly becoming more and more opulent, like Babylon at its peak.
The Avignon Popes used gold and silver dishes at banquets, and wore expensive outfits. Churches were billed an annual tithe of 10% on property; bishops' first-year salaries were taken as annates; and pardons were sold, along with dispensations allowing illiterate men to become priests and converted former Jews to visit their unconverted parents.
In 1348 Clement bought the town of Avignon from Joanna I, Queen of Naples and Countess of Provence, for 80,000 florins.
The Papal library in Avignon became the largest in Europe in the 14th century, with 2,000 volumes.
Pope Boniface VIII had formed the University of Avignon from the existing schools in the city in 1303.
In 1413 Antipope John XXIII founded the university's department of theology. It remained quite small for a long time. In the 16th and 17th centuries the university developed a department of medicine. The university dwindled during the chaos of the Revolution, and it was closed down and abandoned in 1792.
An annex of the Faculté des Sciences d'Aix-Marseille was created in Avignon in 1963, and it was gradually expanded over the following two decades. In 1984 the Université d'Avignon et des Pays de Vaucluse was established. Located within the old city walls, it now has a little over 7,000 students.
The stone Pont Saint-Bénézet, commonly called the Pont d'Avignon, originally had twenty-one piers. Only four remain, and the bridge ends out in the middle of the river.
A bridge was built here between 1177 and 1185, possibly all wooden, or maybe wood spans between stone piers. It was destroyed in 1226 during the Albigensian Crusade, when King Louis VIII of France besieged Avignon.
The bridge had strategic significance. It was the only river crossing between the territories controlled by the Pope and by the King of France. It was also the only bridge across the Rhône from Lyon all the way downstream to the Mediterranean.
The all-stone bridge was started in 1234. It was abandoned in the mid 1600s, because arches would collapse during major floods of the Rhône. It was missing four of its 22 arches by 1644. A large flood in 1669 carried away more of the bridge. More arches have collapsed or been demolished over the years, so only four arches remain today.
The first few Avignon Popes built the walls around the city. They look impressive but they're made of soft limestone. The Popes relied on their palace, the Palais des Papes. It's built on the Rocher des Doms, the 35-meter-tall stone outcropping, and its walls are five and a half meters thick.
Notre Dame des Doms, the cathedral, was mostly built in the 12th century. It's to the left in the below picture. Its bell tower was rebuilt in 1425, topped by a gilded statue of the Virgin Mary in 1859.
The Palais des Papes, the Palace of the Popes, is the large building to its right. This is just a small segment at one end. It's the largest medieval Gothic palace in Europe.
Pope Benedict XII (in office 1334-1342) built the Old Palace on the high Rocher des Doms, where the old episcopal palace of the Bishops of Avignon had stood. His successor Pope Clement VI (1342-1352) extended this by building the New Palace. It occupied an area of 11,000 square meters (or 118,400 square feet) by the time it was finished.
The enormous size allowed the church bureaucracy to grow. The Curia, the Papal administration, had 200 employees in the late 1200s. It was over 300 by 1300, and had grown to 500 by 1316. In addition to this, there were over 1,000 lay officials working within the Palace for the Pope's administration. Successive Popes continued reconstruction until 1364, adding towers to put their personal signatures on it.
The Palace remained under Papal control until the French Revolution, when it was already in bad shape before being sacked by Revolutionaries. Like many historic sites, it became a prison under Napoleon. It became a museum in 1906.
The Avignon Papacy began in 1309.
Seven Popes resided here:
• Clement V (1305-1314)
• John XXII (1316-1334)
• Benedict XII (1334-1342)
• Clement VI (1342-1352)
• Innocent VI (1352-1362)
• Urban V (1362-1370)
• Gregory XI (1370-1378)
Urban V decided to return the Papacy to Rome. Rome had really gone downhill. When Clement moved to Avignon, Rome had become uncontrollable. The city was filled with battles between the military forces of aristocratic Roman families. The Lateran Basilica, the cathedral church of Rome and the seat of the Roman pontiff, had been destroyed in a fire. The Papal States had been entrusted to a team of three cardinals who were loosely controlling the territory. Papal military forces were fighting the Venetian army. Urban turned around and returned to Avignon.
Pope Gregory XI, just as French as the other six Avignon Popes before him, also planned to return the Papacy to Rome. He managed to pull that off, arriving in January of 1377. However, he died just over a year later, in March 1378. That led to the Western Schism.
Two Popes At Once
Romans rioted after Pope Gregory XI died, demanding that the next Pope had to be from Rome. But, there was no one in Rome remotely qualified for the job.
The cardinals elected Bartolomeo Prignano from Naples, who was the Archbishop of Bari. He took the throne as Pope Urban VI.
He had been a good administrator in the offices in Avignon, but he wasn't suited to be Pope. He was suspicious and prone to fits of rage. Most of the cardinals who had just elected him regretted their choice. The majority of them left for Anagni, a hill town southeast of Rome, where they had a new conclave and elected a rival Pope.
Pope #2, Robert of Geneva, took the name Clement VII. Since Rome was occupied and Avignon was all set up, he went to Avignon.
This was a mess. There had been Antipopes before, claimants to the Papal throne. But this time there were two Popes selected by the same group of church leaders. National leaders had a choice of two Popes.
|Avignon: Clement VII||Rome: Urban VI|
|France, Burgundy, Savoy, Aragon, Castile and León, Naples, Cyprus, Scotland, Wales||Denmark, England, Flanders, Holy Roman Empire, Hungary, Ireland, Norway, Portugal, Poland(-Lithuania), Sweden, Republic of Venice and other northern Italian city-states|
When each died, their faction selected a replacement. Boniface IX (1389-1404) in Rome and Benedict XIII (1394-1423) in Avignon.
When Boniface died in 1404, the 8 cardinals backing him offered to not elect a new Pope if Benedict would step down, although what was supposed to happen then? Benedict of course refused. So now there was a new Pope Innocent VII in Rome.
|Clement VII (1378-1394)||Urban VI (1378-1389)|
|Benedict XIII (1394-1423)||Boniface IX (1389-1404)|
|Innocent VII (1404-1406)|
You may be ready for a pastis. It's an anise-flavored apéritif especially popular in southeastern France. It's a liqueur, bottled with sugar.
It's served neat with a small jug of water. You blend it to your taste, usually 5:1 water to pastis. The drop in alcohol content from the original 40–45% brings the anise essential oils out of solution, and the drink changes from a dark but transparent yellow to a cloudy light yellow.
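The dilution arithmetic is simple; a quick sketch, assuming the usual 5:1 serving and the 40–45% bottled strength mentioned above:

```python
# Five parts water to one part pastis dilutes the alcohol sixfold.
water, pastis = 5, 1                      # the customary 5:1 serving ratio

for bottled_abv in (40, 45):              # percent alcohol by volume, per the label
    in_glass = bottled_abv * pastis / (water + pastis)
    print(f"{bottled_abv}% pastis at 5:1 -> {in_glass:.1f}% in the glass")
```

So what you actually drink is roughly 7% alcohol, comparable to a strong beer.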
Pisa Adds a Third Pope
Gregory XII had already created four new cardinals in 1408, two of them his nephews, despite promising before his election to keep things as they were. The existing cardinals complained, and refused to attend the service where the four new cardinals would be installed. One of the protesting cardinals left for Pisa.
The Pope sent one of his nephews, along with military backup, to seize the absent cardinal and bring him back by force. That caused seven more cardinals to leave immediately, followed by another who was just arriving.
The protesting cardinals published a manifesto calling for a general council in Pisa in 1409, hoping to end the schism. Pope Benedict XIII in Avignon called for a council of his own. The Roman Pope Gregory XII said that he was also going to hold an independent council, but he fled with the one cardinal that remained faithful to him.
The Universities of Paris, Oxford, and Cologne, along with many distinguished scholars of church law, sent delegates to Pisa in support of the protesting cardinals. Royal leaders, on the other hand, cared little either way, as they no longer relied on the support of the rival Popes.
The College of Cardinals, the supporters of both Rome and Avignon, met for the Council of Pisa in March 1409. They hoped to end the Western Schism by deposing both Benedict XIII in Avignon and Gregory XII in Rome. As there was no undisputed Pope to call for a general council, the Holy See should be considered vacant, and it was up to them to elect an undisputed Pope. Present were 22 cardinals, 80 bishops, and 4 patriarchs, plus representatives of 100 bishops and 87 abbots who couldn't make it to Pisa, plus 300 doctors of theology or canon law, plus ambassadors from all the Christian nations. They were meeting under the presidency of a cardinal who had been named before the Schism had begun. What could possibly go wrong?
"This will be easy", they figured. "Accuse them both of schism and manifest heresy, they will step down, and we'll elect a single replacement."
It did not go well.
When they finally got around to reading the document listing all the charges against the two Popes, it took over three hours.
Two months later the Patriarch of Alexandria read the group's decision. Benedict XIII and Gregory XII were schismatics, notorious heretics, guilty of perjury, violators of promises, and unworthy of the office of Pope. They were forbidden to consider themselves as Supreme Pontiffs, and everything they had done was annulled. The seat of the Holy See was declared to be vacant.
Everyone at the council was happy, but Benedict and Gregory had no plans of going along with the decision.
The cardinals in Pisa then elected a new Pope, Alexander V.
Now there were three Popes — Benedict XIII in Avignon, Gregory XII in Rome, and Alexander V in Pisa.
One common complaint about the Council of Pisa (in addition to its adding a third Pope) was that it wasn't necessarily authoritative. If bogus Popes appoint a group of bogus cardinals, can those bogus cardinals really elect a true Pope to replace the two bogus Popes?
But for now, the Pisa line of Popes continued. England, France, Bohemia, Portugal, parts of the Holy Roman Empire, and some Italian city-states recognized the Popes of Pisa.
|Clement VII (1378-1394)||Urban VI (1378-1389)||Alexander V (1409-1410)|
| ||Boniface IX (1389-1404)|| |
|Benedict XIII (1394-1423)||Innocent VII (1404-1406)|| |
| ||Gregory XII (1406-1415)||John XXIII (1410-1415)|
The Council of Constance in 1414-1418 finally straightened things out. Somehow they managed to make the case that all three of the current Popes were invalid, and they were going to elect one that would be the One True Pope. That was Martin V.
Pope Benedict XIII (Avignon) simply refused to accept the decision of the Council of Constance. The Holy Roman Emperor sent representatives, joined by diplomats of several nations, but Benedict said he was still the Pope. When Martin V was elected as what was meant to be the one and only Pope, Benedict fled to a castle in the Kingdom of Aragon. Only the Kingdom of Aragon recognized his claim. He stayed there, insisting he was the Pope, until he died in 1423.
Pope John XXIII (Pisa) attended the Council of Constance, at least for a while. Things quickly turned against the idea of a third Pope. Disguised as a postman, John XXIII fled downriver along the Rhine, accompanied by Frederick IV, Duke of Austria. He was forcibly returned to the council, where he was tried and found guilty of heresy, simony, schism, and immorality. Gibbon's The Decline and Fall of the Roman Empire reports: "The more scandalous charges were suppressed; the vicar of Christ was accused only of piracy, rape, sodomy, murder and incest." Gibbon was always grinding his anti-religion ax, but you do wonder what all John XXIII got up to. John was imprisoned in Germany, but ransomed by the Medici.
Gregory XII (Rome) was the only one with a graceful end. He was convinced to resign, and retired from public life. His was the last Papal resignation until Benedict XVI in 2013.
Another evening, time for another pastis.
Avignon After the Popes
Avignon and the surrounding territory continued to be Papal possessions, until the French Revolution. The Hôtel des Monnaies, the Papal mint, was built in 1610.
However, the French Kings maintained a large military garrison directly across the Rhône at Villeneuve-lès-Avignon. From the 1400s on, the French Kings increasingly treated Avignon as if it were part of their kingdom, though it formally remained Papal territory. Cardinal Richelieu was exiled to Avignon in 1618.
During the French Revolution, France annexed the Papal territories. Avignon and the Comtat-Venaissin were combined with the former principality of Orange in 1793, forming the French département of Vaucluse. In 1814, the Pope recognized the annexation of Avignon.
Continue visiting Avignon:
Or, somewhere else in France: | <urn:uuid:c8fb5200-b2c6-4100-8a0e-9292db0211ce> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/france/avignon/avignon-papacy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00189.warc.gz | en | 0.975641 | 4,224 | 2.71875 | 3 |
What is a commercial vehicle?
Commercial vehicles are, broadly speaking, vehicles used for business across a range of commercial applications. Commercial motor vehicles come in many different types, from utes to company cars to passenger vehicles like buses.
Light commercial vehicles typically have smaller payloads and may incorporate features normally seen only in passenger vehicles, such as rear seats, double-cab layouts, and luxury front seats.
The key is finding the commercial vehicle that best matches the intended business purpose and any legal constraints such as emissions or the vehicle’s gross vehicle weight rating (GVWR or GVW) for the area the commercial vehicle will be operating in. For example, some streets don’t allow commercial vehicles over a certain weight, and in some cities only electric vehicles are permitted.
Other considerations for finding the right commercial vehicle include the type of goods it will transport, which is particularly important for goods vehicles; the available cargo space, or load space; the load bay (or load area); and whether the vehicle will be used for towing.
Modifications to the vehicle may be needed if it will be used to transport hazardous materials according to the rules issued by the local transportation authorities.
Some commercial vehicles may only be used off-road or part time for personal use, in which case they may be eligible for rebates (for tax purposes). Check with your local tax authority for more information.
The best way to manage your commercial vehicles is with fleet management software. Fleet management software helps you to right-size the number and type of commercial vehicles you run, making sure you find the right balance between economy and vehicles that are suited to the task. It can also help you to dispatch the most fuel efficient vehicle for the job, reducing your overall cost per mile and improving your profitability. | <urn:uuid:222ce714-dbdb-4e07-951f-19b95e9fb0d9> | CC-MAIN-2022-40 | https://inseego.com/au/resources/fleet-glossary/what-is-a-commercial-vehicle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00189.warc.gz | en | 0.935845 | 373 | 2.703125 | 3 |
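To make the dispatching idea concrete, here is a minimal sketch of choosing the most fuel-efficient available vehicle that satisfies a job's cargo and weight constraints. It is not tied to any particular fleet management product; the vehicle fields and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Vehicle:
    vehicle_id: str
    gvw_kg: int          # gross vehicle weight rating (GVW)
    cargo_m3: float      # available cargo space
    km_per_litre: float  # fuel efficiency
    available: bool = True

def dispatch(vehicles: List[Vehicle], cargo_m3: float,
             max_gvw_kg: Optional[int] = None) -> Optional[Vehicle]:
    """Pick the most fuel-efficient available vehicle that fits the job,
    optionally respecting a street's weight limit."""
    candidates = [
        v for v in vehicles
        if v.available and v.cargo_m3 >= cargo_m3
        and (max_gvw_kg is None or v.gvw_kg <= max_gvw_kg)
    ]
    # Among the feasible vehicles, prefer the one with the best economy
    return max(candidates, key=lambda v: v.km_per_litre, default=None)
```

A real system would also weigh distance to the pickup point, driver hours, and emissions-zone rules, but the core trade-off between feasibility and economy looks like the above.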
Current cyber threats are varied, ranging from sensitive data and infrastructure infiltration to brute force and spear-phishing attacks. Despite their variations, one thing is common about cyber threats: they do not discriminate between organizations and individuals, or between small companies and big enterprises, when looking for targets. What exactly are these cybersecurity threats making headlines today?
What are Cybersecurity Threats?
A cyber threat, otherwise known as a cybersecurity threat, refers to malicious activity seeking to damage or steal data. Potential threats, including data breaches, computer viruses, malware, and denial-of-service attacks, disrupt digital life.
A recent post published by SentinelOne on the history of cybersecurity highlighted the first case of cybersecurity threat. Bob Thomas discovered that a computer program could move across a network, leaving a small trail wherever it went. Bob christened the program creeper and designed it to travel between Tenex terminals.
In a separate incident, Ray Tomlinson, the inventor of email, created the first computer worm by designing the Creeper program to self-replicate. It is striking to hark back to where it started and where we are now, in an era of complex cyber threats such as fileless malware, state-backed attacks, and sophisticated ransomware. It is remarkable that the antecedents of cybersecurity threats were not actively malicious software and did not damage any sensitive information. The research foundations of cybersecurity, however, took a quick turn toward criminality.
Today, the term cybersecurity threat exclusively describes information security issues. Malicious actors mount cyber threats and attacks against targets in cyberspace. The attacks can be severe, potentially threatening businesses and human lives.
Why are Cybersecurity Threats Such a Big Deal?
Needless to say, cyber threats and attacks matter so much today. They can disrupt system operations, adversely impact personal devices, computers, and IoT devices, making information and services unavailable to authorized users. In addition to that, cyber attacks can result in the loss of valuable information, including medical records, financial data, and personally identifiable information (PII).
What's worse, cyber threats can adversely affect critical infrastructure. Cyber attacks can potentially cause electrical blackouts, lock up pipelines, or expose national security secrets. Meanwhile, it remains practically impossible to imagine what life would be like without digital technology. It is not an overstatement to say that cyber threats can affect the functioning of life in a society that is highly dependent on technology.
Information storage on mobile phones and laptops makes it easier for malicious actors to find an avenue into a corporate computer network. Unquestionably, the volume of data is practically exploding by the day. Statistics show that the amount of data created, captured, copied, and consumed globally reached a new high in 2020 and will exceed 180 zettabytes by 2025. Organizations are increasingly collecting user information and storing it in public networks, exposing it to vulnerabilities.
Attacks are Becoming Sophisticated
Hackers are devising new ways and tactics to launch sophisticated and frequent threats. The Microsoft Digital Defense Report reveals that “threat actors have rapidly increased in sophistication over the past year, using techniques that make them harder to spot and that threaten even the savviest targets.” A case in point is the nation-state actors engaging in new reconnaissance techniques that increase their chances of compromising high-value targets. In other incidents, criminal groups targeting enterprises migrate their infrastructure to the cloud to hide their activities among legitimate services.
Noteworthy Microsoft report findings include: ransomware has become the most common reason behind incident response engagement; nation-state actors are frequently using credential harvesting, malware, and VPN exploits; IoT threats are constantly expanding and evolving, with the first half of 2020 experiencing an approximate 35 percent increase in total attack volume compared to the second half of 2019.
Attacks are Increasingly Becoming Prevalent
On top of attack sophistication, cybersecurity threats are becoming more prevalent. An article published on UpGuard mentions that both “inherent risk and residual risk is increasing, driven by global connectivity and usage of cloud services, like Amazon Web Services, to store sensitive data and personal information.” The post further adds that the widespread poor configuration of cloud services paired with more sophisticated cyber criminals means the risk organizations face from successful cyber attacks is rising.
It is apparent that some industries are more vulnerable to attacks than others simply because of the nature of their business and the value of their information assets. Recent data breach news shows a considerable upsurge in attacks from increasingly common sources in the workplace. On top of this, the COVID-19 pandemic, which triggered sudden and unplanned work-from-home arrangements, is progressively opening new inroads for cybersecurity threats.
Organizations are Still Operating Below the “Security Poverty Line”
Most organizations and government agencies still operate without proper security practices in place, making them vulnerable to cybersecurity attacks. Despite the increasing data breach incidents, some small businesses spend nothing at all to protect themselves from attacks. Other organizations risk their online safety by operating at or below the ‘security poverty line.’ Oblivious of the approaching danger, enterprises still expose identity and personal information to the web via cloud services.
We can all acknowledge that gone are the days of simple perimeter security tools, like firewalls and antivirus, being the sole security measures for an enterprise. It turns out that C-level executives and business leaders can no longer leave information security responsibility to security personnel.
Regulations Mean You Cannot Ignore Cyber Threats
The General Data Protection Regulation (GDPR), PCI DSS, HIPAA, FISMA, and GLBA are some of the stringent regulations that make clear organizations cannot ignore cybersecurity. Governments and industries around the world are bringing more attention to cyber threats and attacks. One way they do this is to enact regulations and require all organizations to comply with their requirements. Regulations typically compel cyber attack victims to reveal details about a data breach, appoint a data protection officer, obtain subject consent to process or share user information, and implement controls to enhance data privacy.
Cybercriminals commit their malicious acts for different intents. Mainly, they attack organizations for financial gains. A desire to steal money continues to be the principal motivator behind cyber attacks, according to Verizon’s annual Data Breach Investigations Report. Key takeaways from the report indicate that 86 percent of data breaches are financially motivated, up from 71 percent in 2019. In addition to that, 67 percent of breaches resulted in credit card numbers theft. Other crucial data targets include social security numbers and login credentials.
Typically, financially motivated data breaches include direct theft of victim’s money by hacking their bank accounts or stealing financial information. Besides that, malicious actors can make money by selling stolen credentials on the dark web. A look into the pricing of stolen identities for sale on dark web marketplaces shows that credit card details cost between $0.11 to $986 while hacked PayPal accounts sell between $5 and $1,767.
Besides financial gain, cybercriminals launch attacks for espionage, ideology, and other secondary motivations, such as the desire to steal intellectual property and trade secrets. Security experts and agencies have accused criminals of meddling in political and corporate affairs, which forms the modern-day version of espionage.
Other than espionage, some cyber actors are motivated by anger. In this case, they leverage their skills and hacking tools to target companies directly. Infamous hacker groups, like Anonymous, also use their expertise to compromise large organizations and call the public's attention to something the hacktivists believe is a crucial issue. Causes such as freedom of information, human rights, or religious beliefs drive hacktivism.
Prevalent Cybersecurity Threats
Cybercriminals and malicious insiders have an abundance of techniques and tactics to deliver attacks. Some of the popular types of attacks and top cybersecurity threats include:
- Malware: also known as malicious software, is an umbrella term covering viruses, worms, trojans, and other harmful computer programs attackers use to wreak destruction and gain illegal access to systems and information.
- Phishing and Spear-Phishing: phishing attacks are a means to lure potential targets into divulging information, such as credentials and bank details. Attackers combine deception and social engineering attacks, such as urgent requests or scare tactics in phishing emails, to persuade victims to take action, such as opening malicious links or attachments. On the other hand, spear-phishing is a sophisticated and more elaborate version of phishing. Unlike phishing attacks that target many victims, spear-phishing targets specific individuals or organizations seeking unauthorized access to systems and data. Cyber actors frequently use social media sites to collect target’s information needed to personalize messages and impersonate users.
- Ransomware Attacks: ransomware is a form of malicious program that encrypts the victim's files. Ransomware attackers send a malicious link that installs malware once users click on it. They display a message demanding a ransom from victims to restore access to systems and data. Typically, hackers show instructions for victims to pay a fee and get a decryption key. Ransomware costs range from a few hundred dollars to thousands, primarily payable in Bitcoin.
- Internet of Things (IoT) Exploits: currently, there are security vulnerabilities in millions of Internet of Things (IoT) devices. These flaws could potentially allow cybercriminals to knock devices offline or control them remotely. For instance, various vulnerabilities affect TCP/IP stacks responsible for communication in IoT devices.
- DDoS and DoS: Denial of service (DoS) attacks flood systems with traffic, making resources unavailable to authorized users. Conversely, a distributed denial of service (DDoS) attack uses multiple devices or machines to flood a targeted IT resource. Both DoS and DDoS attacks overload networks, servers, or web applications to disrupt regular services.
Cybersecurity Best Practices
Businesses and individuals alike should relinquish the "not much to steal" mindset regarding cybersecurity threats. It is entirely out of sync with today's threat landscape to think that cybercriminals will pass you over because you run a small business. The reality is that 43 percent of cyber attacks still target small businesses, and 60 percent of data breach victims go out of business within six months. Individuals are also targets, often because they keep their personal information on insecure mobile devices and public clouds.
How can your business avoid becoming the next victim of an attack? Here is a list of cybersecurity best practices that businesses and individuals can implement today.
- Use security tools: One of the first lines of defense in cybersecurity is a firewall and antivirus programs. The Federal Communications Commission (FCC) recommends installing a security tool like a firewall to prevent outsiders from accessing sensitive information on a private network. The FCC also cautions organizations to ensure that their operating system's firewall is enabled. With employees working remotely, businesses should ensure they enable and update their security tools. They can deploy security technologies based on machine learning and artificial intelligence to automate threat detection and response. It is best to set antivirus software to run scans regularly and after each update.
- Cybersecurity awareness training: organizations should train employees in security risks and principles. They can establish security practices and policies for employees, such as requiring strong passwords, and implement appropriate system and internet use guidelines that spell out penalties for non-compliance with company policies.
- Endpoint security: businesses and individuals should operate secure machines. In that case, they should ensure devices have the latest security software, updated web browsers, and patched operating systems that combat malware and other online attacks.
- Backup data: make backup copies of crucial data and sensitive information. If possible, implement an automatic backup solution that stores copies offsite or in safe cloud locations.
- Develop and update security policies: small businesses should shift from operating by word of mouth and intuitional knowledge to documenting protocols and procedures in cybersecurity. Resources such as the FCC Cyberplanner 2.0 and SANS Security Policy Templates provide a starting point for security documentation.
- Access control: organizations can improve their cybersecurity postures by limiting user access to sensitive information and systems and restricting their authority to install online applications. In this case, no one employee should have access to all data systems. Instead, companies should give users access to specific resources and information that they need for their job. Besides that, insiders should not be allowed to install applications without the IT department’s approval. At the same time, users should use unique, strong passwords to access systems and online accounts to combat insider threats. Businesses can implement robust access control mechanisms, preferably by implementing multi-factor authentication that requires additional information beyond usernames and passwords to grant access. | <urn:uuid:729ad4d3-5576-4aac-8675-c27f6827c025> | CC-MAIN-2022-40 | https://cyberexperts.com/cybersecurity-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00390.warc.gz | en | 0.92587 | 2,623 | 3.125 | 3 |
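To make the backup advice above concrete, the sketch below creates a timestamped zip archive of a directory using only the Python standard library. It is a minimal example with assumed paths and naming; a production solution would also encrypt the archive and ship it offsite or to the cloud.

```python
import pathlib
import shutil
import time

def backup(source_dir: str, backup_root: str) -> str:
    """Create a timestamped zip archive of source_dir under backup_root
    and return the path of the archive."""
    pathlib.Path(backup_root).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = pathlib.Path(backup_root) / f"backup-{stamp}"
    # make_archive appends the .zip extension and returns the archive path
    return shutil.make_archive(str(target), "zip", source_dir)
```

Scheduling this with cron (or Windows Task Scheduler) gives a basic automatic backup routine.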
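The multi-factor authentication mentioned above commonly takes the form of time-based one-time passwords (TOTP, RFC 6238), the codes generated by authenticator apps. The following standard-library sketch shows how such a code is derived; a real deployment would use a vetted library and secure secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, period=30):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

The server and the user's device share the base32 secret; both compute the same code for the current 30-second window, so a stolen password alone is not enough to log in.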
An Attacker is an individual, group, organization, or government that executes an attack. Not to be confused with a vulnerability. An attacker in the physical world might be someone who jumps out of the bushes to rob you with a knife. Whether you are vulnerable to the threat (the actor holding a knife) depends on whether you have significant martial arts experience. If you are a black belt in karate, you might not be vulnerable to the threat. If the threat is a gun instead of a knife, perhaps you become vulnerable.
In the same way, online threat actors include hacker organizations that attack with many different tools and methods. Whether you are vulnerable depends on your and your company's preparations. Are your systems patched? Do you train and govern your staff on common attack methods so threats can be spotted and removed before causing a problem?
Synonym: Threat, Threat Actor
Source: Barnum & Sethi (2006), NIST SP 800-63 Rev 1 | <urn:uuid:5d3f112d-3821-4fc3-860f-fed4e6436f82> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/attacker/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00390.warc.gz | en | 0.947516 | 198 | 3.140625 | 3 |
Data Sovereignty and GDPR [Understanding Data Security]
Data sovereignty can cause confusion for many security professionals, so we are going to cover what it is and how it relates to your company’s data security.
Why is data sovereignty important? Data sovereignty is important because it regulates how data should be governed and secured, specific to the country where it was collected and not where the collector resides.
What Is Data Sovereignty?
Data sovereignty is the requirement that information is subject to the laws and regulations of the location where it was collected and processed.
Organizations face several problems in interpreting that requirement. Sovereignty is a state-specific regulation requiring that information collected and processed in a country remain within the boundaries of that country and adhere to the laws of that country.
This can produce complex, interconnected, and conflicting laws that companies must follow. For example, a company collecting information in the EU might use Microsoft Azure or Google Cloud servers. Both are U.S. companies governed by U.S. law, which means they could be subject to legal requests from the U.S. government to disclose that data, a potential violation of EU data privacy laws.
In a business world where international commerce and cloud storage are the norms, these types of situations can put organizations in incredibly challenging conditions.
Additionally, some terminology is often conflated with sovereignty:
- Data residency: Residency often refers to instances where a business or other organization stores information in a specific geographical location to find favorable regulatory compliance. This could include shifting locations to show that most of their business operations are in another country for financial reasons.
- Data localization: In the strictest of terms, localization refers to the requirement that data created in a specific location remains in that location. This can include compliance regulations, such as the European Union’s General Data Protection Regulation (GDPR), over personal data related to a country’s citizens that require organizations to keep that information in local servers and limit or forbid transmission outside of national borders.
- Indigenous data sovereignty: A branch of sovereignty, indigenous sovereignty applies specifically to the rights of indigenous nations in the United States, Canada, and Australia (among other countries) to manage the privacy of their own information.
Landmark Cases Establishing Data Sovereignty
The emergence of sovereignty as a legal concept on a global scale can be traced to the PRISM program, an observation and clandestine information collection program operated by the National Security Agency that was exposed by Edward Snowden.
PRISM and the U.S. PATRIOT Act
The National Security Agency (NSA) observes and collects information, including texts, images, videos, phone calls, social network details, and video calls, across various platforms and providers. Beyond the program's dubious legality, the U.S. was also collecting information from foreign nationals caught in the net.
Alongside the PRISM program, the U.S. PATRIOT Act gave the U.S. government the right to collect data from any server located physically within U.S. borders, which often included foreign information governed by different types of privacy and security laws.
Microsoft v. The United States
While this case didn’t set any standards for data sovereignty into law, it did start the conversation. Another case, Microsoft Corp v. The United States served as a landmark for the concept.
In 2013, the U.S. Department of Justice sought to collect information from Microsoft servers concerning drug trafficking cases under investigation. Microsoft refused because the information was stored in a center in Ireland, outside (according to Microsoft) U.S. jurisdiction and subject to Irish data laws.
Microsoft lost the initial legal challenge but appealed to the 2nd U.S. Circuit Court of Appeals, which disagreed with the findings and sent the case to the U.S. Supreme Court, during which Congress passed the CLOUD Act. This law stated, essentially, that a U.S. company must turn over information related to law enforcement regardless of where that information is stored. However, it added specific requirements for protecting the information of foreign nationals whose information exists in servers operated by U.S. companies in non-U.S. jurisdictions, specifically in cases where the U.S. has data-sharing laws in place with these countries.
The CLOUD Act also set standards for foreign countries seeking access to information housed in the U.S., pending oversight by U.S. courts and demonstration of legal and evidentiary merit.
How Does Data Sovereignty Relate to the GDPR?
The GDPR was enacted in participating EU countries in 2018, and set strict standards for protecting privacy and ownership of consumer information. These laws also covered sovereignty.
Under the GDPR, any information collected from citizens of the EU must reside in servers located in EU jurisdictions or in countries with a similar scope and rigor in their protection laws. This way, the information will fall under the strict security laws of the EU and citizens will remain under that protection.
Specifically, this law applies to both processors and controllers alike, which means that both companies collecting information and those offering services for data collection fall under this law.
What does that mean for providers and businesses outside of the EU? If you operate in the EU or serve businesses by collecting information from EU citizens, you fall under the GDPR. Violation of this regulation could result in fines of up to 4% of your total global annual revenue.
How To Approach Data Sovereignty With Cloud Service Providers
Needless to say, if you are working with an international customer base, or operating in foreign countries, then data sovereignty is an important aspect of your business.
With that in mind, there are several factors your organization should consider:
- Locations of servers: There should be clear and agreed-upon locations for storage and processing. Some cloud providers will attempt to divide cloud coverage by “region” to maintain flexibility, so the more specific these providers can be, the better.
- Local jurisdiction and privacy laws: Your organization should have a good understanding of the privacy and security laws applicable to that information. These laws could impact how that information is governed going into or coming out of that country, and whether those types of file transfers are even legal.
- Map data ownership and consumer rights: Alongside privacy and security laws, you should have a good understanding of consumer rights. For example, information protected by the GDPR gives ownership to the consumer, which means that these individuals can demand their information be provided to them or deleted. Regulations like the GDPR—or more recently the California Consumer Privacy Act (CCPA)—place strict limits on how that information can be processed and used.
- Determine information governance tools: Any cloud or service provider should also provide critical information governance features like comprehensive audit logs, retention, remediation tools, and advanced analytics.
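The "comprehensive audit logs" mentioned above usually imply tamper evidence. A common technique is hash chaining, where each log entry commits to the hash of the previous entry, so any later modification breaks verification. Below is a minimal sketch of the idea; it is not how any particular vendor implements it, and real systems also write to append-only or offsite storage.

```python
import hashlib
import json
import time

class AuditLog:
    """A toy tamper-evident audit log using SHA-256 hash chaining."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "event": event, "prev": self._last_hash}
        payload = json.dumps(record, sort_keys=True)
        record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails."""
        prev = self.GENESIS
        for rec in self.entries:
            if rec["prev"] != prev:
                return False
            payload = json.dumps(
                {k: rec[k] for k in ("ts", "event", "prev")}, sort_keys=True
            )
            if hashlib.sha256(payload.encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

Because each hash depends on everything before it, an attacker who alters one entry must recompute every subsequent hash, which an offsite copy of the latest hash makes detectable.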
Compliant and Secure Data Management With Kiteworks
The Kiteworks platform provides technology- and industry-agnostic security controls that meet the governance, compliance, and security requirements of almost any application. Features like immutable logs, secure file transfer, and business analytics support businesses juggling complex regulations while maintaining enterprise operations.
To support such operations, the Kiteworks platform has the following features:
- Security and compliance: Kiteworks utilizes AES-256 encryption for data at rest and TLS 1.2+ for data in transit. The platform’s hardened virtual appliance, granular controls, authentication and other security stack integrations, and comprehensive logging and auditing enable organizations to protect sensitive data while ensuring efficient governance and compliance.
- Secure file sharing: Kiteworks supports secure file sharing for third party risk management (TPRM), enabling organizations to share confidential data, such as personally identifiable information (PII), protected health information (PHI), and intellectual property (IP), with third parties while remaining in compliance with industry and government regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), Federal Information Processing Standards (FIPS), and Cybersecurity Maturity Model Certification (CMMC), among others.
- SIEM integration: Organizations can keep their environments secure by integrating metadata from sensitive content communications with security information and event management (SIEM) data for single-pane-of-glass alerts, logging, and event response. Integrations include IBM QRadar, ArcSight, FireEye Helix, and LogRhythm, among others. Kiteworks also integrates with the Splunk Forwarder and Splunk App.
- Audit logging: Kiteworks enables immutable audit logging, enabling organizations to trust that they can detect attacks sooner while maintaining the correct chain of evidence to perform forensics. Since the platform merges and standardizes metadata from multiple sensitive content communication channels, its unified Syslog and alerts save security operations center (SOC) teams crucial time and help compliance teams prepare for audits.
- Single-tenant cloud environment: File transfers, file storage, and access to files occur on a dedicated Kiteworks instance, deployed on-premises, on Infrastructure-as-a-Service resources, or hosted in the cloud by the Kiteworks Cloud server. This means no shared runtime, databases, repositories, or resources, and no potential for cross-cloud breaches or attacks.
- Data visibility and management: The CISO Dashboard in the Kiteworks platform gives organizations an overview of their data: where it is, who is accessing it, how it is being used, and whether it is compliant. Help your business leaders make informed decisions, and your compliance leadership maintain regulatory requirements.
Get more details on how Kiteworks enables organizations to manage data sovereignty, centralizing metadata for all sensitive content communications in one pane of glass by scheduling a custom demo today. | <urn:uuid:6ad4efb0-1ac5-44b8-82f1-f7f7853fde36> | CC-MAIN-2022-40 | https://www.kiteworks.com/regulatory-compliance/data-sovereignty-gdpr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00390.warc.gz | en | 0.922323 | 1,977 | 2.921875 | 3 |
Are lean Six Sigma principles different from those of the “original” Six Sigma and lean methodologies? In this post we’ll find out the answer.
Lean Six Sigma Principles: An Overview
Lean Six Sigma principles are unsurprisingly derived from both lean and Six Sigma.
These principles include the lean principles:
- Define value. You can’t create value unless you first define it. In the lean methodology, value is equal to what a customer will pay for. Maximizing value therefore means maximizing the quality of the products and services customers will pay for.
- Map the value stream. Value stream mapping is a type of business process mapping that analyzes the steps involved in creating value. Much like a supply chain map, this map helps managers and practitioners understand all the steps and resources involved in creating value for the end customer.
- Create flow. Flow refers to reconfiguring business processes and maximizing throughput. Business process mapping, a tool discussed below, is one way to do this.
- Establish pull. Reducing inventory is one way to create a pull-based system. These are systems that maximize throughput, follow just-in-time work methods, and reduce inventory, which is considered waste.
- Pursue perfection. Pursue perfection is the idea that continual improvement should be essential in any business process. This is discussed in more detail below.
They also focus on the core concepts of Six Sigma, such as reducing variation and defects.
Below we will identify and explore a few more of these principles in detail.
Reducing waste is one of the core principles of lean.
Waste can come in several forms and it can be measured.
According to the lean methodology, waste can include:
- Irrelevant labor
- Unnecessary transportation
- Excessive inventory
By reducing these types of waste, lean practitioners hope to minimize investments, improve employee productivity, enhance quality, and improve outcomes.
Minimizing Process Variation
According to Six Sigma, reduction in process variation will reduce defects and drive the other improvements covered above, such as performance improvements and increased efficiency.
In order to reduce variation, and therefore waste and errors, Six Sigma practitioners will use tools such as DMAIC and DMADV. These two data-driven techniques are intended to improve existing processes and create new ones.
The ultimate goal is to reach the Six Sigma level, or 3.4 defects per million opportunities (DPMO).
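As a rough illustration of how the DPMO metric works, the calculation can be sketched in a few lines. The inspection counts below are hypothetical:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical inspection data: 12 defects found across 4,000 invoices,
# each invoice having 5 fields that could be wrong (5 opportunities).
rate = dpmo(defects=12, units=4000, opportunities_per_unit=5)
print(f"{rate:.0f} DPMO")  # 600 DPMO, far above the 3.4 DPMO Six Sigma target
```

A process at the Six Sigma level would score 3.4 DPMO; most real processes start orders of magnitude higher.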
Minimizing defects is, as mentioned, one of the key aims of the Six Sigma system. Although it is not typically discussed in lean, Lean Six Sigma often recognizes that minimizing defects also minimizes waste.
By minimizing variation as mentioned above, defects can also be reduced, which can result in improvements to:
- Customer satisfaction
- Product and process quality
- Return on investment
- Process turnaround time
The tools mentioned above, DMAIC and DMADV, are both used to minimize variation and defects. There are, however, quite a few other methods and techniques that are taught in Six Sigma training programs.
Lean Six Sigma will also introduce other tools, such as those that emphasize waste reduction.
Statistical Measurement and Analysis
Statistics is an integral part of Six Sigma.
Although it does have its place in lean, data science and statistical methods may find their way into Lean Six Sigma systems more frequently.
Using tools such as business process mapping, Lean Six Sigma practitioners can perform techniques that revolve around:
- Collecting data
- Measuring that data against benchmarks
- Analyzing that information
- Redesigning business processes
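A minimal sketch of the collect-measure-analyze loop described above, using made-up cycle-time samples and an assumed benchmark and specification limit:

```python
from statistics import mean, stdev

# Hypothetical cycle-time samples (minutes) collected for one process step,
# measured against an assumed benchmark and upper specification limit.
samples = [4.1, 3.9, 4.4, 4.0, 5.6, 4.2, 3.8, 4.1]
target, upper_spec = 4.0, 5.0

avg, spread = mean(samples), stdev(samples)
out_of_spec = [s for s in samples if s > upper_spec]

print(f"mean={avg:.2f} (target {target}), stdev={spread:.2f}")
print("out of spec:", out_of_spec)  # [5.6] -> candidate for process redesign
```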
An important point to know, as we will see below, is that Lean Six Sigma systems vary. Their emphasis on statistics and data-driven methods will therefore also vary.
Continual improvement is a critical piece of both lean and Six Sigma.
The Japanese term, kaizen, refers to the idea that process improvement should be embedded as a permanent part of the organization.
Lean Six Sigma will use tools such as the ones already mentioned, DMAIC and DMADV, as well as other techniques designed to be repeatedly implemented. Through the continual application of these techniques, Lean Six Sigma practitioners can enhance quality and performance across a number of dimensions.
Importantly, Lean Six Sigma emphasizes different areas than the original methodologies. For instance, lean will focus on the continual reduction of waste. Six Sigma, on the other hand, will continually attempt to minimize variation and defects. Lean Six Sigma may attempt to focus on both.
Raising Customer Satisfaction
An emphasis on customer satisfaction is a priority for both systems.
This is understandable since customer satisfaction drives business revenue.
The difference between lean, Six Sigma, and Lean Six Sigma is that they will take different approaches to improving customer satisfaction.
Differences Between Lean and Six Sigma
Some organizations claim that there is very little difference between lean and Six Sigma training programs.
They suggest that the ideas behind lean are already incorporated into Six Sigma.
According to this school of thought, there is no need to focus on Lean Six Sigma unless you are in a certain industry such as the armed forces or the public sector.
Others, however, suggest that the core principles and emphases do differ, as we have outlined above.
When evaluating training programs, it is important to examine that program in detail to determine the content of the program and whether it aligns with your own goals. | <urn:uuid:2aa17432-bffb-496e-9d45-664396d536dd> | CC-MAIN-2022-40 | https://www.digital-adoption.com/lean-six-sigma-principles/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00390.warc.gz | en | 0.938115 | 1,142 | 2.859375 | 3 |
A technology control is a procedure or policy that provides reasonable assurance that the information technology used by an organization operates as intended, that data is reliable, and that the organization is in compliance with applicable laws and regulations.
In Information Technology, we use controls as a check on business processes and these can be physical (security cameras, badges, etc.) or logical (part of the software). The following is a very general example showing how a logical control works to support a business requirement and control the separation of duties.
Sue has rights only to change the application code in the local environment and should not have the right to change any code in the production environment. Similarly, Don is a tester and will have rights to test the application only. As part of the logical control, the system would function so that Sue doesn’t even see the button to migrate to production; accordingly, Don’s screen would not have the edit source code button. This control is defined both in the physical structure of the organization and in the computer logic of the system.
Industrial Incorporation of Technology Controls
With the increase in cybersecurity attacks, many industries have fallen victim, at enormous cost to organizations. A 2020 report on the rise of cybercrime by StanfieldIT projected the cost of cybercrime to grow to $6 trillion in the coming years.
The banking industry is one of the most sensitive targets for hackers and needs to keep its controls especially strong. A study performed by Schneiderdowns has detailed banking industry IT controls. This study classifies technology controls as general controls and application controls, where general controls include controls over data center operations and system software acquisition and maintenance.
Application controls, such as computer matching and edit checks, are programmed steps within application software; they are designed to help ensure the completeness, accuracy, authorization, and validity of transaction processing.
Types of Technology Controls
There are various technology controls that can be used to secure information. One of the most familiar is the firewall, classically defined by three requirements:
- All traffic from inside to outside, and vice versa, must pass through it.
- Only authorized traffic, as defined by the local security policy, is allowed to pass through it.
- The firewall itself is immune to penetration.
The firewall represents an indispensable technical component for network security concepts today, ranging from simple packet filters all the way up to powerful solutions with direct support for specialized industrial protocols. Firewall designs, which range from software packages for PCs to industrially hardened products in metal housings for use at the field level, are every bit as diverse. The current threat landscape plays a large role here, because it largely determines the correct technology and deployment location.
Firewalls alone have long since ceased to be promoted as sufficient, or as the only measure for securing information in industrial plants, nor are they synonymous with network security. Firewalls continue to represent core elements in the segmentation of networks and are therefore an essential part of any security strategy with respect to network security.
The requirements stated above can be met using various firewall architectures, such as circuit-level proxies and application proxies.
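The default-deny behavior behind the three requirements above can be sketched as a toy packet filter. The rule set and packet fields are hypothetical; real firewalls match on many more attributes (connection state, interfaces, payload):

```python
# Toy default-deny packet filter: anything not explicitly authorized
# by the local security policy is dropped.
ALLOW_RULES = [
    {"proto": "tcp", "dst_port": 443},  # HTTPS
    {"proto": "tcp", "dst_port": 22},   # SSH
]

def permitted(packet: dict) -> bool:
    for rule in ALLOW_RULES:
        if all(packet.get(field) == value for field, value in rule.items()):
            return True
    return False  # default deny

print(permitted({"proto": "tcp", "dst_port": 443}))  # True
print(permitted({"proto": "udp", "dst_port": 53}))   # False
```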
A virtual private network (VPN) is a technology that creates a safe, encrypted connection over the Internet from a device to a network. This type of connection helps ensure that sensitive data is transmitted safely. It prevents eavesdropping on network traffic and allows the user to access a private network securely. This technology is widely used in corporate environments.
A VPN complements a firewall: where a firewall protects the data stored locally on a device, a VPN protects data in transit online. To ensure safe communication on the Internet, data travels through secure tunnels, and VPN users must authenticate to gain access to the VPN server. VPNs are used by remote users who need to access corporate resources, consumers who want to download files, and business travelers who want to access sites that are geographically restricted. The restricted and secure network provided by a VPN is a safe way for organizations to communicate their sensitive and imperative data. Organizations should be aware of the costs and limitations before applying this technology to their workplace.
An IDS is a security system that monitors computer systems and network traffic. It analyzes traffic for possible hostile attacks originating from outside the organization and also for system misuse or attacks originating from inside.
Just as a firewall filters incoming traffic from the Internet, an IDS complements the firewall’s security. While the firewall protects an organization’s sensitive data from malicious attacks over the Internet, the intrusion detection system alerts the system administrator when someone tries to breach the firewall and gain access to any network on the trusted side.
There are different types of intrusion detection systems that can be implemented in an organization based on its requirements: network-based, host-based, perimeter, and VM-based intrusion detection systems. These can be used as per industry and organizational needs, depending on the infrastructure being used.
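To illustrate the idea (not any particular IDS product), signature-based detection can be reduced to matching traffic records against known-bad patterns. The signatures below are simplified stand-ins; real IDS rules are far richer than substrings:

```python
# Toy signature-based detection: flag traffic records that contain
# known-bad patterns and return the names of the matching signatures.
SIGNATURES = {
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../",
}

def inspect(request_line: str) -> list:
    return [name for name, pattern in SIGNATURES.items() if pattern in request_line]

print(inspect("GET /app?id=' OR 1=1 --"))       # ['sql_injection']
print(inspect("GET /static/../../etc/passwd"))  # ['path_traversal']
```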
Access control is the process of selectively restricting access to a system. It is a security concept that minimizes the risk of illicit access to a business or organization. Users are granted access permissions and certain privileges to a system and its resources. To be granted access, users must provide credentials, which come in many forms, such as a password, a keycard, or a biometric reading. Access control combines security technology and access control policies to protect confidential information like customer data.
Access control can be categorized into two types:
- Physical access control
- Logical access control
Physical Access Control- Physical access control limits access to buildings, rooms, campuses, and physical IT assets.
Logical access control- Logical access control limits connection to computer networks, system files, and data.
A more secure method of access control involves two-factor authentication. The first factor is a credential that the user presents; the second factor could be an access code, a password, or a biometric reading.
Access control consists of two main components: authentication and authorization. Authentication verifies that users are who they claim to be, whereas authorization determines whether an authenticated user should be allowed access to a system or denied it.
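The split between the two components can be sketched as follows; the user store, roles, and permissions are invented for illustration:

```python
# Hypothetical user store. Authentication answers "who are you?";
# authorization answers "what may you do?".
USERS = {"sue": {"password": "s3cret", "roles": {"developer"}}}
PERMISSIONS = {"edit_source": {"developer"}, "deploy_prod": {"release_manager"}}

def authenticate(user: str, password: str) -> bool:
    record = USERS.get(user)
    return record is not None and record["password"] == password

def authorize(user: str, action: str) -> bool:
    roles = USERS.get(user, {}).get("roles", set())
    return bool(roles & PERMISSIONS.get(action, set()))

assert authenticate("sue", "s3cret")        # identity verified
assert authorize("sue", "edit_source")      # permitted by role
assert not authorize("sue", "deploy_prod")  # authenticated but not authorized
```

Note that a user can authenticate successfully and still be refused an action their role does not grant, which is exactly the separation-of-duties behavior described earlier.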
Apart from basic technology controls, there are various practices, published in different research articles, that should be incorporated into routine activities to ensure the security and safety of information. It is up to each industry to adopt these regular actions and secure its organizations from escalating attacks.
What is electronic access control and its components?
Typical electronic access control systems installed at buildings where people live and work nowadays include access cards or key fobs as credentials, card readers to authenticate that the person has been granted access and an electronic controller. A standalone reader includes all components - the processor, the reader and the control in one unit.
Many of the benefits of electronic access control ensue from its components. Depending on the components, electronic access control systems not only let people in, but can also keep track of who got in and designate access based on need. System components enable person and role identification, approve access and keep people accountable.
1. Electronic Access Control Point
Although a door is the most common access control point, access can be controlled at windows or cabinet doors, too. In fact, any physical barrier that can be electronically controlled can serve as an access point. Turnstiles, parking gates, elevators and double doors can all be used as access control point components.
2. Access Cards and Key Fobs
You have probably experienced getting through a restricted door, where you had to present a card or a fob, enter a PIN code, or have your identity confirmed by a security guard with video surveillance. Access cards or fob readers have replaced old mechanical systems, in which you need to either unlock a door to let a familiar face in, or manipulate a mechanical device or an electric switch to open a door.
3. Keypads, Card Readers and Biometric Access Control
Electronic access card readers are usually placed near the main door frame of a building. They read the information in the credential and send it to the control panel for processing. If all is well (if the person does present verified credentials), the system lets them in.
If you work in high-risk areas, you might have experienced biometric access control, palm geometry or facial recognition tools that “read” your identity. These are seldom used in domestic and commercial buildings. In contrast, they are fairly popular at locations that require strict access control or double authentication.
A keypad requires you to share a passcode. A card reader grants access by placing the access card near the sensitive part of the reader. For biometric features, you need to have your eyes, your fingers or your palms authenticated.
4. Electronic Access Control Panel
The small computer that makes the decision of who gets in and who doesn’t is called an electronic access control panel. Often, it includes a standalone control panel unit. Advanced electronic access systems simulate a control panel from a desktop or a mobile app.
Electronic access control panels contain programmable processors which can assign specific roles, as well as time and date windows to persons authorized to exercise certain roles. Typical example include handymen, nannies or construction workers who need to enter occasionally, as well as remote visiting colleagues and freelance professionals working in a shared office space.
Why should one choose electronic access control over other forms of access control?
The advantages of choosing electronic access control over other forms are based on its versatile functionality. Older access control systems do not provide comprehensive options for identification, authorization, approval and tracking. Needless to say, because of their limited ability to verify who was responsible for an unauthorized access, and when and how it happened, conventional access control systems are less secure and reliable.
Here are some of the challenges of former mechanical access control tools that electronic access control successfully solves:
1. Say Goodbye to Lost or Stolen Keys
It is so easy to lose a mechanical key. If you recollect the number of times you’ve panicked after not being able to immediately find your keys, then it is easy to picture the advantages of a code that is accessible only by you and no one else.
Electronic access systems with smart cards can disable a lost card from a central controller. Even better - when the control panel of the electronic access system is integrated into a mobile app, you won’t have to spend a minute without safety, as you always have your smartphone at hand.
2. Time and Role-Based Access
A key grants access to the holder, whoever it is, anytime. You may have given the key to the tenant, but it’s so easy to lend it to another person who wouldn’t be normally expected to have access to a shared building at all times.
From the dashboard of an electronic access control, you can have overview of specific times and dates a person can be let into the restricted area. A group of persons, such as repair workers can get access once a month, a babysitter can get in from 8 to 10 pm, and the cleaning company can be authorized to enter Tuesdays only.
For coworking spaces, users can be distributed into groups based on their membership. This simplifies the use of conference rooms, individual offices, laundry or kitchen use, as well as special equipment stored in limited access areas.
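A time- and role-based check of this kind can be sketched in a few lines; the roles and hours below are hypothetical:

```python
from datetime import datetime

# Hypothetical schedule: each role maps to the hours (24h clock)
# during which the credential opens the door.
SCHEDULES = {
    "babysitter": range(20, 22),  # 8 pm to 10 pm
    "member": range(0, 24),       # any time
}

def may_enter(role: str, at: datetime) -> bool:
    return at.hour in SCHEDULES.get(role, range(0))

print(may_enter("babysitter", datetime(2024, 5, 3, 21, 15)))  # True
print(may_enter("babysitter", datetime(2024, 5, 3, 9, 0)))    # False
```

A real controller would also check the date (e.g. Tuesdays only for the cleaning company) and log every decision.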
3. Remote Access Control
A mechanical door lock cannot be controlled remotely. You need to either be present to notice a break-in or get a call from the police; not to mention the situations where you need to wake up in the middle of the night because a colleague in a rush forgot to lock the door, or the need to assign security managers to multiple remote locations for your corporation.
None of these is pleasant. If you don’t use standalone units but opt for networked electronic access, you will solve all of these problems at once. With modern equipment integrated into the electronic access control system, you can monitor, re-program and remove credentials from one central location.
4. Multiple Credentials
When a single credential is presented, the electronic access control system grants access. This makes it easy for intruders to copy or abuse the credentials in other ways. Multi-factor authentication, such as two-factor authentication granted only after you’ve entered a code on the keypad and had your finger scanned provides high-level safety in restricted environments.
5. Monitoring Reports
When someone tries to use a key in a lock and fails, you can never tell that the event happened, unless you catch them in the act. Someone can use a stolen key on several occasions, until the time is right or to get into a forbidden company area, thus causing damage more than once.
Since electronic access control systems record each transaction, you can keep an audit trail of all access attempts, and print out reports for specific areas, times and dates. When someone unauthorized gets in, you can react promptly by calling the law enforcement. The system can notify the police automatically or inform the person in charge of security that someone who isn't supposed to be in, is in.
Evaluating a system provider
Not all electronic access control providers use the same system, integrate all components or offer versatile contracts. Consequently, once you complete an initial risk assessment for the place you need to control, keep an eye on the following considerations:
- Can you integrate the new electronic access control into the old one?
- What are your business or residential needs - how many people and areas does the electronic access control need to serve?
- Do you need to install expensive equipment or use web-based or mobile app solutions?
- Does the software package include scheduled or random maintenance and a flexible contract?
- Are there off-site and on-site software solutions on offer?
- How easy is it to keep track of events?
- Is the provider available to serve your business internationally?
- How easy to install and how user-friendly is the electronic access control system?
Most advanced and customer-oriented electronic access businesses typically include highly scalable solutions, making it easy for the customers to feel safe, yet unencumbered with severe restrictions in an increasingly fast, connected and mobile world. | <urn:uuid:94291aa3-4f4e-4868-a4be-324881ecc1de> | CC-MAIN-2022-40 | https://www.getkisi.com/guides/electronic-access-control | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00390.warc.gz | en | 0.93887 | 1,598 | 3.3125 | 3 |
Google has open-sourced a differential privacy library that helps power some of its core products.
What is differential privacy?
Differential privacy is a method for analyzing data contained in a database and providing helpful insight from it, without disclosing the actual information contained in the data to the analysts. It’s meant to keep sensitive information usable but thoroughly anonymized.
“Differentially-private data analysis is a principled approach that enables organizations to learn from the majority of their data while simultaneously ensuring that those results do not allow any individual’s data to be distinguished or re-identified,” noted Miguel Guevara, Product Manager, Privacy and Data Protection Office at Google.
“This type of analysis can be implemented in a wide variety of ways and for many different purposes. For example, if you are a health researcher, you may want to compare the average amount of time patients remain admitted across various hospitals in order to determine if there are differences in care.”
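Independently of the Google library’s own API, the core idea behind a differentially private count can be illustrated with the classic Laplace mechanism: add noise calibrated to the query’s sensitivity and the privacy parameter epsilon. A minimal sketch:

```python
import random

def laplace(scale: float) -> float:
    """Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    # Removing one person changes a count by at most 1 (sensitivity 1),
    # so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    return true_count + laplace(1 / epsilon)

random.seed(0)
print(dp_count(1000, epsilon=0.5))  # prints a noisy count close to 1000
```

Averaged over many releases the noise cancels out, which is why aggregate insights stay useful even though no single release reveals any individual’s record.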
Using the library
Google uses the library to, for example, provide Google Maps users information about how busy a restaurant is over the course of the day.
The open-sourced library and the accompanying interface can be used by developers in a wide variety of sectors and for a wide variety of helpful features.
“Most common data science operations are supported by this release. Developers can compute counts, sums, averages, medians, and percentiles using our library,” Guevara shared, and noted that they designed the library so that it can be extended to include other functionalities such as additional mechanisms, aggregation functions, or privacy budget management.
“The real utility of an open-source release is in answering the question ‘Can I use this?’ That’s why we’ve included a PostgreSQL extension along with common recipes to get you started,” he added.
Why use this library?
The library has been released under the Apache License, meaning that developers can freely use it, distribute it, modify it and distribute modified versions of it under the terms of the license.
The difference between this Google differential privacy implementation and other existing ones is that this one can work with a database that includes multiple records per user.
Google privacy software engineer Damien Desfontaines pointed out additional advantages in a Twitter thread.
The most important thing about this release is that Google also provided a stochastic tester to help spot implementation glitches and problems that could make the differential privacy property no longer hold. This will allow developers to make sure their implementation works as it should.
Google is looking for feedback on the library from academic and technical communities around the world but, for the time being, will not accept pull requests. | <urn:uuid:8b8440aa-701a-4211-b56f-d4c67f0468f4> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2019/09/06/differential-privacy-library/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00390.warc.gz | en | 0.921317 | 630 | 2.515625 | 3 |
What is a Secure CDN and How Does it Work?
In recent years, there’s been plenty of hype surrounding CDNs. Once the exclusive domain of huge digital service providers like Facebook, Google, and Netflix, CDNs are now available to any organization (or even individual) that wants one.
However, there are several misconceptions about what CDNs are, what they do, and the difference between the CDNs available on the market.
What is a CDN?
CDN stands for Content Delivery Network, a geographically distributed group of servers that places website content as close as possible to the end user. The objective is to improve website content loading times for each visitor by reducing the time needed for communications to travel between the visitor’s device and the server.
The concept is straightforward. Although data moves very quickly, it is still constrained by distance—it’s faster to load a website hosted in your city than a website hosted on the other side of the world. The image below shows each of the distributed servers included in Link11’s CDN, which ensures users worldwide can access online content in a matter of milliseconds.
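How a CDN might pick the closest server can be sketched with great-circle distances. The edge locations below are invented, and real CDNs steer users with anycast routing or DNS rather than explicit coordinates:

```python
import math

# Invented edge locations (latitude, longitude). Real CDNs steer clients
# via anycast or DNS, but distance-based selection illustrates the idea.
EDGES = {
    "frankfurt": (50.1, 8.7),
    "new_york": (40.7, -74.0),
    "singapore": (1.35, 103.8),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = a, b
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(lat, lon):
    return min(EDGES, key=lambda name: haversine_km((lat, lon), EDGES[name]))

print(nearest_edge(48.8, 2.3))  # a visitor near Paris -> 'frankfurt'
```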
Why Use a CDN?
When they think of a CDN, most people focus on the additional speed it can provide a website. That is a huge benefit and perhaps the most common reason why organizations initially consider using a CDN.
However, CDNs provide four major benefits, all of which are essential for business websites:
#1: Improve website load times. Hosting content closer to the user ensures faster page load times. This has an obvious benefit for the user, but also two substantial benefits for the website owner:
- More people are inclined to ‘click away’ from slow loading websites, reducing traffic. This increases the website’s ‘bounce rate.’
- Search engines penalize websites that load slowly—partly due to the higher ‘bounce rate,’ but there is also a direct penalty that causes slow websites to rank lower in search results.
#2: Reduce bandwidth costs. Paying for the bandwidth needed to serve users is typically the single highest cost of running a website. For very large websites, the cost can be extremely high. By caching website content at each distributed server, a CDN limits the amount of data transfer required to run the website, reducing bandwidth costs.
#3: Higher availability. From excessive traffic to server failures, plenty of events can disrupt website uptime. CDNs use distributed servers with redundancy built-in, enabling a website to handle much more traffic than it usually could while maintaining close to 100% uptime—even in the event of a server failure.
#4: Stronger security. Enterprise-grade CDNs offer a host of security capabilities, from DDoS prevention to on-page improvements. As a result, websites hosted by these networks are more resilient to criminal activity.
Of course, the real answer to why organizations use a CDN is usually quite simple: cost.
Take an e-commerce business as an example. Think about how much each of the following situations could cost the business:
- An unknown number of users click away from the site due to poor loading times.
- Search engines penalize the website for poor performance, ranking it lower in search results.
- An unexpected flood of traffic causes a server to go offline, preventing an unknown number of buyers from reaching the website.
For a business that bases its entire trade on website traffic, any of these situations could cost thousands, hundreds of thousands, or even millions of dollars in lost revenue. And that’s just one example. Any organization that relies on website traffic—either directly or indirectly—for revenue, exposure, or any other critical business need has a huge incentive to use a CDN rather than relying purely on a static web host.
What Makes CDNs Faster than a Normal Website?
As already noted, the global distribution of CDN servers brings online content closer to the end user, which naturally reduces load time. However, this isn’t the only way that modern CDNs improve website speed.
Modern CDNs optimize the hardware and software involved in delivering online content in several ways. Most importantly, they use load balancing techniques to distribute processing tasks across the available computing resources. This improves efficiency, optimizes response times, and avoids the danger of overloading individual resources—particularly during periods of heavy traffic.
CDNs can also reduce the amount of data transferred to each visitor by dynamically reducing the size of files to be sent. As you’d imagine, the smaller the files, the faster the web page or application will load. This process is invisible to the end user, except in the speed with which their content loads.
What Makes a Secure CDN?
When CDNs first came onto the scene, they aimed purely to improve load times and availability for static web pages. After a while, they began to focus more on delivering streaming video and audio content and online media services like Netflix. However, the most powerful CDNs now include a much wider range of services and solutions to address the needs of modern organizations.
Some of the most common security capabilities of an enterprise CDN include:
Securing HTTPS websites with updated TLS/SSL certificates. This ensures an unbroken standard of authentication, encryption, and integrity, protecting both the organization and website visitors from several common security vulnerabilities.
Edge protection. A CDN sits between an organization’s web servers and outside users. This makes CDNs ideal for preventing known security threats before they reach an organization’s assets. A common way to do this is by using proxy rules to prevent common cyberattack techniques such as request smuggling.
DDoS mitigation. Due to the CDN’s position at the network edge, it is ideally placed to intercept DDoS attacks before they disrupt internal assets. Enterprise-ready CDNs protect organizations from DDoS by identifying malicious bot traffic within seconds and rerouting it away from the target organization.
Protect Your Online Presence with Secure CDN
While many solution providers offer pure-play CDNs, Link11 provides the industry’s leading Secure CDN.
Secure CDN uses Link11’s AI/ML-driven cloud platform to identify known threats instantly and unknown threats in under 10 seconds on average. Once identified, threats are rerouted or blocked before they reach the customer’s assets, nullifying them before they even produce an alert for the organization’s security team.
In keeping with strict data protection laws in the EU and US, Secure CDN enforces a blacklist of countries where data cannot be transferred. This ensures customers can’t accidentally cause themselves to fall out of compliance by allowing data exfiltration to blacklisted countries.
Link11’s Threat Protection Shield provides 360° security for all online assets. This enables Secure CDN to:
- Instantly block common threats like request smuggling.
- Maintain data sovereignty and compliance.
- Ensure worldwide availability with direct connections to important Internet exchange points.
- Maximize cyber resilience by blocking threats outside the organization’s digital perimeter.
To find out more about Secure CDN, visit our website. | <urn:uuid:66274900-ef67-41c8-ae91-331ecee02dbb> | CC-MAIN-2022-40 | https://www.link11.com/en/blog/cyber-security/what-is-a-secure-cdn-and-how-does-it-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00390.warc.gz | en | 0.923127 | 1,544 | 2.53125 | 3 |
Chapter 3: What Is SPF?
Introduction to SPF
The Sender Policy Framework (SPF) is another widely used method of email authentication to prevent spammers from utilizing a domain for spam emailing. The framework publishes an SPF record to the DNS, i.e., a list of the IP addresses authorized to use your domain name for email. It also points out the unauthorized senders who cannot use your domain name.
SPF DNS Record Syntax Explained
A typical SPF record in the DNS looks like the following:
v=spf1 ip4=192.0.2.0 ip4=192.0.2.1 include:examplesender.email -all
The SPF DNS method employs a list of 8 mechanisms that differentiate authorized email senders from unauthorized ones.
- all: This mechanism is at the end of the SPF record and matches all the senders.
- ip4: This mechanism allows IP addresses of the IPv4 network range of a pre-specified list to send emails using a given domain name.
- ip6: This mechanism is similar to ip4 but works on the IPv6 network range.
- a: When this mechanism is used, the IP address should strictly match the SPF DNS record unless a prefix length is provided. When the prefix length is provided, the system searches all the IP addresses for that prefix length.
- mx: In the case of this mechanism, the entire list of records is tested in the order of specified priority.
- ptr: The hostnames are validated using PTR queries. The invalid hostnames are rejected, while the valid ones are matched.
- exists: This mechanism utilizes an A query based on which the existing IP addresses are validated and approved.
- include: This mechanism searches the domain for a match. If a match is not found, it forwards the list for further processing.
Each of the mechanisms can use any one of the four qualifiers:
- + (Pass)
The Pass qualifiers list the domain-authorized email sender.
- – (Fail)
The Fail qualifier lists the unauthorized senders.
- ~ (SoftFail)
The SoftFail qualifier gives the list of the in-transition unauthorized senders.
- ? (Neutral)
The Neutral qualifier is used to mark the questionable senders.
While the DNS processing is ongoing, a temporary error may be represented by the qualifier’ TempError.’ In contrast, a syntax or evaluation error is notified by ‘PermError.’ In the cases where the domain has not created the record yet, the qualifier ‘None‘ is observed.
What Are SPF Tags?
The eight SPF mechanisms that perform different types of functions as per the SPF DNS record are also known as SPF Tags. Apart from these eight, the tag “v” is utilized to represent the protocol version.
Are There Any Downsides To Using SPF?
Using SPF can sometimes be disadvantageous too. Below are a few drawbacks of using SPF.
- Email Forwarding: When an email sent from an authorized IP address is forwarded, the IP address of the person forwarding the email won’t be recorded.
- End-User Discretion: Attackers might build a domain similar to yours. Since the end-users do not check the Return-Path/mailform domain, they might fall victim to phishing attacks from such fake domains.
- Third-Party Vendors: Domain owners depend on third parties that use their domain names. Therefore, there is a constant need to continuously update the SPF record list, which can be inconvenient.
- Limited DNS Lookup: A single SPF record allows checking only 10 DNS lookups.
Creating An SPF Record
Make sure to follow the below instructions while creating an SPF record.
- Make a record of the list of authorized IP addresses.
- Create SPF records for all your domains, including those that do not send emails. The practice helps you avoid any instances of spoofing in case an attacker tries to use the domains that are not used to send emails.
- Create your SPF record with the help of the 8 SPF mechanisms.
- Publish the SPF record with the help of your DNS server admin.
- Do a test run to ensure that the SPF mechanisms are working accurately.
Adding SPF Records For Your Domain
If you are new to SPF, you can utilize the pre-configured SPF record to use the framework. If you want to add your list of SPF records, you can do so by following the steps given below:
- Log in to your Account Control Center.
- Go to ‘Domains’ and then ‘Manage Your Domain Names.’
- Go to the Domain Name to which you want to add your SPF record.
- Go to ‘Manage Custom DNS Records.’
- Next, you will see the option ‘Add DNS Records.’ Click on it.
- It will take you to the section that will allow you to choose the ‘Type of Record’ you want to add. Click on the ‘TXT’ option and then’ Proceed.’
- You will then reach a page with two text boxes, one for Hostname and another for Text Record.
- In the Hostname section, you can write the name of the sub-domain for which you are creating the record or leave the box empty if you want the record to be created for the entire domain. Write your SPF record in the ‘Text Record’ text box, and click on the ‘Create Record’ option.
Note that the process may differ slightly for various hosting providers.
An SPF record can be highly advantageous as it serves as a tool to prevent email spoofing, spamming, and phishing attacks. It is a standard and widely-used email authentication method. Since the SPF record is a simple TXT record, it is easy to create. However, you have to be thorough about the syntax and the correct use and implications of its mechanisms, qualifiers, etc., to avoid errors and make the record work for you in the best possible way. | <urn:uuid:b9b46784-8326-4334-b7f4-48ee836e036c> | CC-MAIN-2022-40 | https://dmarcreport.com/dmarc-fundamentals/what-is-spf/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00590.warc.gz | en | 0.877969 | 1,308 | 3.46875 | 3 |
The Department of Homeland Security (DHS) announced on September 14, 2020 that forced labor goods from China are now banned. The U.S. Customs and Border Protection (CBP) has issued Withhold Release Orders on products produced in China’s Xinjiang province. In a statement Acting CBP Commissioner Mark A. Morgan stated that the forced labor goods banned sends “a clear message to the international community that we will not tolerate the illicit, inhumane, and exploitative practices of forced labor in U.S. supply chains.”
China’s Treatment of Uighurs in Xinjiang
The Uighurs are a native ethic minority in the Xinjiang province of China. Most are Uighurs are Muslims. The Chinese government has detained between 1 to 3 million Uighurs in “re-education” centers to undergo psychological indoctrination programs. This program is considered the largest internment of an ethnic-religious minority since World War II.
Along with torture, forced sterilization, and sexual abuse, Uighurs are subjected to forced labor to produce a number of products. Many of these products are exported worldwide. The Chinese Communist Party has a used forced labor camps since the days of Mao Zedong in 1949.
Australian Strategic Policy Institute Report
An investigation conducted by the Australian Strategic Policy Institute concluded that local governments and private brokers being “paid a price per head” by the Xinjiang government to organize detainment of Uighurs. While the Chinese government claims detention is used to combat religious extremism, many have been detained for praying or wearing a veil. The report called on international companies to conduct a review of their supply chains to ensure human rights are not being violated.
Xinjiang Forced Labor Goods Banned
The CBP has been ordered to withhold the release of the following goods:
- Products made with labor from Lop County No. 4 Vocational Skills Education and Training Center in Xinjiang.
- Hair products made in the Lop County Hair Product Industrial Park in Xinjiang.
- Apparel produced by Yili Zhuowan Garment Manufacturing Co., Ltd. and Baoding LYSZD Trade and Business Co., Ltd in Xinjiang.
- Cotton produced and processed by Xinjiang Junggar Cotton and Linen Co., Ltd. in Xinjiang.
- Computer parts made by Hefei Bitland Information Technology Co., Ltd. in Anhui, China.
These goods are being banned under Section 307 of the Tariff Act of 1930 (19 U.S.C. 1307). This regulation prohibits the importation of all goods and merchandise mined, produced, or manufactured wholly or in part in any foreign country by forced labor, convict labor, or/and indentured labor under penal sanctions, including forced child labor.
China’s Continuing Trade Issues
China has been under increased scrutiny by the international community. This is resulting in increased trade barriers being imposed by many nations including the United States.
- On August 26, 2020 the BIS placed Chinese companies involved in the building of artificial islands in the South China Seas on the Entity List thereby preventing them from receiving U.S. exports.
- In July of 2020 Hong Kong Special Status has been revoked by the Commerce Department due inhumane crack down of dissidents.
- In May 15, 2020 the U.S. restricted Huawei’s access to U.S. semiconductor design and manufacture capabilities.
CVG Strategy Export Compliance
International trade laws are undergoing constant change. This action concerning labor in the Xinjiang region is but one concern. Remaining compliant to laws regarding import and export of goods requires constant vigilance and training. CVG experts can help you establish and maintain an effective compliance program to avoid fines, penalties, loss of business and even imprisonment. We can also provide the essential training to keep your team up to date. | <urn:uuid:f5a1ac95-6d0b-4760-942b-6bf6a0600290> | CC-MAIN-2022-40 | https://cvgstrategy.com/forced-labor-goods-banned/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00590.warc.gz | en | 0.939624 | 791 | 2.515625 | 3 |
In the 1960s, Woodrow W. Bledsoe created a secret program that manually identified points on a person’s face and compared the distances between these coordinates with other images.
Facial recognition technology has come a long way since then. The field has evolved quickly and software can now automatically process staggering amounts of facial data in real time, dramatically improving the results (and reliability) of matching across a variety of use cases.
Despite all of the advancements we’ve seen, many organizations still rely on the same algorithm used by Bledsoe’s database – known as “k-nearest neighbors” or k-NN. Since each face has multiple coordinates, a comparison of these distances over millions of facial images requires significant data processing. The k-NN algorithm simplifies this process and makes matching these points easier by considerably reducing the data set. But that’s only part of the equation. Facial recognition also involves finding the location of a feature on a face before evaluating it. This requires a different algorithm such as HOG (we’ll get to it later).
The algorithms used for facial recognition today rely heavily on machine learning (ML) models, which require significant training. Unfortunately, the training process can result in biases in these technologies. If the training doesn’t contain a representative sample of the population, ML will fail to correctly identify the missed population.
While this may not be a significant problem when matching faces for social media platforms, it can be far more damaging when the facial recognition software from Amazon, Google, Clearview AI and others is used by government agencies and law enforcement.
Previous studies on this topic found that facial recognition software suffers from racial biases, but overall, the research on bias has been thin. The consequences of such biases can be dire for both people and companies. Further complicating matters is the fact that even small changes to one’s face, hair or makeup can impact a model’s ability to accurately match faces. If not accounted for, this can create distinct challenges when trying to leverage facial recognition technology to identify women, who generally tend to use beauty and self-care products more than men.
Understanding sexism in facial recognition software
Just how bad are gender-based misidentifications? Our team at WatchGuard conducted some additional facial recognition research, looking solely at gender biases to find out. The results were eye-opening. The solutions we evaluated were misidentifying women 18% more often than men.
You can imagine the terrible consequences this type of bias could generate. For example, a smartphone relying on face recognition could block access, a police officer using facial recognition software could mistakenly identify an innocent bystander as a criminal, or a government agency might call in the wrong person for questioning based on a false match. The list goes on. The reality is that the culprit behind these issues is bias within model training that creates biases in the results.
Let’s explore how we uncovered these results.
Our team performed two separate tests – first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack just how their ML modeling and algorithm works due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story, and uses local resources to identify faces provided to it. It comes pretrained to identify the location of a face, and with face location finder HOG, a slower CPU-based algorithm, and CNN, a faster algorithm making use of specialized processors found in a graphics cards.
Both services provide match results with additional information. Besides the match found, a similarity score is given that shows how close a face must match to the known face. If the face on file doesn’t exist, a similarity score set to low may incorrectly match a face. However, a face can have a low similarity score and still match when the image doesn’t show the face clearly.
For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time.
Amazon Rekognition correctly identified all pictures we provided. However, when we looked more closely at the data provided, our team saw a wider distribution of the similarities in female faces than in males. We saw more female faces with higher similarities then men and more female faces with less similarities than men (this actually matches a recent study performed around the same time).
What does this mean? Essentially it means a female face not found in the database is more likely to provide a false match. Also, because of the lower similarity in female faces, our team was confident that we’d see more errors in identifying female faces over male if given enough images with faces.
Amazon Rekognition gave accurate results but lacked in consistency and precision between male and female faces. Male faces on average were 99.06% similar, but female faces on average were 98.43% similar. This might not seem like a big variance, but the gap widened when we looked at the outliers – a standard deviation of 1.64 for males versus 2.83 for females. More female faces fall farther from the average then male faces, meaning female false match is far more likely than the 0.6% difference based on our data.
Dlib didn’t perform as well. On average, Dlib misidentified female faces more than male, leading to an average rate of 5% more misidentified females. When comparing faces using the slower HOG, the differences grew to 18%. Of interest, our team found that on average, female faces have higher similarity scores then male when using Dlib, but like Amazon Rekognition, also have a larger spectrum of similarity scores leading to the low results we found in accuracy.
Tackling facial recognition bias
Unfortunately, facial recognition software providers struggle to be transparent when it comes to the efficacy of their solutions. For example, our team didn’t find any place in Amazon’s documentation in which users could review the processing results before the software made a positive or negative match.
Unfortunately, this assumption of accuracy (and lack of context from providers) will likely lead to more and more instances of unwarranted arrests, like this one. It’s highly unlikely that facial recognition models will reach 100% accuracy anytime soon, but industry participants must focus on improving their effectiveness nonetheless. Knowing that these programs contain biases today, law enforcement and other organizations should use them as one of many tools – not as a definitive resource.
But there is hope. If the industry can honestly acknowledge and address the biases in facial recognition software, we can work together to improve model training and outcomes, which can help reduce misidentifications not only based on gender, but race and other variables, too. | <urn:uuid:23a78644-bef2-4fb5-9d19-610cd78ce645> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2020/08/27/facial-recognition-bias/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00590.warc.gz | en | 0.936951 | 1,422 | 3.421875 | 3 |
The FISA Court and Its Secrets
The United States Foreign Intelligence Surveillance Court, also known as FISC or the FISA Court, was established under the Foreign Intelligence Surveillance Act of 1978. The court was created to oversee requests for warrants to surveil suspected foreign spies or agents of foreign powers located inside the United States. Federal law enforcement and intelligence agencies petition the court for legal access to private data on the subjects of their investigations.
From 1978 to 2009, the court sat on the 6th floor of the Robert F. Kennedy Department of Justice Building. In 2009, it moved to the E. Barrett Prettyman United States Courthouse in Washington, DC. The court and the government are funded by the tax dollars of American citizens, who expect the court and the authorities to protect their civil rights, anonymity, and privacy as residents of the United States.
However, in 2013 it emerged that the court had issued a top-secret order requiring a Verizon subsidiary to provide a daily record of all calls, including ordinary domestic calls made by US residents. All records were then forwarded to the NSA. Knowing that a secretive court crossed these lines to spy on all Americans, what are your feelings on the Patriot Act now? Was it passed to keep us safer, or to document, database, and record Americans while violating their civil rights and private data?
Before anyone can listen in on our phone conversations, even those of suspected criminal actors representing foreign countries, a warrant must be issued. Wiretapping is a federal offense enforced by the FBI and the Attorney General's office. It is a federal crime to wiretap or use a device to record others' communications unless a court grants permission. Each warrant requires an application submitted to an individual judge. In exceptional circumstances, the court allows third parties to participate as amicus curiae, which literally means 'friend of the court.'
The other way for surveillance to begin is under an emergency. If the Attorney General determines that an emergency exists, the AG can authorize emergency electronic surveillance without first going through the court, but must obtain the court's approval within seven days.
It is against the law for a government representative who has been turned down to reapply for the same electronic surveillance warrant with a different judge. There is, however, a right of appeal to the United States Foreign Intelligence Surveillance Court of Review.
Generally speaking, when a request is made to the court, it is granted. It is extremely rare for a FISA warrant to be turned down. From 1979 to 2004, a span of over 25 years, 18,746 applications were submitted to the court, and only four were ever rejected. When those four applications were appealed to the US Foreign Intelligence Surveillance Court of Review, the court partially granted all four. The data shows that roughly 0.02 percent of applications are ever rejected outright.
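As a quick sanity check on these statistics, the rejection rate implied by the figures above (18,746 applications and 4 rejections, both taken from the text) can be computed directly. This is a minimal illustration of the arithmetic, not an official government figure:

```python
# Back-of-the-envelope check on the FISA warrant figures cited above:
# 18,746 applications submitted from 1979 to 2004, only 4 rejected.
applications = 18_746
rejections = 4

rejection_rate = rejections / applications * 100  # as a percentage
approval_rate = 100 - rejection_rate

print(f"Rejection rate: {rejection_rate:.3f}%")  # ~0.021%
print(f"Approval rate:  {approval_rate:.2f}%")   # ~99.98%
```

Small differences from figures quoted elsewhere in the piece (such as the 99.97 percent approval rate) come down to rounding and to which years and application counts are included.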
Secrets and Confidences
Over time, the court has faced accusations of excessive secrecy and confidentiality. By the court's nature, for national security and other significant reasons, its hearings are closed to the public. When the FISA Court convenes, no one outside of the required attendees knows which area of the E. Barrett Prettyman Courthouse it is using. Does this secrecy help or harm our nation? One can argue that secretly collecting data on foreign spies on American soil serves the betterment and safety of our country. But how can that argument hold when the secrecy and the data collection are turned on everyday citizens?
Several things have come to light over the past twenty years. Attorney General John Ashcroft was rebuked in May of 2002, when the court released an opinion finding that the FBI and Justice Department had brought false allegations before it. The court went so far as to say that officials had purposely "supplied erroneous information to the court" in at least 75 applications for surveillance, one of which FBI Director Louis J. Freeh had personally signed. In 2003, the court began to require more stringent modifications and evidence to accompany applications.
The New York Times shocked Americans with its article in December 2005. It reported that the Bush administration had been conducting mass surveillance against American citizens without obtaining specific approval from the FISA court since early 2002. Wiretapping without going through proper channels is a federal crime, even for the highest office in the land. After the information went public, Judge James Robertson resigned in protest four days later.
Judge Robertson went even further in expressing his outrage. In 2013, when Edward Snowden leaked information revealing the extent of the wiretapping, Judge Robertson criticized the court-sanctioned expansion of surveillance of citizens. He also expressed great concern that the court had been allowed to create a secret body of law, and noted that the government had pushed to circumvent the court's rules before 2003, when the court began demanding more modifications to warrant requests.
Is this secret collection of data on American citizens enough to cause you concern yet? There is much more that 'We the People' do not know about what the court is doing with its secretive databases. In 2011, the Obama administration secretly won permission from the court to reverse restrictions on the NSA's use of intercepted phone calls and emails. This stunning move now allows the NSA and other federal agencies to deliberately search American citizens' communications within its massive databases.
It was not just presidents, the highest office of the land, who authorized spying on residents without cause. In 2008, Congress enacted Section 702 of the Foreign Intelligence Surveillance Act, authorizing searches of these databases. The law states that the target must be a foreigner believed to be outside the United States, and the court must approve surveillance procedures one year at a time. This may sound like it is in the best interest of national security, but is it?
Once enacted, Section 702 meant that a warrant for each individual target was no longer required. As a result, communications with Americans can be picked up without the court first deciding whether there is probable cause, regardless of whether the individuals are considered spies, terrorists, or even "foreign powers." To further degrade the trust between Americans and the federal government, the FISA court secretly extended the length of time the NSA can retain intercepted US communications: the automatic deletion date moved from five to six years, with extensions allowed for foreign intelligence or counterintelligence purposes. The court did this with no public knowledge and without authorization from Congress.
Who’s in Charge?
With all the spying going on to record American citizens' lives and conversations, precisely who is to blame? Who is in charge of allowing this turn of intelligence collection against Americans? Abuse of power comes in many forms and from different political parties. As mentioned above, many actions were taken directly by US presidents, both Democratic and Republican, and Attorneys General of both parties have sanctioned such moves against the people. More often than not, however, the court has simply done whatever it wants, disregarding the law or acting without approval from the administration.
How can this happen? This illegal wiretapping happens because the court sits ex parte, meaning it operates in the absence of anyone but the judge and the government party making the request. When the government makes a request, the court grants the surveillance warrant 99.97 percent of the time. Many people, both private citizens and government officials, have accused the court of being a kangaroo court that rubber-stamps anything put in front of it. This particular accusation was made by National Security Agency analyst Russ Tice.
FISA Court presiding judge Reggie B. Walton rejects the idea entirely, as he stated in a letter to Senator Patrick J. Leahy: "The annual statistics provided to Congress by the Attorney General … – frequently cited in press reports as a suggestion that the Court's approval rate of applications is over 99% – reflect only the number of final applications submitted to and acted on by the Court. These statistics do not reflect the fact that many applications are altered prior to final submission or even withheld from final submission entirely, often after an indication that a judge would not approve them. There is a rigorous review process of applications submitted by the executive branch, spearheaded initially by five judicial branch lawyers who are national security experts and then by the judges, to ensure that the court's authorizations comport with what the applicable statutes authorize."
How can this be true if the court has denied only 11 requests in the past 35 years? With a cloak of secrecy and near-universal approval of proposals, how are average Americans, the ones being spied on, to trust that this isn't a farce? Robert S. Litt, General Counsel of the Office of the Director of National Intelligence, also defended the court's record: "When the Government prepares an application for a section 215 order, it first submits to the FISA court what's called a 'read copy,' which the court staff will review and comment on. They will almost invariably come back with questions, concerns, problems that they see. And there is an iterative process back and forth between the Government and the FISA court to take care of those concerns so that at the end of the day, we're confident that we're presenting something that the court will approve. That is hardly a rubber stamp. It's rather severe and extensive judicial oversight of this process."
Most Americans would disagree, feeling that this combination of secrecy and spying creates a terrifying sense of a government set against its people. In 2003, the Senate Judiciary Committee agreed, producing an interim report on FBI oversight in the 107th Congress, "FISA Implementation Failures," which condemned the "unnecessary secrecy" of the court. This criticism of the court and of government overreach was the report's clearest and most important conclusion.
The report stated that the ‘secrecy’ maintained by the court had overstepped its boundaries. “The secrecy of individual FISA cases is certainly necessary, but this secrecy has been extended to the most basic legal and procedural aspects of the FISA, which should not be secret. This unnecessary secrecy contributed to the deficiencies that have hamstrung the implementation of the FISA. Much more information, including all unclassified opinions and operating rules of the FISA Court and Court of Review, should be made public and/or provided to the Congress.”
Protecting Citizens and Their Data
No American wants to feel like there is a target on their back from a government they support with their tax dollars. The ACLU’s deputy legal director, Jameel Jaffer, stated that safeguards for Americans had been lost. “In light of revelations that the government secured telephone records from Verizon and Internet data from some of the largest providers that safeguards that are supposed to be protecting individual privacy are not working.”
The Guardian criticized the lack of legislation and argued that the allowances given to the court did not protect citizens. “The broad scope of the court orders, and the nature of the procedures set out in the documents, appear to clash with assurances from President Obama and senior intelligence officials that the NSA could not access Americans’ call or email information without warrants.”
The NSA fought back, defending their constitutionality and their procedures to minimize data collection from Americans. The court approved the following guidelines for the use and discretion of data for the NSA.
- “keep data that could potentially contain details of US persons for up to five years;
- retain and make use of “inadvertently acquired” domestic communications if they contain usable intelligence, information on criminal activity, the threat of harm to people or property, are encrypted, or are believed to include any information relevant to cybersecurity;
- preserve “foreign intelligence information” contained within attorney-client communications; and
- access the content of communications gathered from “US-based machine[s]” or phone numbers to establish if targets are located in the US, to cease further surveillance.”
Data Retention and Release
Regardless of where we land on the constitutionality of wiretapping Americans, the NSA keeps databases on citizens. Like any other organization holding such databases, what privacy regulations does it follow? How does it protect the data? Who determines the redaction process? Some of the guidelines above seem a bit like fluff compared to the severe nature of wiretapping and the violation of Americans’ trust.
These guidelines are no more stringent than the privacy rules and regulations that many companies face today. There is always a set “deletion date” for data, and encryption is used to protect the integrity of any data kept. We could ask ourselves why the court-approved rules were not more stringent, given the critical implications this data could hold.
When files are released, if they are released at all, they are redacted to the point of being nearly illegible. What methods of redaction and encryption are being used at the NSA? America spent 52.6 billion dollars on cyber-defense in a given year. How much of that taxpayer money was used to protect United States residents against abuse by their own “secret courts?”
Here we are in 2020, and just last week, more information was released about a ruling that occurred in December 2019. The FISA court itself found other agencies in violation of the private data of citizens. The court ruled “that the FBI had committed “widespread violations” of rules intended to protect Americans’ privacy when analysts search through a repository of emails gathered without a warrant, but it nevertheless signed off on another year of the program.” So, Americans’ private data is being collected, could be abused, or could be used in court to imprison them; the court agrees it is wrong, yet it signed off to allow the program to continue for at least another year. Do you feel protected or violated?
Another court has also stepped in to rule that the NSA program that collected bulk logs of domestic phone calls was illegal. However, it stopped short of overturning the convictions, even wrongful ones, of those accused of crimes through the data-collection process. Section 702, which allows for warrantless wiretapping, has yet to be ended by the courts or Congress.
In fact, in 2018, Congress reauthorized its use, adding more stringent rules that require the court to approve how analysts can query the data. Congress did not demand a more rigorous ruling defining who can be wiretapped without a warrant, only how the database can be searched. Collecting the personal information of Americans remains its highest priority.
Animal tagging refers to the practice of putting an identifying marker on an animal, and tracking refers to keeping track of its location and imparting information to a casual viewer (for example, whether it has an owner or has been neutered).
The practice has evolved, however, from branding and ear tipping to attaching high-tech GPS sensors to animals. Both old and new methods aim to track the real-time behavior of the animal. This category refers mainly to the tracking of wild animals for research or conservation efforts.
Animal tagging data comes from humans attaching tags or sensors to animals. When researchers tag an animal, they record its species, location, and condition. Later, researchers record updates through ground-based sensors or through re-capturing the animal for physical observation.
GPS sensors, on the other hand, allow for animals to remain free of human capture past the first installation, which is preferable. These sensors transmit information about an animal to data feeds via radio or satellite.
The volume and variety of data recorded through animal tagging are vast. Researchers start with the species, age, tagging location, individual animal ID, and a contact person. Then, at least for the newer satellite and radio-communication trackers, researchers record real-time movement data.
Additional data may include acceleration speed, altitude, terrestrial magnetism (to record the exact location of the animal in place of GPS), even barometric or hydrometric pressure. In other words, the recording device can be tailored to the type of animal being studied and for the data required.
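As a concrete sketch, a single tag observation can be modeled as a small record whose optional fields vary with the sensor type. The field names and example values below are illustrative assumptions, not any real tracking platform's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class TagRecord:
    """One observation from a tagged animal; field names are illustrative."""
    animal_id: str
    species: str
    timestamp: datetime
    lat: float
    lon: float
    altitude_m: Optional[float] = None      # extras depend on the sensor type
    pressure_hpa: Optional[float] = None

def filter_recent(records: List[TagRecord], since: datetime) -> List[TagRecord]:
    """Keep only fixes recorded at or after `since` (a near-real-time window)."""
    return [r for r in records if r.timestamp >= since]

# Two GPS fixes from one collared elk (made-up values):
fixes = [
    TagRecord("elk-042", "Cervus canadensis",
              datetime(2021, 5, 1, 6, 0, tzinfo=timezone.utc), 44.42, -110.59),
    TagRecord("elk-042", "Cervus canadensis",
              datetime(2021, 5, 1, 12, 0, tzinfo=timezone.utc), 44.45, -110.55),
]
recent = filter_recent(fixes, datetime(2021, 5, 1, 9, 0, tzinfo=timezone.utc))
print(len(recent))  # → 1
```

A real feed would also carry metadata such as the tag hardware ID and the responsible contact person mentioned above.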
The many uses of animal tracking data include animal welfare, species conservation, and animal research purposes. It is also used to support anti-theft, anti-poaching, and anti-smuggling efforts.
To ensure the quality of your animal tagging data, be sure that your sensors or tags are in good condition before use. Also crucial is the update frequency of your data feed: make the data recorded as thorough and as close to real-time as possible.
In this data category, you may also find it very useful to physically go out into the field to capture, re-capture (if applicable), and just physically observe the wildlife.
Orbcomm’s satellite connectivity is critical to enabling AWT to track and monitor animals in their natural habitats. Orbcomm’s IoT devices can withstand complex environments, extreme weather conditions and tough terrain, which is often dusty, muddy and covered with dense forests. In addition, Orbcomm’s devices are highly reliable in the field, which is extremely important given the extensive costs, resources and logistics involved in putting tags and collars on the animals AWT tracks.
Uninterruptible power supply (UPS) systems are generally thought of as insurance policies for companies and institutions with critical power requirements, such as hospitals, laboratories, data centers, manufacturers, and government, academic, research, and transportation facilities, where they provide a reliable power supply.
Using UPS systems as more than emergency backup – and monetizing their use – makes a compelling proposition. Seeing these systems as assets and new revenue generators – with no risk to backup capabilities – introduces a new, strategic way of thinking about UPS capabilities.
At present, the most common method of providing backup power is the use of generators with UPS batteries. This method bridges the gap between the power interruption and the point in time when the generators produce a stable power supply. This is the traditional model that protects those with critical power requirements from grid failures. Typically, it can take between a few seconds to a few minutes for a generator to reach appropriate production levels. If a generator is not in place, a longer battery backup solution will be needed to bridge the time until grid power is resumed.
However, grid failure or grid interruption aren’t the only factors that need to be considered by energy users; there are a wide variety of commercial implications to think about, too. First, there are variations in tariffs throughout the day: power from the grid at peak times is more expensive. Secondly, in some countries, rates charged are based on the maximum consumption in a given period. For example, a manufacturer on a five-day week might have a disproportionate spike in power usage when operating multiple machines or devices at the same time, or when using a high power device infrequently. Power usage will be several times higher than the maximum consumption level for the rest of the week, but that increase in power consumption, even for a short period of time, will determine the tariff rate for the entire period.
There is a way of ameliorating these challenges by using the existing UPS systems that are already installed. New, specialized software allows energy to be stored when charges are less expensive, to be used in place of grid power at times when charges are higher. This can be done automatically as part of normal operations whenever surplus battery capacity is available, while still ensuring that sufficient capacity is preserved for emergency backup if required.
Similarly, it is possible to draw energy stored in UPS batteries during low usage periods to supply extra peak power when needed, thus reducing or eliminating predictable spikes in consumption and reducing the overall tariff.
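The peak-shaving idea above can be sketched as a simple decision loop: in each metering interval, any demand above the tariff-setting threshold is served from the UPS battery, but the battery is never drawn below its emergency reserve. All numbers and the interface are illustrative assumptions, not a vendor's actual control logic:

```python
def peak_shave(demand_kw, grid_limit_kw, battery_kwh, reserve_kwh, dt_h=0.25):
    """For each interval, cap grid draw at grid_limit_kw by discharging the
    UPS battery, without ever dipping below the emergency reserve.
    Returns (grid draw per interval, final battery state of charge)."""
    grid, soc = [], battery_kwh
    for d in demand_kw:
        excess = max(0.0, d - grid_limit_kw)
        available = max(0.0, soc - reserve_kwh) / dt_h   # max discharge power
        shave = min(excess, available)
        soc -= shave * dt_h
        grid.append(d - shave)
        # (recharging during cheap off-peak periods is omitted for brevity)
    return grid, soc

draw, soc = peak_shave([80, 120, 150, 90], grid_limit_kw=100,
                       battery_kwh=40, reserve_kwh=20, dt_h=0.25)
print(draw)  # → [80.0, 100.0, 100.0, 90.0]
```

In this example, the two demand spikes above 100 kW are flattened to exactly 100 kW, while 22.5 kWh — comfortably above the 20 kWh reserve — remains in the battery for backup duty.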
In addition to this, UPS batteries can be used to provide additional power for short periods of time in instances where energy cannot be sourced from the grid. Consider the case of a hospital that needed to install a new scanner. The inrush power requirement of the scanner was in excess of what the grid connection could provide, though its post-start-up operation was within the available capacity. The hospital’s location also made it unfeasible to upgrade the energy supply. This is quite a common problem in cities around the world where infrastructure tends to be stressed.
With the new model of UPS application, the hospital can draw on its UPS power in the scanner’s inrush phase to complement the grid supply until energy demand falls. Use-case scenarios such as these extend the limits of grid connection and enable the user to have access to more power than the grid can supply, while not taking away from the UPS system’s emergency functionality.
Adding solar to the mix
The next step in this evolution is to combine the increased capabilities of UPS systems with a renewable energy source.
DC-optimized string inverter solutions can use Power Optimizers placed directly onto solar modules, turning them into smart modules in order to maximize solar power generation and to monitor system performance at a module level. Unlike traditional string inverter systems, when the performance of some modules becomes impaired (due to common issues such as shading, module mismatch or soiling), the rest of the system will continue to produce the maximum amount of energy. Further, modules can be placed at any tilt or orientation and in uneven string lengths. This allows more modules to be placed on the roof for more power.
Many companies and institutions with critical power requirements have already installed some level of solar energy generation as part of their wider carbon reduction goals in order to reduce energy costs. When the grid is on, solar power is used to supplement grid energy for operations and to charge UPS batteries. But what happens when the grid is down?
Companies may not realize that when this occurs, solar inverters need to be isolated from the grid, which can result in lost energy production. However, there are solutions that manage to overcome this issue. For example, SolarEdge’s UPS backup solution includes hardware that isolates the inverters from the grid to maintain solar energy production while the grid is down, effectively creating a micro-grid.
UPS systems can also be utilized to help organizations improve their self-consumption of solar power. Energy usage does not always align with the energy generation of a PV system. As such, in order to overcome this inconsistency, energy can be stored in a battery for consumption at a later time instead of either limiting energy production or feeding it into the grid. Depending on which state you live in, feeding excess energy back to the grid could also add to your monetary gains from UPS and PV systems, further shortening the payback period.
One way to achieve this is with a stand-alone storage system. However, it might be more cost-effective to add extra batteries to the existing UPS system and store the energy there instead. By adding batteries to the UPS system, this otherwise wasted energy can be utilized at a lower cost than adding a separate storage system. In this way the UPS system acts as a hybrid system manager.
Crucially, this use of solar energy and batteries does not add risk to an organization’s UPS provision. This is because the energy levels reserved for critical power are automatically monitored, regulated, and preserved. Beyond these requirements, using surplus solar energy can cut costs without adding risk: it maximizes self-consumption when the grid is on and provides backup power capabilities when the grid is down.
The integration of flexible PV and UPS solutions changes the whole dynamic of working with energy suppliers and using the grid.
An integrated PV and UPS system can add value and reduce costs, on top of providing users with energy protection. Longer backup times can be achieved, and the flexibility of allocating batteries to the solar and/or UPS sides of the system can deliver further efficiencies and savings, transforming a backup solution from a necessity to an asset.
The impact on critical power
By joining UPS and PV solutions together, data center operators can improve the use of existing UPS resources, allowing users to reduce energy costs while also benefiting from uninterrupted power supply and battery backup. Full-integration of the solar PV system with existing UPS provision provides higher efficiency and further reduced costs.
Those planning to install or renew a UPS system will always inquire about cost, and adapting to this new ‘integrated’ vision requires a new perspective. However, with a fully-integrated solar+UPS solution, ROI actually enters the conversation, which is typically not the case with traditional UPS systems.
Critical power is, and will always be, essential for certain organizations and institutions. As renewable energies, particularly solar energy, become a larger part of the wider energy mix, the vast potential it brings when combined with critical power applications, in terms of financial investment, uninterrupted operations, and of course sustainability objectives, can no longer be ignored.
OM3 and OM4 are two common types of multimode fiber used in local area networks, typically in backbone cabling between telecommunications rooms and, in the data center, between main networking and storage area network (SAN) switches. Both fiber types are considered laser-optimized 50/125 multimode fiber, meaning they have a 50μm diameter core and a 125μm diameter cladding, a special coating that prevents light from escaping the core. Both fiber types use the same connectors, the same termination, and the same transceivers: vertical-cavity surface-emitting lasers (VCSELs) that emit infrared light at 850 nanometers (nm). OM3 is fully compatible with OM4. With so many similarities, and often manufactured with the same aqua cable jacket and connectors, it can be difficult to tell these two fiber types apart. So, what’s the difference between OM3 and OM4? Do these two fiber types refer to the same thing?
In fact, the difference between OM3 and OM4 lies in the construction of the fiber optic cable. That difference in construction means that OM4 cable has lower attenuation and can operate at higher bandwidth than OM3. Why is this? For a fiber link to work, the light from the VCSEL transceiver must have enough power to reach the receiver at the other end. Two performance values can prevent this: optical attenuation and modal dispersion.
Attenuation is the reduction in power of the light signal as it is transmitted, measured in decibels (dB). Attenuation is caused by losses of light through the passive components, such as cables, cable splices, and connectors. As mentioned above, the connectors are the same, so the performance difference between OM3 and OM4 comes down to the loss (dB) in the cable itself. OM4 fiber causes lower losses due to its construction. The maximum attenuation allowed by the standards is shown below. Using OM4 gives you lower losses per meter of cable, which means you can have longer links or more mated connectors in the link.
Maximum attenuation allowed at 850 nm: OM3 <3.5 dB/km; OM4 <3.0 dB/km
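Those per-kilometer figures feed directly into a worst-case link-loss budget. A minimal sketch, assuming the typical TIA allowances of 0.75 dB per mated connector pair and 0.3 dB per splice (check your own standard's values before relying on these):

```python
def link_loss_db(length_m, fiber_db_per_km, n_connectors, n_splices=0,
                 connector_db=0.75, splice_db=0.3):
    """Worst-case channel insertion loss: cable attenuation plus fixed
    per-connector and per-splice allowances (typical budget values)."""
    return (length_m / 1000.0) * fiber_db_per_km \
        + n_connectors * connector_db + n_splices * splice_db

# A 300 m channel with two mated connector pairs:
om3 = link_loss_db(300, 3.5, n_connectors=2)   # 1.05 + 1.50 dB
om4 = link_loss_db(300, 3.0, n_connectors=2)   # 0.90 + 1.50 dB
print(round(om3, 2), round(om4, 2))  # → 2.55 2.4
```

The 0.15 dB saved by OM4 over this run is margin that can instead be spent on extra length or another mated connector pair.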
Light is transmitted in different modes along the fiber. Due to imperfections in the fiber, these modes arrive at slightly different times. As this difference increases, you eventually reach a point where the transmitted information can no longer be decoded. The difference between the highest and lowest modes is known as the modal dispersion. The modal dispersion determines the modal bandwidth that the fiber can operate at, and this is where OM3 and OM4 differ. The lower the modal dispersion, the higher the modal bandwidth and the greater the amount of information that can be transmitted. The modal bandwidths of OM3 and OM4 are shown below. The higher bandwidth available in OM4 means smaller modal dispersion, and thus allows cable links to be longer or allows for higher losses through more mated connectors. This gives more options when designing a network.
Minimum Fiber Cable Bandwidth at 850nm: OM3 2000 MHz·km; OM4 4700 MHz·km
Since the attenuation of OM4 is lower than that of OM3 and the modal bandwidth of OM4 is higher, the transmission distance of OM4 is longer than OM3. Typical maximum distances at common Ethernet rates are shown in the table below. Consider your network scale when choosing the more suitable cable type.
|Fiber Type|Fast Ethernet|1GbE|10GbE|40GbE|100GbE|
|OM3|2000 Meters|550 Meters|300 Meters|100 Meters|100 Meters|
|OM4|2000 Meters|550 Meters|400 Meters|150 Meters|150 Meters|
Since OM4 performs better than OM3, OM4 cable is usually about twice as expensive as OM3 cable. This can be a big limiting factor in OM4’s adoption. However, if you shop at Fiberstore, you may get OM4 fiber priced nearly the same as OM3 fiber. Prices of different types of OM3 and OM4 cables at Fiberstore are listed in the table below:
|Fiber Type|3m Standard LC duplex|3m Armored LC duplex|3m HD LC duplex|3m Standard MTP|
|OM3|US$ 3.30|US$ 7.20|US$ 22.00|US$ 49.00|
|OM4|US$ 4.00|US$ 8.00|US$ 24.00|US$ 54.00|
Either OM3 or OM4 cable can satisfy your unique cabling needs. Just choose the most suitable one for your network to cost less and achieve more.
Related Article: OM3 OR OM4 Cable Which One Do You Need?
Related Article: Multimode Fiber Types: OM1 vs OM2 vs OM3 vs OM4 vs OM5
If you want to know about the most prevalent types of cybersecurity attack, one of the first names that comes to mind is the man-in-the-middle (MITM) attack. If you are reading this article, you have probably heard the name, but you may not have a complete picture of what it involves.
MITM is a type of attack in which an attacker inserts themselves between two legitimately communicating hosts, allowing them to “listen to” a conversation they should not be able to access; that is why it is called a “man-in-the-middle” attack.
- It is a form of session hijacking.
- It involves the attacker inserting themselves as a proxy into an ongoing legitimate data transfer or conversation.
- It exploits the real-time nature of communication so that the interception goes unnoticed and the transferred data is not lost.
- It allows the attacker to intercept confidential data.
- It lets the attacker insert malicious data and links in a way that is indistinguishable from legitimate data.
Man-in-the-middle Attack Types:
Five common types of MITM attack are discussed below:
- Email Hijacking: The attacker gains access to a user’s email account and watches the transactions from that account. For example, when the user exchanges funds with another party, the attacker can take advantage of the situation by spoofing one side of the exchange and intercepting the funds.
- Wi-Fi Eavesdropping: This is a very passive way to deploy a MITM attack. Hackers set up a public Wi-Fi connection with an unsuspicious name and gain access to victims’ devices as soon as they connect to the malicious hotspot.
- Session Hijacking: The attacker gains access to an online session through a stolen session key or browser cookies.
- DNS Spoofing: The attacker engages in DNS spoofing by altering a website’s address record. The victim unknowingly visits the fake site, where the attacker tries to steal their information.
- IP Spoofing: Similar to DNS spoofing, the attacker attempts to divert traffic to a fraudulent website with malicious intent. Instead of spoofing the website’s address record, the attacker disguises an internet protocol (IP) address.
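To make the DNS spoofing idea concrete, here is a deliberately simplified, offline simulation of cache poisoning. The hostnames and addresses are reserved documentation values, and a plain dictionary stands in for a real resolver cache:

```python
# A resolver trusts whatever record it holds, so one poisoned cache entry
# silently redirects every subsequent lookup.
legit_records = {"bank.example.com": "203.0.113.10"}

def resolve(cache, hostname):
    """Return the cached address, exactly as the victim's resolver would --
    with no way to tell a forged record from a real one."""
    return cache.get(hostname)

cache = dict(legit_records)
print(resolve(cache, "bank.example.com"))    # → 203.0.113.10

# The attacker overwrites the record with their own server's address:
cache["bank.example.com"] = "198.51.100.66"
print(resolve(cache, "bank.example.com"))    # → 198.51.100.66

def looks_poisoned(cache, pinned):
    """Compare the live cache against pinned known-good records -- a crude
    detection measure analogous to record or certificate pinning."""
    return {h for h, ip in pinned.items() if cache.get(h) != ip}

print(looks_poisoned(cache, legit_records))  # → {'bank.example.com'}
```

Real resolvers have many more protections (randomized ports and query IDs, DNSSEC), but the core weakness illustrated here — blind trust in the cached answer — is what the attacker exploits.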
How to Prevent Man-in-the-Middle Attacks?
Below are a few ways to prevent this type of attack:
- Implement a comprehensive Email Security Solution: An email security solution is vital to any organisation’s security and helps minimize the risks associated with MITM attacks. It actively secures all email while staff focus their efforts elsewhere.
- Web Security Solution: A MITM attack becomes much more challenging for an attacker if you deploy a strong web security tool that provides visibility into traffic, for both end users and systems, across protocol and port layers. Like an email security tool, it protects your organisation’s web traffic and lightens the load on the security team.
- Educate Employees: Provide relevant training so that all employees can recognise an attack before it happens, learning its dynamics, patterns, samples, and the frequency of each attack method. Case studies can round out the education material for an awareness program.
- Keep credentials secure: As a business owner, you need to review user credentials often. Make sure all passwords are strong, unique, and updated everywhere, at a minimum every three months. This keeps your company protected, keeps credentials fresh, and makes them far more difficult to crack.
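On the technical side, one of the most effective everyday defences is refusing unverified TLS connections. Python's standard library illustrates the point: a default client context already enforces certificate and hostname verification, and weakening it is precisely what hands an attacker a MITM opportunity:

```python
import ssl

# A default client-side TLS context verifies the server's certificate chain
# and hostname -- the core defence against an interposed attacker presenting
# a forged certificate for the site you think you are talking to.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # → True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # → True

# Disabling verification (sadly common in quick scripts) is exactly what
# makes a MITM attack trivial -- never ship code like this:
insecure = ssl.create_default_context()
insecure.check_hostname = False               # must be disabled first
insecure.verify_mode = ssl.CERT_NONE
print(insecure.verify_mode == ssl.CERT_NONE)  # → True
```

The same principle applies in any language: verification on by default, and any code that switches it off should be treated as a security bug.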
Future of MITM Attacks:
MITM remains a very useful tool for attackers, who can use it to intercept important data such as passwords and credit card numbers. This makes it a continuing race for network providers and software developers to close the vulnerabilities that attackers exploit to execute MITM attacks.
The massive proliferation of IoT devices is also a factor: many IoT devices do not maintain the same security standards or have the same capabilities as other devices, which makes them more vulnerable to MITM attacks. Attackers can use them as a foothold to move through an organisation’s network; a fancy new internet-capable thermostat can work as a security hole.
The adoption of new wireless networks, including 5G, will create further opportunities: attackers will try to use MITM techniques to infiltrate organisations and steal data, while incumbent wireless companies work to fix vulnerabilities and provide a secure backbone for devices and users.
In this technology era, with so many devices connected across different types of networks, attackers have more opportunities than ever to use MITM techniques.
This completes our brief overview of MITM attacks; we hope it helps you make the right security decisions in the future.
On any Software Engineering team, a pipeline is a set of automated processes that allow developers and DevOps professionals to reliably and efficiently compile, build, and deploy their code to their production compute platforms.
There is no hard and fast rule stating how a pipeline should look and the tools it must utilise. However, the most common components of a pipeline are:
- Build automation/continuous integration
- Test automation
- Deploy automation
A pipeline is generally built from a set of tools supporting each of these stages.
The key objective of a Software Delivery Pipeline is automation, with no manual steps or changes required in or between any steps of the pipeline. Human error can and does occur when these boring and repetitive tasks are carried out manually, and it ultimately affects the ability to meet deliverables, and potentially SLAs, due to botched deployments.
A Deployment pipeline is the process of taking code from version control and making it readily available to users of your application in an automated fashion. When a team of developers are working on projects or features they need a reliable and efficient way to build, test and deploy their work. Historically, this would have been a manual process involving lots of communication and a lot of human error.
The stages of a typical deployment pipeline are as follows.
Software developers working on their code generally commit their changes into source control (e.g. GitHub). When a commit to source control is made, the first stage of the deployment pipeline starts, which triggers:
- Code compilation
- Unit tests
- Code analysis
- Installer creation
If all of these steps complete successfully, the executables are assembled into binaries and stored in an artefact repository for later use.
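Conceptually, this commit-triggered stage sequence is just an ordered runner that halts at the first failure, which is how a CI server localises an error to a single stage. A toy sketch — not any specific CI tool's API — might look like this:

```python
def run_pipeline(stages):
    """Run named stage callables in order; stop at the first failure, as a CI
    server would, and report which stages actually executed."""
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            break          # fail fast: later stages never run
    return results

# Stage bodies stubbed out with lambdas for illustration:
stages = [
    ("compile",       lambda: True),
    ("unit-tests",    lambda: True),
    ("code-analysis", lambda: False),  # e.g. a lint failure halts the run
    ("package",       lambda: True),   # never reached
]
print(run_pipeline(stages))
# → [('compile', True), ('unit-tests', True), ('code-analysis', False)]
```

Only when every stage reports success would the final artefact be published to the repository.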
Acceptance testing is the process of running a series of tests over compiled/built code to verify that it meets the predefined acceptance criteria set by the business.
An independent deployment is the process of deploying the compiled and tested artefacts onto development environments. Development environments should ideally be a carbon copy of your production environments, or at worst very similar. This allows the software to be functionally tested on production-like infrastructure, ready for any further automated or manual testing.
This process is normally handled by the Operations or DevOps team. It should be very similar to an independent deployment and should deliver the code to live production servers. Typically this process involves either Blue/Green deployments or canary releases to allow for zero-downtime deployments and easy version rollbacks in the event of unpredicted issues. In situations where zero-downtime deployment is not possible, release windows are normally negotiated with the business.
Continuous Integration & Continuous Delivery Pipelines
Continuous Integration (CI) is a practice in which developers check their code into a version controlled repository several times per day. Automated build pipelines are triggered by these check ins which allow for fast and easy to locate error detection.
The key benefits of CI are:
- Smaller changes are easier to integrate into larger code bases.
- Easier for other team members to see what you have been working on
- Bugs in larger pieces of work are identified early making them easier to fix resulting in less debugging work
- Consistent code compile/build testing
- Fewer integration issues allowing rapid code delivery
Continuous Delivery (CD) is the process which allows developers and operations engineers to deliver bug fixes, features and configuration changes into production reliably, quickly and sustainably. Continuous delivery offers the benefit of code delivery pipelines that are routinely carried out that can be performed on demand with confidence.
The benefits of CD are:
- Lower-risk releases. Blue/Green deployments and canary releases allow for zero downtime deployments which are not detectable by users and make rolling back to a previous release relatively pain free.
- Faster bug fixes & feature delivery. With CI & CD when features or bug fixes are finished, and have passed the acceptance and integration tests, a CD pipeline allows these to be quickly delivered into production.
- Cost savings. Continuous Delivery allows teams to work on features and bug fixes in small batches which means user feedback is received much quicker. This allows for changes to be made along the way thus reducing the overall time and cost of a project.
Utilisation of a Blue/Green deployment process reduces risk and downtime by creating a mirror copy of your production environment, naming one Blue and one Green. Only one of the environments is live at any given time, serving live production traffic.
During a deployment, software is deployed to the non-live environment – meaning live production traffic is unaffected during the process. Tests are run against this currently non-live environment and once all tests have satisfied the predefined criteria traffic routing is switched to the non-live environment making it live.
The process is repeated in the next deployment with the original live environment now becoming non-live.
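A minimal sketch of that flip logic, with the environments reduced to colour labels and the verification step reduced to a smoke-test callback (both illustrative assumptions, not a real traffic router):

```python
class BlueGreen:
    """Minimal blue/green router: deploy to the idle colour, verify, then
    flip which environment receives live traffic."""
    def __init__(self):
        self.live, self.idle = "blue", "green"

    def deploy(self, version, smoke_test):
        candidate = self.idle                 # live traffic is untouched
        if not smoke_test(candidate, version):
            return self.live                  # checks failed: no cut-over
        self.live, self.idle = candidate, self.live   # single-step switch
        return self.live

router = BlueGreen()
print(router.deploy("v2", lambda env, ver: True))   # → green
print(router.deploy("v3", lambda env, ver: False))  # → green (rollout aborted)
```

Because the previous release stays warm in the now-idle environment, rolling back is just flipping the router again.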
Different from Blue/Green deployments, Canary Deployments do not rely on duplicate environments to be running in parallel. Canary Deployments roll out a release to a specific number or percentage of users/servers to allow for live production testing before continuing to roll out the release across all users/servers.
The prime benefit of canary releases is the ability to detect failures early and roll back changes limiting the number of affected users/services in the event of exceptions and failures.
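Canary routing is often implemented by deterministically bucketing users so that a fixed percentage lands on the new release. The Python sketch below shows one common approach (an assumption, not the only way to do it): hashing the user ID keeps each user's assignment stable across requests.

```python
import hashlib

def serves_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministically bucket a user into the canary cohort.
    Hashing keeps each user's assignment stable across requests."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_percent

# Roughly canary_percent of users land on the new release.
cohort = sum(serves_canary(f"user-{i}", 10) for i in range(10_000))
print(cohort)  # close to 1,000 of the 10,000 users
```

If failures show up in the canary cohort, lowering `canary_percent` to zero rolls everyone back to the stable release, which is exactly the early-failure containment benefit described above.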
Tools for automating software quality
There are many different tools you can use to build reliable and robust CI/CD pipelines, with the added bonus of being able to get started for free!
In summary, CI is the automated process that enables software development teams to check in code and verify its quality and ability to compile. CD allows development and operations teams to reliably and efficiently deliver new features and bug fixes to their end users in an automated fashion.
- BMC DevOps Blog
- DevOps Guide, a series of 30+ articles on DevOps
- Continuous Delivery vs Deployment vs Integration: What’s the Difference?
- Deploying vs Releasing Software: What’s The Difference?
- Testing Automation Explained: Why & How To Automate Testing
- Shift Left Testing: What, Why & How To Shift Left | <urn:uuid:100cbc2e-5f57-4d3f-bbd4-841da7b9f757> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/deployment-pipeline/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00590.warc.gz | en | 0.93508 | 1,297 | 2.765625 | 3 |
It has been revealed that Snapchat has suffered a cyber attack resulting in over 55,000 users being exposed. The phishing attack tricked users into handing over their credentials including passwords, which eventually made their way onto a public website. IT security experts commented below.
Javvad Malik, Security Advocate at AlienVault:
“Attacking the human through phishing or other techniques has remained a constant attack vector over the years. While Snapchat is predominantly a consumer tool, if users have reused credentials, it is possible they can be used to attack corporate accounts. Therefore, user awareness is essential to protect against attacks on both business and personal apps. Furthermore, businesses should invest in controls so they are better placed to detect and respond to such attacks when they occur.”
Mark James, Security Specialist at ESET:
“It’s bad enough if you get hacked and someone steals your logon credentials to use elsewhere, but more and more of these “hacks” are no more than users being tricked into logging into a website with their actual username and password. This, of course, is the same as literally handing over your logon credentials to a stranger, who then uses your details for their own nefarious purposes. Often these links or websites look very lifelike, and in some instances you could be forgiven for being tricked, but there is an easy way to stop this: by using two-factor or multi-factor authentication, you can limit any damage caused by being tricked. Yes, of course they have your login and password, but provided you understand the importance of not reusing any password on other sites, they can do nothing with it because they do not have your authenticator!
Whenever someone tries to log in from an unknown device, it asks for a code to validate the user. You generate the code using an app and add it after your username and password, thus proving you are the owner. It’s simple, quick, and WILL protect your details from thieves or scammers, and it’s free.”
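The authenticator codes described above are typically produced by the time-based one-time password (TOTP) algorithm. The Python sketch below is a minimal illustration of generic RFC 4226/6238 logic, not Snapchat's actual implementation; the sample key is the standard RFC test key, included only so the output is verifiable.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """One HOTP step (RFC 4226): HMAC-SHA1 the counter, then
    dynamically truncate the digest down to a short numeric code."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current
    30-second window, so codes expire quickly even if intercepted."""
    key = base64.b32decode(secret_b32)
    return hotp(key, int(time.time()) // period)

# RFC 4226 test vector: the ASCII key "12345678901234567890" at counter 0.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because a stolen code is only valid for one short window, phished credentials alone are not enough to take over an account protected this way.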
Lee Munson, Security Researcher at Comparitech.com:
“The fact that tens of thousands of Snapchat users have had their credentials swiped is hardly surprising as phishing emails catch out millions of people every year. It’s also unsurprising that someone decided to publish those credentials on the web either – criminals are always looking to make money or cause mischief, one way or another.
The worrying thing about this news, however, is the fact that many of the published email addresses and passwords will have been used for many different accounts, meaning the victims will be at risk of multiple account hijacks, potentially leaving themselves open to identity theft and other types of fraud.
The obvious solution for victims of this phishing campaign is to change their passwords on all accounts where the same credentials have been used, making sure that they then use a different password for every account this time around.
If that sounds like an extremely tricky proposition, especially considering how many accounts everyone has these days, the simple answer is to use a password manager that can both generate and store as many passwords as required.” | <urn:uuid:632d87d0-6fa2-4792-88cb-e55d348815be> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/expert-comments/thousands-snapchat-users-exposed-cyberattack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00590.warc.gz | en | 0.963437 | 641 | 2.53125 | 3 |
Researchers from the University of California-Santa Barbara have created a free spam-filtering service for Twitter that detects malicious accounts and forwards the information to the microblogging service for enforcement.
The service, dubbed Spamdetector, is the brainchild of Ph.D. student Gianluca Stringhini.
“We started studying social network spam one year ago,” Stringhini said. “With the rise of social network sites, this problem became really important. During our studies, we found out that spam bots (which are likely infected computers) have some peculiar behaviors that [differ] from the [ones] legitimate users have.”
For example, spammers are usually very aggressive when it comes to following users in the hopes they can build up a large pool of followers of their own. This point was underscored by a recent analysis by Barracuda Networks, which found 26 percent of users have at least 10 followers and 40 percent are following at least 10 people.
Spammers also send similar kinds of tweets, Stringhini said. But the service is also helped by users submitting information about suspicious accounts.
“The usefulness of users flagging spammers is that leveraging this information we can ‘target’ the crawling to those profiles that send tweets similar to the ones that have already been detected as spam,” he said. “In this way, we are able to detect more spammers in a shorter period of time.”
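Detecting "tweets similar to the ones that have already been detected as spam" can be as simple as measuring word overlap between a new tweet and a known-spam corpus. The Python sketch below illustrates that idea with Jaccard similarity; Spamdetector's real classifier is not public, and the function names, threshold, and sample tweets here are invented for illustration.

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two tweets: 1.0 means identical word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def looks_like_spam(tweet: str, known_spam: list[str], threshold: float = 0.7) -> bool:
    # Flag the tweet if it closely matches anything already marked as spam.
    return any(jaccard(tweet, s) >= threshold for s in known_spam)

known_spam = [
    "Win a FREE iPhone now click here",
    "Win a FREE iPad now click here",
]
print(looks_like_spam("win a free iphone NOW click here", known_spam))  # → True
print(looks_like_spam("lovely weather in Santa Barbara today", known_spam))  # → False
```

User reports seed the `known_spam` list, which is why flagged accounts let a crawler find more spammers faster, as the researcher describes.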
Twitter security has taken more than a few hits in the past two years as the number of users has soared. In response to the growing abuse of URL shortening services to send malicious links, Twitter recently launched a URL scanning service.
“It is not easy to reliably detect malicious users, because there’s always the risk [of suspending] legitimate accounts by mistake,” the researcher said. “However, the Twitter spam detection improved a lot during the last months. Many spam profiles get deleted even before we flag them to Twitter.”
To sign up for the service, follow the user @spamdetector on Twitter.
Specific requirements are determined by the tolerances specified in the test methods being performed and/or by regulatory requirements, depending on the test or calibration sector. The two primary factors in creating an ideal laboratory condition are temperature and humidity. Note that air conditioners often have a drying effect that reduces relative humidity. Laboratories should be equipped with suitable climate and ventilation control. The laboratory must look at two risks: personnel comfort and risk to the validity of results. The starting point is that temperature and humidity must be kept within limits for the proper performance of each test performed and according to the manufacturer's specifications for the proper operation of equipment.
Certain organisations, such as the FDA in the USA, have guidelines for general conditions. For example, a comfortable working environment is considered 20 to 25 Degrees Celsius with a relative humidity, depending on geographical area, of 35 to 50%. | <urn:uuid:4d0cbbfd-4e40-4016-8f56-376511795be2> | CC-MAIN-2022-40 | https://community.advisera.com/topic/what-is-the-acceptable-range-of-rh-for-a-laboratory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00790.warc.gz | en | 0.933834 | 175 | 3.015625 | 3 |
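The guideline ranges above can be turned into a simple monitoring check. The limits in this Python sketch (20-25 °C, 35-50% RH) come straight from the FDA-style guidance quoted above and are illustrative only; a real laboratory would substitute the tolerances from its own test methods and equipment specifications.

```python
def lab_conditions_ok(temp_c: float, rh_percent: float) -> list[str]:
    """Flag readings outside the illustrative guideline ranges
    (20-25 degrees Celsius, 35-50% relative humidity)."""
    issues = []
    if not 20.0 <= temp_c <= 25.0:
        issues.append(f"temperature {temp_c} C outside 20-25 C")
    if not 35.0 <= rh_percent <= 50.0:
        issues.append(f"relative humidity {rh_percent}% outside 35-50%")
    return issues

print(lab_conditions_ok(22.5, 42.0))  # → [] (within limits)
print(lab_conditions_ok(27.0, 30.0))  # both readings flagged
```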
The U.S. Department of Energy’s (DOE) Office of Science announced that 45 projects were awarded a total of 95 million hours of computing time on some of the world’s most powerful supercomputers. According to a statement from the DOE, this is part of the 2007 Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. DOE’s Under Secretary for Science, Dr. Raymond Orbach, presented the awards at the Council on Competitiveness in Washington, D.C.
The supercomputers will allow cutting-edge research and design of virtual prototypes to be carried out in weeks or months, rather than the years or decades that would be needed using conventional computing systems, say DOE representatives. Of the programs selected, nine are from industry and include five new proposals and four continuations from last year.
Launched in 2003, the INCITE mission is to advance American science and industrial competitiveness. These awards will assist in that mission by supporting computationally intensive, large-scale research projects and awarding them large amounts of dedicated time on DOE supercomputers. The projects, with applications from aeronautics to astrophysics, consumer products to combustion research, were competitively chosen based on the potential impact of the science and engineering research and the suitability of the project for use of supercomputers.
Processor-hours refer to how time is allocated on a supercomputer. A project receiving 1 million hours could run on 2,000 processors for 500 hours, or about 21 days. Running a 1-million-hour project on a single-processor desktop computer would take more than 114 years.
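The processor-hour arithmetic above is easy to verify. The short Python sketch below reproduces both figures quoted in the article, assuming the job runs on all processors simultaneously.

```python
def wall_clock_days(processor_hours: float, processors: int) -> float:
    """Convert an allocation of processor-hours into wall-clock days,
    assuming the job uses all processors simultaneously."""
    return processor_hours / processors / 24

print(round(wall_clock_days(1_000_000, 2_000), 1))    # → 20.8 days (about 21)
print(round(wall_clock_days(1_000_000, 1) / 365.25))  # → 114 years on one processor
```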
Imagine a server with 110 Terabytes of hard disk. That’s roughly what the National Center for Supercomputing Applications, better known as NCSA, has developed.
NCSA, a government-funded research center at the University of Illinois at Urbana-Champaign, has been on the frontiers of computing since it was founded in 1986. NCSA Mosaic, the first graphical Web browser, for example, emerged from its doors in 1992.
The center’s latest creation is a massive storage area network (SAN), which is hooked directly to 256 separate computers. The entire 70 terabytes of disk — slated to grow to 110 terabytes this summer — is made up of one single file system, so each one of the computers can address the entire amount of storage.
That is pushing the envelope of storage technology, says Michelle Butler, NCSA’s technical program manager responsible for storage. “We are on the bleeding edge,” she says. “Nowhere else in the world today can you hook up 256 Linux machines [to this much storage], and have each of them see the exact same data as the machine right next to it.”
Among other things, this provides an enormous performance boost, says Butler. The 256 computers — all Intel-based Linux servers from IBM — are clustered together. Because the storage for the cluster is one single file system, all 256 machines can write to the same file in parallel. “Data gets written to disk 256 times faster,” says Butler, “because they’re all doing it at once.”
The system also gets a speed boost because the disks share data directly over their Fibre Channel connections, rather than going out over a network. NCSA is using Brocade 12000 network switches and storage technology made by LSI, and purchased from IBM.
As if this wasn’t enough storage, later this summer NCSA will be installing another 170 terabytes of storage, which will serve a second cluster of 768 Linux machines.
This huge SAN is part of an even larger experiment in large-scale computing in which NCSA and five other research facilities around the country are creating a massive storage- and compute grid which will have more than 1 petabyte (1 quadrillion bytes) of storage capacity. Called TeraGrid, the grid is being built by NCSA along with supercomputing centers at Carnegie Mellon University, the University of Pittsburgh, the University of California at San Diego, Argonne National Laboratory, and the California Institute of Technology.
A New Type of Science
The high-performance 110 terabyte storage system NCSA is creating is doing more than just pushing the envelope in computer science, says Butler. It’s also enabling a whole new class of research projects by the scientists who use NCSA’s computing facilities.
Supercomputing in the past has focused on CPU-intensive computing, according to Butler. “The supercomputing world has been taught over the years not to do data-intensive tasks, because I/O is slow, so your CPU cycles are sitting there spinning while I/O is going on,” she says.
Now, however, NCSA is starting to see a new class of more data-intensive applications. Researchers trying to predict the weather, understand how black holes work, or do genetic sequencing are starting to use NCSA’s computing power to crunch large quantities of data, says Butler.
So are some of NCSA’s industrial partners, which are commercial companies the research center works with to ensure that its research is meeting the needs of the private sector as well as pure researchers.
One heavy equipment manufacturer, for example, was able to use NCSA’s visualization resources to model the cab of one of its new machines. That led to the realization that the design prevented someone sitting in the driver’s seat from reaching all the levers needed to operate the equipment.
“Having all this storage available is enabling us to do this kind of science, which is not something we could do in the past,” Butler says.
Story courtesy of CIO Update. | <urn:uuid:5d8e9761-73f3-47cf-be0f-1b90b73f2c73> | CC-MAIN-2022-40 | https://www.datamation.com/storage/supercomputer-center-pushes-the-storage-envelope/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00790.warc.gz | en | 0.94478 | 852 | 3.359375 | 3 |
Design and manufacturing expectations keep escalating. There's a heightened consumer desire for variety, customization, automation and technological innovation. Manufacturers across all industries want parts that are more durable, lighter weight, and cost-efficient. Deadlines are being compressed from months to weeks or even days.
Fortunately, 3D printing is addressing these challenges by shortening the prototyping process, stimulating new design innovation, and giving designers and manufacturers the ability to experiment with new product development processes before committing the big production bucks. And, all of it can be done via outsourcing.
3D printing outsourcing takes on many forms and is referred to by different names. It can encompass everything from rapid and functional prototyping to appearance models to low-volume production using both additive manufacturing and traditional subtractive processes.
Broad Range of Options
Even if a company has 3D printers in house, outsourcing selected work can open up options for printing using different materials, processes, properties and colors. These options can be used singularly, in combinations, and/or integrated with traditional processes.
The most commonly used systems for rapid prototyping are based on stereolithography (SLA). Although SLA has been around for more than 30 years, it hasn't stood still. The technology for creating precise and accurate resin-based parts has made huge leaps over the last three decades in areas such as speed, affordability, range of materials, cleanliness, level of detail, texturing, finishing, and overall automation of the 3D printing process. A company might have SLA systems in house, but they might pale in comparison to the latest and greatest at an outside facility.
Selective laser sintering (SLS) has been around about as long as SLA technology, but it too has continued to evolve rapidly in its ability to produce durable and heat-resistant parts. SLS doesn't require support structures, making it capable of producing geometries that no other technology can. Common SLS applications include housings, machinery components, functionality testing, jigs and fixtures, ducting, customized consumer goods, mechanical joints, snap fits, and living hinges.
MultiJet technology is an inkjet printing process that deposits either photo-curable plastic resin or casting wax layer by layer to build a part. MultiJet printers output parts with high accuracy and resolution. They are used for applications such as design validation, aesthetic assessment, performance and assembly testing, manufacturability testing, rapid tooling, and jigs and fixtures manufacturing.
ColorJet printing outputs full-color parts with complex geometries at high speeds. The realistic color range makes this technology appropriate for aesthetic and ergonomic evaluation of new designs and authentic-looking demonstration models for trade shows and sales presentations.
Direct metal printing (DMP) delivers high accuracy, precision and design freedom for handling complex free-form surfaces, lattice structures, conformal channels, and thin walls. DMP is used for end-use replacement parts, producing lighter-weight parts, reducing the number of parts within an assembly, increasing part performance, and strengthening parts and assemblies.
Fused deposition modeling (FDM) is commonly used for high-strength ABS-like parts and prototypes. It allows parts to go directly from 3D CAD to thermoplastic materials without tooling. Applications include design validation, fit and function testing, small production runs, and end-use jigs and fixtures.
The cast urethane process enables production of parts that mimic the appearance and physical properties of injection-molded parts. A 3D-printed master pattern is imprinted into SRM (silicone rubber mold) tooling, enabling manufacturers to deliver cast urethane parts within days. An additional benefit of the process is the ability to over-mold existing parts or hardware with a second material.
What to Look for in a Provider
Besides solid references and the experience one always seeks in an outsourced service provider, there are several key factors one should look for in a 3D printing partner, including:
- Ease of engagement — Free online quotes, fast turnaround and shipping times.
- Level of expertise — Proven success in all key areas of design, manufacturing and technologies related to the project.
- International footprint with localized service — Localized service to accommodate different cultures, languages and working conditions and a worldwide presence to ensure the greatest number of available resources and expertise.
- Diversity of approaches and capabilities — Having the machines, expertise, materials and resources to meet a client's exact needs.
- Creative solutions — The ability to explore a solution that the customer might not have considered, but that delivers a breakthrough in design, performance and/or affordability.
Anticipating the Future
According to a report by the research firm Markets and Markets, the 3D printing market is expected to be worth $32.78 billion by 2023, with a compound annual growth rate (CAGR) of nearly 26 percent between now and 2023.
The report attributes this growth to factors such as the ease of development of customized products, ability to reduce overall manufacturing costs, and government investments in 3D printing projects for the development and deployment of the technology.
Additional factors specific to outsourced 3D printing include:
- Worldwide competitive pressures for faster time to market.
- The need for greater innovation to develop new products aligned to lifestyle and societal changes.
- Cost savings through the ability to verify design changes early in product development.
- The ability to implement just-in-time prototyping and low-volume manufacturing that speeds production and reduces warehousing and shipping costs.
- Increasing movement towards production agility and nimbleness vs. traditionally large, slow-moving operations.
All the indicators point to a bright future for those adopting 3D printing, either in-house, through outsourcing partnerships, or as a combination of both.
Ziad Abou is vice president and general manager of the 3D Systems On Demand division. | <urn:uuid:935b1511-7b3d-47cd-b39a-efd7a188c2be> | CC-MAIN-2022-40 | https://www.mbtmag.com/home/blog/13248505/outsourced-prototyping-and-manufacturing-help-satisfy-modern-market-expectations | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00790.warc.gz | en | 0.926483 | 1,211 | 2.609375 | 3 |
Using Variables to Create Timestamps for Your Files
Timestamps are useful tools to use in your automation tasks to track when particular events occur.
The following system variables are available for you to use to create timestamps for your files:
You can combine any of these variables to design your own timestamp formats.
To design date/time timestamps, follow these steps:
- Launch the Variable Manager.
- Click on "Show System Variables".
- Select the Date/Time variable and use a combination of variables.
Using the Date Variable with File Names
If you use the Date variable to append a date to file names, you cannot use a slash (/) in your date format. Microsoft Windows prohibits the use of the slash character in file names.
- To create a timestamp for Year, Month, and Day that is appended to the end of string 'ABC', type:
This example might look like this: ABC2013521
- To create a complete timestamp for the previous example with the current date and time, type:
This example might look like this: ABC201352107:03:03
- For clarity, you can insert text characters between the variables.
For example, to display ABC2013-5-21, type: | <urn:uuid:111fca12-2fad-4417-9bc0-e8541a49e739> | CC-MAIN-2022-40 | https://docs.automationanywhere.com/zh-TW/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/using-variables/using-variables-to-create-timestamps-for-your-files.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00790.warc.gz | en | 0.723399 | 327 | 2.9375 | 3 |
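For readers outside the Variable Manager, the same timestamp formats can be sketched in ordinary Python; the date below is fixed so the output matches the article's examples. One caveat worth noting: Windows also disallows ':' in file names, so the time-bearing format suits logging or display rather than file naming.

```python
from datetime import datetime

now = datetime(2013, 5, 21, 7, 3, 3)  # fixed so output matches the examples

# Year + Month + Day appended to 'ABC', mirroring the first example.
print(f"ABC{now.year}{now.month}{now.day}")  # → ABC2013521

# Year + Month + Day + Hour:Minute:Second, mirroring the second example.
# (Colons are not allowed in Windows file names, so use this for logs.)
print(f"ABC{now.year}{now.month}{now.day}{now:%H:%M:%S}")  # → ABC201352107:03:03

# With separator characters between variables for clarity.
print(f"ABC{now.year}-{now.month}-{now.day}")  # → ABC2013-5-21
```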
Have you ever traveled to a place where you didn't speak the local language and attempted to ask for help? Or have you attempted to say something in another dialect but it came out meaning something entirely different?
Language is the key to communication and a critical component in effective public-private information sharing in the cyber domain. Unfortunately — although some international organizations have attempted to document them — there are no common definitions for cyber terms globally across government, business, and academia. When you throw in industry buzzwords and marketing jargon around cybersecurity, it can become nearly impossible for organizations to speak quickly and efficiently with each other about security.
To fully engage in cross-industry dialogue within the context of cybersecurity, we must speak the same language. We can't outmaneuver threats without it.
Defining the Term "Cyber Attack"
There are at least 16 different definitions of the term "cyber attack" globally, all of which span a fairly large spectrum. Most of them at least mention something about denying, disrupting, destroying, or degrading information systems. Using this premise, Sony, Ukrenergo, Dyn, and Saudi Aramco experienced cyber attacks. However, the events that took place at OPM, Target, and Banner Health were not cyber attacks, although they were reported as such. So, what do we call what happened? A host of other security-related terms might apply, including data exfiltration, privacy breach, data breach, intrusion, cyber incident, and cyber compromise. In some cases, it may be a combination of several of these.
But even these terms have a variety of definitions. In the newly released Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, authored by 19 international law experts, an example of this is cited on page 418: "…the Experts noted general agreement that cyber operations that merely cause inconvenience or irritation to the civilian population do not rise to the level of attack, although they cautioned that the scope of the term 'inconvenience' is unsettled."
So why does all this matter? When multiple media outlets label an event with terms of varying definition, we negatively affect our ability as security professionals to characterize and respond appropriately. As an industry, the lack of defined terms is helping the hackers win. Mismatched terminology can introduce unnecessary fear, uncertainty, and doubt, and affect the potential for government authorities to assist a breach victim, alter the public's perception of the situation, or cause adversaries to push forward to achieve their objectives.
The Quest for Standard Terminology
There is a fairly recent concept that warrants particular attention to ensure government, industry, and academia are speaking the same language, especially in light of the global movement toward a more proactive security posture: active defense.
Active defense is a term that captures a spectrum of proactive cybersecurity measures that fall between traditional passive defense and offense, according to the George Washington University Center for Cyber & Homeland Security. There is a plethora of detail on this concept in a recent GWU report, but at its essence, active defense identifies a list of 11 techniques that private entities can employ to interdict cyber exploitations and attacks in a "gray zone." This zone falls between passive defense, which typically features basic internal security controls, and offensive cyber, which features more proactive activities security organizations can undertake, such as "hacking back."
These gray-zone active defense techniques range from information-sharing to denial and deception to botnet takedowns and rescue missions for recovering assets (the latter requiring close government cooperation). At the heart of this concept is the ability for the public and private sectors to partner on the planning and execution of these techniques.
Advancing toward a more universal spoken and written language in cyber will take time. But there are positive developments taking place. Some helpful concepts are gaining adoption and helping security professionals define their activities for their colleagues, industry peers, partners, and customers. The concepts below fall within the low-risk spectrum of active defense and can be executed given a shared technical language (e.g., shared semantic models):
- Active Response: According to SANS, active response is a mechanism that provides the intrusion-detection systems with the capability to respond to an attack when it has been detected.
- Adaptive Response: Adaptive response describes enablement of end-to-end context and automated response across multivendor environments. Because most security technologies aren't designed to work with each other, using frameworks like adaptive response gives vendors the ability to detect threats faster through analytics and collaborate on a response. This defense strategy for multilayered, heterogeneous security architectures enables faster decision-making and more cohesive responses to threats.
- Adaptive Security: Adaptive security is the ability to adapt and respond to a rapidly changing threat landscape by recognizing behavior rather than root files or code. The rise of technologies focusing on user and behavior analytics are a good example of adaptive security in action.
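To make "active response" concrete, the Python sketch below shows the simplest possible version of the idea: an automated block triggered by repeated failed logins. The class, threshold, and IP address are illustrative inventions, not part of any framework or product named above; a real deployment would push the block to a firewall and attach an expiry to limit the damage of false positives.

```python
from collections import defaultdict

class ActiveResponder:
    """Minimal active-response sketch: after a threshold of failed
    logins, the source IP is added to a block list automatically."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.failures = defaultdict(int)
        self.blocked = set()

    def record_failed_login(self, ip: str) -> None:
        self.failures[ip] += 1
        if self.failures[ip] >= self.threshold:
            self.blocked.add(ip)  # the automated "response" step

responder = ActiveResponder(threshold=3)
for _ in range(3):
    responder.record_failed_login("203.0.113.9")
print("203.0.113.9" in responder.blocked)  # → True
```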
Though that's a short list, these terms represent a step in the right direction for the industry. But we have a long way to go. Without a common language in cybersecurity, we can't achieve intelligent information-sharing both within a single organization or between the complex web of vendors and solutions in today's market. Lack of defined key terms is blocking the industry from effectively implementing anything beyond passive defensive mechanisms.
We must continue to strive toward establishing a common global cybersecurity language that spans government, industry, and academia: this is our center of gravity. Until we make progress, this is a deficiency that will remain a vulnerability that our common adversaries exploit to outpace and outmaneuver us.
- We Need A New Word For Cyber
- Law Enforcement At RSAC: Collaboration Is Key To Online Crime Fighting
- 7 Ways To Fine-Tune Your Threat Intelligence Model | <urn:uuid:fa67b61c-d551-4b7e-b6c6-0a8d735ff8ad> | CC-MAIN-2022-40 | https://www.darkreading.com/threat-intelligence/in-cybersecurity-language-is-a-source-of-misunderstandings | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00790.warc.gz | en | 0.936677 | 1,185 | 3 | 3 |
Making predictions about how technology will develop and affect our lives in the future is something that can be fraught with risk. When we see various experts tell us what they think is going to happen in the future, it is awe-inspiring and fascinating when we see their predictions come true. On the flip-side, the incorrect ones are often cringe-inducing (and both are immortalized on the internet for future generations to applaud or jeer at forever).
One of the most famous (or infamous) of those prognostications comes from a Newsweek article in 1995, where Dr. Clifford Stoll, astronomer and computer expert, attempted to forecast the direction of the internet. The article was essentially a prediction that the internet would never amount to much. And while some of Stoll’s predictions were pretty accurate, he got it wrong on the whole.
For instance, when Stoll talked about e-commerce (he called it “cyberbusiness”), he asked, “So how come my local mall does more business in an afternoon than the entire internet handles in a month?” Considering the trillions of dollars that are spent on ecommerce today, that comment is pretty cringey.
I think we can all safely assume today that the internet is no fad. Today, thousands of companies use the Internet and web applications as their primary means of conducting “cyberbusiness”. The pervasive use of hybrid and public cloud infrastructure to develop and publish web applications faster and faster means that there is a continuous growth of that business on the web.
That growth also means that the potential for loss, theft or abuse is growing. Large databases are sitting behind those web applications, each containing all manner of sensitive information like credit card numbers, Social Security numbers, health information, product codes, intellectual property, etc. Cyber criminals want that data.
So how do we go about building protections against the bad people? How do we keep cyber attackers from abusing weaknesses in our web applications and infrastructure? There are 5 tips that can help you start getting ahead of cyber criminals when they come after your SQL-based apps that you have deployed on the cloud: Secure coding practices, vulnerability scanning and penetration testing, layered defense, blocking and tackling, and intelligent log analysis. For the first tip in this 5-part series, let’s start by looking at secure coding practices.
1. Secure coding practices for web applications – don’t trust the input
The first line of defense against web application security flaws like injection and cross-site scripting should always be the creation and deployment of secure code. And the first rule of secure coding is that all input going into a web application should be considered untrusted and potentially malicious. The source of the data does not matter, even if you consider that source trusted (e.g., data coming from an internal system rather than the public internet). Secure coding techniques can then be applied to make sure that ALL data coming into the web application is cleaned or blocked.
Let’s look quickly at an illustration of why secure coding is so important. A cyber attacker is focusing on a customer relationship management (CRM) application. In this case, the CRM app has a simple web form that a salesperson can use to input a 5-digit customer ID to look up that customer’s latest purchases. The cyber attacker finds the form and starts inputting various SQL commands to see if the form field is vulnerable to the much-dreaded (and still very common) SQL injection (SQLi) attack. At this point, there are two possibilities: 1) the developer wrote the web application in such a way that data plugged into that form is validated and thus hardened against SQLi; or 2) the form field allows pretty much any input, and the cyber criminal can send SQL commands from the web application directly into the database.
If the second of these two scenarios is true, the cyber attacker is taking advantage of code that essentially trusts all input and does not attempt to validate that input. Commands sent by the cyber criminal can be interpreted by the database to perform actions like downloading a table from the database, or maybe even the entire database. Or, if the cyber attacker is having a bad day and just feels like wrecking your day, commands can be issued to delete or overwrite the data altogether. Either way, if your business is using a vulnerable form like this, someone will soon be explaining to your customers how their data got leaked or destroyed. That will not be good for business.
The good news is that the first scenario of a hardened and secure web application is very achievable. Web application attacks like SQLi and others are very well documented. Methods of mitigation (e.g., sanitization, parameterization, whitelisting) are easy to learn and can be found all over the web.
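To make the CRM example concrete, here is a minimal sketch in Python using SQLite as a stand-in database. The table and column names are invented for the illustration, but the two techniques shown, whitelist input validation and a parameterized query, carry over to any language and database driver.

```python
import re
import sqlite3

def lookup_purchases(conn, customer_id: str):
    # Whitelist validation: the form field is specified as a 5-digit ID,
    # so reject everything else before it reaches the database.
    if not re.fullmatch(r"\d{5}", customer_id):
        raise ValueError("customer_id must be exactly 5 digits")
    # Parameterized query: the driver binds the value separately from the
    # SQL text, so injected SQL fragments are treated as plain data.
    cur = conn.execute(
        "SELECT item, amount FROM purchases WHERE customer_id = ?",
        (customer_id,),
    )
    return cur.fetchall()

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (customer_id TEXT, item TEXT, amount REAL)")
conn.execute("INSERT INTO purchases VALUES ('12345', 'widget', 19.99)")

print(lookup_purchases(conn, "12345"))  # [('widget', 19.99)]
try:
    lookup_purchases(conn, "1; DROP TABLE purchases--")
except ValueError as err:
    print("rejected:", err)  # rejected: customer_id must be exactly 5 digits
```

Note that validation alone is not a substitute for parameterization; the two layers together are what make the injected string inert.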
One such source for learning about how to code more securely is the Open Web Application Security Project (OWASP). Their Top 10 list of the most critical web application security risks is the de facto web application security standard for companies and regulating bodies around the world. Each of the Top 10 entries contains various methods for blocking attacks.
If you want to know more about specifically avoiding SQLi from the example above, you should take a look at the OWASP SQL Injection Prevention Cheat Sheet. Also, if you really want to dig into SQLi, I suggest reading this article. All of these resources will get you started in your research on how to code more securely.
This post was a collaborative effort with Joe Hitchcock. | <urn:uuid:cf9670a6-342f-400e-8d7a-25130a72f426> | CC-MAIN-2022-40 | https://www.alertlogic.com/blog/5-tips-for-protecting-your-sql-based-cloud-deployed-web-applications-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00790.warc.gz | en | 0.939152 | 1,144 | 2.515625 | 3 |
Creator: Georgia Institute of Technology
Category: Software > Computer Software > Educational Software
Topic: Electrical Engineering, Physical Science and Engineering
Tag: applications, sensors, source
Availability: In stock
Price: USD 49.00
This course explains how to analyze circuits driven by direct current (DC) sources. A DC source is one whose current or voltage is constant.
Circuits with resistors, capacitors, and inductors are covered, both analytically and experimentally. Some practical applications in sensors are demonstrated. | <urn:uuid:c4c6b08b-0721-4e71-b5b6-6f9c070d00b1> | CC-MAIN-2022-40 | https://datafloq.com/course/linear-circuits-1-dc-analysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00790.warc.gz | en | 0.856425 | 108 | 3.125 | 3 |
Macs may be heralded as more secure than their competitors, but they too can be hacked. Malicious programs that appear like harmless files or apps can infect your computer and cause it to slow down to a crawl. If this ever happens, you must be able to remove the malware quickly before the infection gets worse.
Minimize the threat
Containment is the top priority when your Mac is hit with malware. The first thing you should do is disconnect your device from the internet, which prevents the malware from spreading and cuts off any remote access the attackers may have.
Then, reboot your computer in safe mode (hold the Shift key when the Apple logo appears) so you can run only mission-critical programs while you resolve the issue. Since most login items and third-party extensions don’t load in safe mode, malicious programs are easier to remove. You should also open Activity Monitor, which can be found in the Utilities folder, to disable any non-essential apps.
Even if you are in safe mode, assume that your activities are being monitored by cybercriminals. Log out of any applications and don’t type any passwords in case a hidden keylogger or spyware program is running in the background to steal sensitive information.
Scan and remove
Once you’ve contained the threat, update your anti-malware program and run a full system scan. This ensures your security software has the latest malware definitions to identify and remove the most recent threats. But if there are suspicious applications your anti-malware program failed to detect, you can always uninstall them manually, just as you would any other software. Contact an IT expert if you’re unsure whether a program is safe to remove.
Malware may also add unwanted toolbars, display persistent ads, and redirect you to a dangerous homepage. In such cases, you’ll need to clean your browser. For Safari, click Safari from the menu bar, go to Preferences, and remove unwanted plug-ins and extensions. On Chrome, click Chrome from the menu bar and select Preferences > Advanced > Restore settings to their original defaults.
Consider a factory reset
If none of the methods above worked, the only way to clear the malware infection is to factory reset your operating system. To do this, hold down Command+R to enter Recovery Mode, wipe the hard drive with Disk Utility, and reinstall macOS. Keep in mind that this will wipe apps and files stored on your device, so back up everything beforehand.
Fortify your defenses
Your computer won’t stay malware-free for long unless you’re proactively protecting it. Make sure to update your applications and security software regularly to defend against the latest malware strains and cyberattacks. You should also implement preventive measures like advanced threat prevention systems and firewalls, which stop malware in its tracks.
Backups are another important solution for dealing with malware. An easy way to start using this technology is to enable Apple’s built-in backup solution, Time Machine, and store everything in an external drive. The feature can make hourly, daily, and weekly backups, letting you restore your computer to the point before it was infected with malware.
If any of this sounds too technical for you, don’t worry. Our cybersecurity specialists can help you remove malware without any complications. We’ll even throw in proactive security monitoring services to ensure your Mac steers clear of cyberthreats. Call us now. | <urn:uuid:d458df53-8c12-461b-95b1-0d68f0a7b814> | CC-MAIN-2022-40 | https://www.alcalaconsulting.com/2019/07/25/what-to-do-when-your-mac-is-hit-with-malware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00790.warc.gz | en | 0.898912 | 696 | 2.578125 | 3 |
The Defense Advanced Research Projects Agency (DARPA) was founded in 1958 to protect American citizens by maintaining national security. By developing new technologies which are superior to potential adversaries in the military and preventing strategic surprise, they can successfully achieve their mission. Groundbreaking research provided technology which allows the government to find terrorists without invading the privacy of their citizens.
Traditionally, dogs have complemented security practices with their superior sense of smell. However, recent technology will likely render many of these security dogs obsolete. Instead of having to smell each individual person, a machine will be able to catch the scent of explosives and other dangerous chemicals. Sensors can be placed at door frames to catch the scent of a wisp of hair or piece of clothing as people pass through the entrance. This technology is unobtrusive, since people who are not a threat can pass without any problems. Instead of treating every single person as guilty until proven innocent with high-tech machines and other devices, this technology will allow civilians to go about their day without issues. It would be invaluable for places like airports, which otherwise have to use expensive equipment that is hard to maintain. Of course, it also has countless other applications.
The proteins responsible for our sense of smell are called olfactory receptors. Since the proteins lose their structure quickly when exposed to water, scientists have a hard time synthesizing them in the lab. It is even more difficult to produce them in quantities which are large enough to be useful. Several years have been spent purifying and segregating receptor proteins using materials such as detergent and wheat germ. Over the years, research has made it easier to grow enough proteins to use in industries and further research.
Many Americans feel that being innocent until proven guilty is a right they are losing as citizens. Therefore, it is important that the technologies developed do not infringe upon anyone's rights while keeping everyone safe. This technology is groundbreaking because it allows people to maintain their rights and be protected at the same time. A minority of people also believe that animals should be used sparingly, or not at all, with regard to human safety; they feel animals should not be put in risky situations. This technology may eliminate the need for dogs to sniff out explosives and other substances entirely. It could also save the US government millions of dollars: beyond avoiding the cost of rebuilding sites destroyed by acts of terror, it would cut spending on training for animals and other security workers. Overall, it is a small but effective solution for a big problem.
Americans want to feel safe and protected, but do not want their basic right to privacy infringed upon. Instead of expensive, intrusive machinery that is hard to maintain, recent developments in technology make it easier to sniff out potential threats. This avoids many of the ethical problems raised by practices like X-ray body scans or pat-down searches before boarding a plane. While effective, such practices are controversial to many people. Therefore, we are finding solutions that provide security while maintaining privacy for the innocent.
Conrad Wolfram, the man behind knowledge engine Wolfram:Alpha, has announced a new document format which he claims offers an expanded medium for explaining technical, scientific, or otherwise complex concepts.
Dubbed CDF - Computable Document Format - the new standard aims to bring the kind of computation Wolfram is known for to portable documents that can be used within a browser, on a desktop and on hand-held devices.
"CDF binds together and refines lots of technologies and ideas from our last 20-plus years into a single standard," Wolfram explains in a blog post on his company's site. "Knowledge apps, symbolic documents, automation layering, and democratised computation, to name a few.
"Disparate though these might appear, they come together in one coherent aim for CDF: connecting authors and readers much better than ever before."
The core concept of CDF is to allow content creators a 'knowledge container' format that is as easy to author as a standard document, but features the same level of interactivity as an application. "The idea," claims Wolfram, "is for CDFs to make live interactivity as everyday a way to communicate as spreadsheets made charts."
The free CDF Player - available as a desktop application for Windows, Mac, and Linux, or as a browser plugin for selected platforms - demonstrates Wolfram's intention well: while a copy of his blog post looks standard at first, an embedded CDF object allows users to experiment with the concepts behind the Doppler Effect in a simple, straightforward manner.
By adding truly interactive content to documents, Wolfram believes that complex concepts can be far better communicated. "Static documents take their share of the blame in making us 'information rich, but understanding poor,' to repurpose the common saying," he said. "For too long, authors have had to aggressively compress their ideas to fit down the narrow communication pipe of static documents, only for readers at the other end to try to uncompress, reconstruct, and guess at the original landscape of information.
"With CDFs we’re broadening this communication pipe with computation-powered interactivity, expanding the document medium's richness a good deal," claims Wolfram. "We're also improving what I call the 'density of information' too: the ability to pack understandable information into a small space - particularly important on small screen devices like smartphones."
Wolfram argues that CDF extends beyond existing technologies for introducing interactivity into documents, such as Adobe's Flash and the various capabilities of HTML5. "With a computation-powered knowledge container," he said, "you don't need to pre-compute and pre-generate - you can leave that to runtime, your authoring can be at a much higher level than for example in Flash or HTML5."
It's an approach which has already won some major support in the form of contributions to the Wolfram Demonstrations Project, a collection of over 7,000 'knowledge apps' created and submitted by content creators who aren't professional programmers. "Unless content originators - that is, teachers, journalists, analysts, managers, academics, and so on - make the content interactive themselves, it won't be interactive," Wolfram explains. "It's simply too expensive and too difficult if professional programmers are involved."
Wolfram's mention of expense does skim over one important consideration for the future success of CDF as a mainstream 'knowledge container' standard, however. The CDF format is, currently, closed, with the only compatible software created by Wolfram and his team. While the Player is free to download, creation requires the purchase of a package called Mathematica.
Created by Wolfram in 1988, Mathematica - currently on its eighth release - is an impressively powerful computation engine that allows users to calculate and visualise data in a convenient manner with a minimum of programming knowledge. It's popular, but it's also expensive: a single-user commercial licence for up to four processing cores will set you back £2,050. Non-commercial users can opt for the 'Home' release, but even that costs £195.
Wolfram's goal is laudable, but the approach is questionable: until there is a way to cheaply create - rather than merely consume - CDF content, it looks set to remain a niche format. | <urn:uuid:2bbd6d7d-381b-42e0-a42a-8713cb2dfc79> | CC-MAIN-2022-40 | https://www.itproportal.com/2011/07/21/wolfram-punts-expanded-medium-technical-docs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00790.warc.gz | en | 0.938299 | 904 | 2.890625 | 3 |
Used to build everything from a planetarium in post-WWI Germany to mobile yoga studios at outdoor festivals today, the geodesic dome has proven to be a lasting concept for highly stable structures of any size. Structural stability is a valued goal in data center design, but the idea of building a data center shell using a spherical skeleton that consist of great circles intersecting to form a series of triangles – the most stable shape we know of – is novel.
That is the approach Perry Gliessman took in designing the recently completed Oregon Health and Science University data center. Gliessman, director of technology and advanced computing for OHSU’s IT Group, said structural integrity of a geodesic dome was only one of the considerations that figured in the decision. It was “driven by a number of requirements, not the least of which is airflow,” he said.
One of the new data center’s jobs is to house high-performance computing gear, which requires an average power density of 25 kW per rack. For comparison’s sake, consider that an average enterprise or colocation data center rack takes less than 5 kW, according to a recent Uptime Institute survey.
Needless to say, Gliessman did not have an average data center design problem on his hands. He needed to design something that would support extreme power densities, but he also wanted to have economy of space, while using as much free cooling as he could get, which meant maximizing outside-air intake and exhaust surface area. A dome structure, he realized, would tick all the boxes he needed to tick.
No chillers, no CRAHs, no problem
The data center he designed came online in July. The resulting $22-million facility has air-intake louvers almost all the way around the circumference. Gigantic fan walls suck outside air into what is essentially one big cold aisle, although it is really lots of aisles, rooms and corridors that are interconnected. Inside the dome, there are 10 IT pods. The pods are lined up in a radial array around a central core, which contains a network distribution hub sitting in its own pod. This placement ensures equal distance for air to travel through IT gear in every pod and equal and shortest distance to stretch cables over to the network hub.
Each pod’s server air intake side faces the space filled with cold air. The exhaust side is isolated from the space surrounding it but has no ceiling, allowing hot air to escape up, into a round plenum above. Once in the plenum, it can either escape through louvers in the cupola at the very top of the dome or get recirculated back into the building.
There are no air ducts, no chillers, no raised floors or computer-room air handlers. Cold air gets pushed through the servers partially by server fans and partially because of a slight pressure differential between the cold and hot aisles. It goes into the plenum because of the natural buoyancy of warm air.
When outside air temperature is too warm for free cooling, the data center’s adiabatic cooling system kicks in automatically to help out. Beaverton, Oregon (where the facility is located), experienced some 100°F days recently, and the evaporative-cooling system cycled for about 10 minutes at a time at 30-minute intervals, which was more than enough to keep supply-air temperature within ASHRAE’s current limits. Gliessman said he expects the adiabatic cooling system to kick in several weeks a year.
In the opposite situation, when outside air temperature is too cold, the system takes hot air from the plenum, mixes it with just enough cold air to bring it down to the necessary temperature and pushes it into the cold aisle.
The army of fans that pull outside air into the facility have variable frequency drives and adjust speed automatically, based on air pressure in the room. When server workload increases, server fans start spinning faster, sucking more air out of the cold aisle, causing a slight drop in pressure, which the fan walls along the circumference are programmed to compensate for. “That gives me a very responsive system, and it means that my fans are brought online only if they’re needed,” Gliessman said.
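That pressure-compensation behavior amounts to simple feedback control. The sketch below is a toy proportional controller, not OHSU's actual control logic; the setpoint, gain, and units are illustrative assumptions.

```python
def fan_speed_step(current_speed, pressure, setpoint=12.0, gain=0.8,
                   min_speed=0.0, max_speed=100.0):
    """One control step for a pressure-compensating fan wall.

    Pressure and setpoint are in pascals, speed in percent of maximum;
    all numbers here are illustrative, not the facility's real parameters.
    When servers pull more air, cold-aisle pressure sags below the
    setpoint and the fans speed up proportionally to compensate.
    """
    error = setpoint - pressure  # positive when pressure is too low
    new_speed = current_speed + gain * error
    return max(min_speed, min(max_speed, new_speed))

# Server load rises: cold-aisle pressure sags from 12 Pa to 9 Pa,
# and the fan wall ramps up in response.
speed = 40.0
for pressure in (12.0, 11.0, 10.0, 9.0):
    speed = fan_speed_step(speed, pressure)
    print(f"pressure={pressure:.1f} Pa -> fan speed {speed:.1f}%")
```

The clamping to a minimum and maximum speed mirrors the point in the article: fans spin up only as far as needed, and only when needed.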
Legacy IT and HPC gear sharing space
That system can cool 3.8 megawatts of IT load, which is what the data center is designed to support at full capacity. There is space for additional pods and electrical gear. Each pod is 30 feet long and 4 feet deep. The pods have unusually tall racks – 52 rack units instead of the typical 42 rack units – and there is enough room to accommodate 166 racks.
Since OHSU does education and research while also providing healthcare services, the data center is mission-critical, supporting both HPC systems as well as hospital and university IT gear. Gliessman designed it to support a variety of equipment at various power densities. “I have a lot of legacy equipment,” he said. All infrastructure components in the facility are redundant, and the only thing that puts it below Uptime Institute’s Tier IV standard is the lack of multiple electricity providers, he said.
It works in tandem with the university’s older data center in downtown Portland, and some mission-critical systems in the facility run in active-active configuration with systems in the second data center.
Challenging the concrete-box dogma
Because the design is so unusual, it took a lot of back-and-forth with vendors that supplied equipment for the project and contractors that built the facility. “Most people have embedded concepts about data center design and, like all of us folks, are fairly religious about those,” Gliessman said. Working with vendors was challenging, but Gliessman had done his homework (including CFD modeling) and had the numbers to convince people that his design would work.
He has been involved in two data center projects in the past, and his professional and educational background includes everything from electronics, IT and engineering to biophysics. He does not have extensive data center experience, but, as it often happens, to think outside the box, you have to actually be outside of it.
Take a virtual tour of OHSU’s “Data Dome” on the university’s website. They have also posted a time-lapse video of the data center’s construction, from start to finish. | <urn:uuid:898565cf-d0a1-49ce-ba95-3f1d98f03e8a> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/archives/2014/08/18/geodesic-dome-data-center-design-oregon/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00190.warc.gz | en | 0.960388 | 1,346 | 2.703125 | 3 |
Passwords have long been known to be insecure, with malicious actors frequently bypassing password-based security protocols. However, security concerns are not the only drawback of password-based authentication. They can be costly and burdensome to manage, with users frequently forgetting and thus having to reset passwords, which consequently creates a poor UX.
A Promising Alternative
These problems highlight the need for organisations to embrace passwordless authentication. Passwordless authentication can eliminate these problems, enhancing security and providing a better UX.
Minimising reliance on passwords, or eliminating them altogether, diminishes their value to bad actors. By replacing them with more secure forms of authentication, such as biometrics, it becomes far more difficult and expensive for bad actors to gain unauthorised access. When combined with other security mechanisms, such as behavioural biometrics and risk-based MFA, logins become even more secure.
Benefits for Usability
Opting for passwordless authentication can improve the UX by removing the friction commonly associated with password-based authentication. Passwords are easily forgotten, requiring users to go through the hassle of resetting them. Additionally, the more secure a password is, the more frustrating it tends to be to enter manually. These sources of friction often result in a poor UX.
By implementing passwordless authentication, the need to create, manage, and remember (or reset) passwords is eliminated. Users can instead enjoy a more seamless experience, verifying their identity with convenient login mechanisms such as facial recognition.
Immediate and Long-term Prospects
While true passwordless authentication remains a long way off from widespread adoption, there has been a surge in creating passwordless experiences in which passwords are simply masked, such as biometrics being used to unlock a password. Although this middle-ground solution retains the vulnerability to credential-based attacks, it does offer consumers the improved UX associated with true passwordless authentication.
This is representative of the steady transition to a passwordless future, which, while a number of years away, signals the ever-growing importance of biometrics. An ecosystem of authentication methods is the most likely evolutionary route to a secure but usable mobile payments system, with rules of use determined by the risk level of each use case. The goal is a seamless, secure payments system that works across omni-mobile channels and meets increasingly stringent regulations.
Mobile Payment Biometrics Market Research
Our latest research found:
- Total value of biometrically authenticated remote mobile payments will reach $1.2 trillion globally by 2027; rising from $332 billion in 2022.
- Total volume of biometrically authenticated remote mobile payments will grow by 383% over the next 5 years, reaching 39.5 billion globally by 2027.
- To maintain trust and reduce fraud, financial institutions are implementing step-up authentication, where certain transactions are escalated for biometric approval based on risk scoring. Therefore, vendors must offer multiple ways to authenticate, as well as developing new techniques to keep biometrics secure.
- Mobile authentication vendors must prioritise the design and implementation of enhanced liveness detection, and anti-spoofing techniques, to combat the ever-evolving role of fraudulent players and ensure that the security of facial recognition solutions is not compromised. | <urn:uuid:28a32d1a-bde9-4f4b-9494-4d8601dacf5b> | CC-MAIN-2022-40 | https://www.juniperresearch.com/blog/june-2022/biometric-authentication-passwordless-future | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00190.warc.gz | en | 0.932826 | 700 | 2.640625 | 3 |
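The step-up authentication described in the findings above can be illustrated with a toy risk scorer. Every weight, threshold, and signal here is invented for the example and not drawn from any real product:

```python
def authentication_requirement(amount, new_device, foreign_ip,
                               low=30, high=60):
    """Decide how much authentication friction a login or payment needs.

    All weights and thresholds are illustrative, not from a real system.
    """
    score = 0
    if amount > 500:   # unusually large transaction
        score += 25
    if new_device:     # unrecognized device
        score += 30
    if foreign_ip:     # unusual location
        score += 20
    if score >= high:
        return "biometric + second factor"  # step up: high risk
    if score >= low:
        return "biometric"                  # step up: medium risk
    return "none"                           # frictionless login

print(authentication_requirement(20, False, False))  # none
print(authentication_requirement(900, True, False))  # biometric
print(authentication_requirement(900, True, True))   # biometric + second factor
```

The design point is that low-risk logins stay frictionless, while only transactions that accumulate enough risk signals are escalated for biometric approval.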
Object recognition is a subfield of computer vision, artificial intelligence, and machine learning that uses AI models to recognize and identify the most prominent objects (e.g., people or things) in a digital image or video. Image recognition, another subfield of AI and computer vision, seeks to recognize the high-level contents of an image.
Image recognition is one of the most popular and recognizable applications of artificial intelligence, machine learning, and computer vision. And now, highly accurate, real-time image recognition is available for developers via image recognition APIs. Some platforms offer APIs that can help organizations add image and video analysis capabilities, but what’s needed is an easy-to-implement image recognition API that offers powerful computer vision services – like Chooch AI.
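Most REST image recognition APIs share the same request shape, whatever the vendor. The sketch below assembles such a request in Python; the endpoint URL and JSON field names are placeholders (every provider, Chooch AI included, documents its own), so check the vendor's API reference before sending anything.

```python
import base64
import json

def build_recognition_request(image_bytes, api_key,
                              endpoint="https://api.example.com/v1/predict"):
    """Assemble a generic image-recognition API call.

    The endpoint and JSON field names are placeholders; the overall
    shape (auth header plus a base64-encoded image payload) is common
    to most REST vision APIs.
    """
    return {
        "url": endpoint,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(
            {"image": base64.b64encode(image_bytes).decode("ascii")}
        ),
    }

req = build_recognition_request(b"\x89PNG...", "MY_KEY")
print(req["url"])  # https://api.example.com/v1/predict
# Sending it is one line with any HTTP client, e.g.:
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```

The response is typically a JSON list of labels with confidence scores, which is what makes these APIs easy to wire into an existing application.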
Image recognition is one of the most advanced, and most widely used, applications of artificial intelligence. In this article, we’ll discuss the difference between an image recognition SDK and an image recognition API, and why that difference matters when choosing a computer vision platform.
Computer vision and artificial intelligence have been used for years in the media and entertainment industries—but thanks to recent advances in deep learning, the potential applications of image recognition are broader than ever before. What’s more, these technological developments coincide with an explosion in the use of images and videos, especially for marketing and advertising purposes. It’s estimated that 1 trillion photos were taken in 2015 alone, and posts with visual content receive 94 percent more visits and engagements than text-only posts.
Over the past decade, computers using deep neural networks have been able to approach—if not exceed—human performance at object recognition tasks. In 2015, the PReLU-Net deep network became the first computer model to surpass human accuracy on the ImageNet 2012 dataset, with 4.94 percent error compared with humans’ 5.1 percent. | <urn:uuid:7f4527cb-076f-429b-bc93-663272095675> | CC-MAIN-2022-40 | https://chooch.ai/ai/image-recognition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00190.warc.gz | en | 0.924285 | 382 | 2.953125 | 3 |
Written by: Jay H.
According to a recent FBI notification, threat actors are turning their attention to the food and agriculture industry because of its increasing reliance on smart tech. These attacks are disrupting operations, causing financial loss, and negatively impacting the food supply chain. As previous events have demonstrated, ransomware incidents can devastate more than just the targeted business, like the Colonial Pipeline attack that disrupted operations in the US!
These ransomware attacks can impact businesses all across the sector, from small farms to large producers, processors and manufacturers, and markets and restaurants.
Attackers Exploiting Vulnerabilities
Attackers are taking advantage of network vulnerabilities to steal data and encrypt systems. Since most agribusinesses depend on smart technologies, industrial control systems, and internet-based automation systems, there is a large attack surface.
Victim businesses experience a significant financial loss resulting from ransom payments, loss of productivity, and remediation costs. Not only that, but many also suffer the loss of proprietary information and personally identifiable information and may endure reputational damage.
“Larger businesses are targeted based on their perceived ability to pay higher ransom demands, while smaller entities may be seen as soft targets, particularly those in the earlier stages of digitizing their processes,” said the FBI.
Ransomware Impact Growing
From 2019 to 2020, the average ransom payment doubled and the average cyber insurance payout increased 65 percent. Separate studies have also shown that between 50-80 percent of victims that paid the ransom experienced another ransomware attack, either by the same or different actors. The most common methods cybercriminals used to distribute ransomware are email phishing campaigns, Remote Desktop Protocol (RDP) vulnerabilities, and software vulnerabilities. This highlights the importance of cybersecurity awareness training as well as software patch management. Without them, your business is at extreme risk of a disastrous cyber attack!
Ransomware actors will continue to target businesses in the food and agriculture industry and elsewhere. To mitigate your organization’s risk, the FBI provided the following recommendations:
- Regularly back up data, air gap, and password-protect backup copies offline. Ensure copies of critical data are not accessible for modification or deletion from the system where the data resides.
- Implement network segmentation.
- Implement a recovery plan to maintain and retain multiple copies of sensitive or proprietary data and servers in a physically separate, segmented, secure location (i.e., hard drive, storage device, the cloud).
- Install updates/patch operating systems, software, and firmware as soon as they are released.
- Use multifactor authentication with strong passphrases where possible.
- Use strong passwords and regularly change passwords to network systems and accounts, implementing the shortest acceptable timeframe for password changes. Avoid reusing passwords for multiple accounts.
- Disable unused remote access/RDP ports and monitor remote access/RDP logs.
- Require administrator credentials to install software.
- Audit user accounts with administrative privileges and configure access controls with least privilege in mind.
- Install and regularly update anti-virus and anti-malware software on all hosts.
- Only use secure networks and avoid using public Wi-Fi networks. Consider installing and using a VPN.
- Consider adding an email banner to messages coming from outside your organization.
- Disable hyperlinks in received emails.
- Focus on cybersecurity awareness and training. Regularly provide users with training on information security principles and techniques, as well as emerging cybersecurity risks and vulnerabilities (i.e., ransomware and phishing scams).
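The first recommendation, back up and verify, can be sketched in a few lines of Python. This is an illustrative sketch with assumed directory arguments, not a hardened backup tool; a real air-gapped backup would additionally move the verified copy to disconnected, password-protected media:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so the backup copy can be verified byte-for-byte."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(src_dir: Path, dst_dir: Path) -> bool:
    """Copy every file under src_dir to dst_dir and confirm the hashes match."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in src_dir.rglob("*"):
        if src.is_file():
            dst = dst_dir / src.relative_to(src_dir)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if sha256_of(src) != sha256_of(dst):
                return False  # corrupted copy: do not trust this backup
    return True
```

Verifying each copy matters because a backup that silently fails to restore is exactly what ransomware recovery plans get burned by.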
Reduce Your Organization’s Risk
Cybercriminals are constantly looking for new organizations and vulnerabilities to exploit, and combating them is exhausting and difficult. That’s why many organizations depend on an IT support managed service provider like Design2Web IT to protect them from cybercriminals and keep them safe. Contact us today to learn more about our cybersecurity services.
Some antitrust advocates, such as Lina Khan, are concerned that the goals of antitrust policy are too narrow. But the future goal of antitrust policy should be what it has been for generations, namely, to make the market work better for consumers.
The original goal of the antitrust movement was to curb the economic and political power of large corporations, a new organizational form that emerged in the late 19th and early 20th centuries and was proving to be an extraordinarily efficient institutional innovation. Small retailers, manufacturers and farmers wanted protection from these developments. Antitrust policy was their weapon.
But in the 1930s President Franklin Roosevelt’s antitrust chief, Thurman Arnold, changed the course of antitrust enforcement, establishing the goal of consumer welfare, not the protection of small companies, as the paramount objective. Carl Shapiro recently restated this goal as: “protecting the competitive process so consumers receive the full benefits of vigorous competition.” Through the ups and downs of enforcement styles in the ensuing period, this objective has been the constant lodestar of antitrust.
Recently, the Administration’s nominee to head the Federal Trade Commission, which is one of the two Federal agencies tasked with enforcing the antitrust laws, endorsed this consumer welfare standard, saying without qualification: “The FTC is all about protecting and improving consumer welfare.”
There is a real need for vigorous antitrust policy
The marketplace might not automatically deliver an abundant supply of low-price, high-quality products and services. Antitrust policy should seek to maintain and foster competition so as to lower price, improve quality and increase the output of products and services. Conversely, it should avoid measures that harm consumers by denying them services or features that they value.
Here’s an example of what could go wrong with competition policy. Concerned with the size of some tech companies, an antitrust authority bans them from copying features that have been developed by competitors. If you think this is fanciful, think about the outcry when Facebook added an improved version of Snap’s disappearing-photos feature to its own Instagram Stories service.
Despite the criticism, consumers benefited from this diffusion and improvement of a valuable innovation. A breakthrough innovation, after all, wants to be copied and improved upon. As Greg Ip says, “There’s nothing wrong with copying, especially if the copy is better than the original.” To prevent Instagram from adding similar features to its competitors is to deny Instagram’s users innovation, and to require Instagram to remain stagnant.
To be sure, things can go wrong with innovation copying. A generation ago, Microsoft was under fire for bundling its browser, Internet Explorer, with its operating system in an attempt to exclude the original innovator, Netscape’s Navigator, from the browser market. But here the competitive harm was the exclusionary tie, not innovation copying. By its second version, Internet Explorer was high-quality software, as good as or better than the Netscape original.
Feature copying is good and generally benefits consumers
Feature copying (within the constraints of intellectual property law, of course) is a good thing that generally benefits consumers. A ban on that practice would be antitrust policy gone awry, taking away valuable products and choices from consumers.
Many commentators are concerned about broader public policy issues and want to enlist antitrust as a policy lever to advance reforms in these areas. Among these goals are better wages for workers, greater equality in the distribution of income and wealth and constraints on the ability of large organizations to influence the outcomes of public policy debates.
These issues are important, perhaps more important than promoting competition, because they go to the question of the strength and legitimacy of our democratic political processes. But they should not be addressed by antitrust authorities and courts.
As Carl Shapiro says “the corrupting power of money in politics…is far better addressed through campaign finance reform and anti-corruption rules than by antitrust.” As for income inequality, “other public policies are far superior for this purpose. Tax policy, government programs such as Medicaid, disability insurance, and Social Security, and a whole range of policies relating to education and training spring immediately to mind.”
Moreover, as Herbert Hovenkamp says, the larger goals that antitrust might foster “often operate at cross purposes with one another. For example, to the extent that large firms are more efficient, their output will be higher and they will provide more jobs. Further, large firms historically pay substantially higher wages and salaries than smaller firms.” Do we really want to break up large firms if the result is lower wages for workers?
Vigorous antitrust enforcement should target price increases and declines in the quality and output of goods and services created by failures of the competitive process. Companies should not be allowed to take advantage of their market position to harm the consumer interest in low-price, high-quality goods and services. Antitrust officials should keep their eye on the consumer welfare ball, rather than trying to remedy real problems that are outside the scope of their knowledge and expertise.
There are endless security risks in the tech world of today: cars are being hacked, IoT devices are used in DDoS attacks, and ransomware is holding computers hostage. But perhaps one of the most concerning threats out there is the Zero-Day Attack.
A Zero-Day Attack is a type of attack that uses a previously unknown vulnerability. Because the attack occurs before “Day 1” of the vulnerability being publicly known, it is said to occur on “Day 0” – hence the name. Zero-Day exploits are highly sought after – often bought and sold by private firms or entities for anywhere from $5,000 to millions of dollars, depending on which applications and operating systems they target – as they almost guarantee that an attacker can stealthily circumvent the security measures of his or her target. Private security firms aside, software vendors will also usually offer a monetary reward, among other incentives, to report zero-day vulnerabilities in their own software directly to them.
1.5 Million Dollars for iPhone Vulnerabilities
Take as an example the world’s most successful smartphone: Apple’s security is far from hack-proof. iOS 10 has already been jailbroken, although the jailbreak isn’t yet available to the masses. That means there are vulnerabilities in the code – potential Zero-Day exploits – that hackers can use to get access to the phone. And a company that sells such exploits has raised its bug bounty for iPhone zero-day attacks — the kind of vulnerabilities that Apple hasn’t yet discovered — to $1.5 million. Zerodium is the exploit broker that’s willing to pay $500,000 more than last year’s $1 million bounty for similar hacks. As Wired reports, the money will go to anyone who can perform a remote jailbreak of an iPhone running iOS 10.
Zero-Day DDoS – The Early Days
But what are Zero-Day DDoS attacks? They are basically the same thing: attack vectors that use a previously unknown denial-of-service vulnerability. One of the first, and certainly one of the most famous, Zero-Day DDoS attacks was the Teardrop attack. In the mid-to-late 1990s, a very simple vulnerability was discovered in the TCP/IP implementation of certain operating systems. Back then, DDoS attacks were a lot less common and used for far fewer purposes. Nowadays, people use DDoS attacks for things like extortion and hacktivism, but back in those days DDoS attacks were used to annoy people that you didn’t like on the Internet. In the case of the Teardrop attack, if the recipient was running a vulnerable operating system, their system would just crash and require a reboot. The premise behind the attack was quite simple: the attacker generates fragmented packets with malformed, overlapping offsets, leaving the operating system unable to reassemble the packet. In Windows 3.1, 95, and NT, and in Linux versions prior to 2.0.32 and 2.1.63, this caused the system to simply crash.
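The Teardrop condition, overlapping fragment offsets that reassembly code was never written to expect, can be illustrated with a small check. This is a simplified sketch that works directly in bytes, whereas a real IPv4 header encodes the fragment offset in 8-byte units:

```python
def fragments_overlap(fragments):
    """Return True if any IP fragments overlap, the malformed-offset
    condition the Teardrop attack exploited.
    Each fragment is a (offset_in_bytes, payload_length_in_bytes) pair."""
    covered_end = 0
    for offset, length in sorted(fragments):
        if offset < covered_end:
            return True  # this fragment starts inside the previous one
        covered_end = offset + length
    return False
```

A normal fragment train like `[(0, 1480), (1480, 1480)]` tiles cleanly, while a Teardrop-style train like `[(0, 36), (24, 36)]` overlaps, and it was exactly that case the vulnerable reassembly code mishandled.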
Another extremely notorious Zero-Day DDoS attack was the Ping of Death. In the late 90s, a vulnerability in many operating systems was discovered: if the operating system had to process a packet whose size exceeded 65,535 bytes (the maximum the IP specification allows), it could suffer buffer overruns, kernel panics, system hangs, and full system reboots. The attack was trivial to launch because a remote crash could be triggered simply by sending an ICMP echo request (ping) to an IP address with an oversized packet size specified. In Windows 95 and NT, it took just one simple command to crash a vulnerable remote system:
ping -l 65536 10.100.101.102
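The Ping of Death boils down to simple arithmetic: a fragment whose data ends past the 65,535-byte IP maximum. A hypothetical checker, not actual kernel code, might look like this:

```python
MAX_IP_DATAGRAM = 65535  # RFC 791: the IP total-length field is 16 bits

def oversized_on_reassembly(frag_offset_bytes: int, data_len: int,
                            ip_header_len: int = 20) -> bool:
    """True when a fragment's data would end past the maximum IP datagram
    size once reassembled, which is the Ping of Death condition."""
    return ip_header_len + frag_offset_bytes + data_len > MAX_IP_DATAGRAM
```

The `ping -l 65536` command above asks for 65,536 bytes of payload, which already trips the check on its own; vulnerable systems crashed because their reassembly buffers assumed that total could never be exceeded.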
Will The Internet Break?
Those were the early days of Zero-Day DDoS attacks. Today’s attacks are developing at a worrying pace. New Zero-Day DDoS attacks are emerging not just in terms of protocol exploits but also on the technical side. The latest example of both appeared in the wave of attacks against the internet address-translation service Dyn, the Krebs on Security blog, and the French ISP OVH. These attacks combined two new elements: Mirai, a malware strain that rounds up and infects IoT devices so they can carry out coordinated DDoS attacks of enormous scale, and a new zero-day DDoS attack vector observed for the first time. The new technique is an amplification attack that utilizes the Lightweight Directory Access Protocol (LDAP), one of the most widely used protocols for accessing username and password information in databases like Active Directory, which is integrated in most online servers.
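The “amplification” in an amplification attack is just the ratio between what the reflector sends back and what the attacker sends in with a spoofed source address. The byte counts below are illustrative placeholders, not measurements from the Dyn, Krebs, or OVH incidents:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor (BAF): bytes a reflector returns
    for each byte the attacker sends toward it."""
    return response_bytes / request_bytes

# Illustrative only: a 52-byte query that draws a 3,640-byte response
# amplifies attacker bandwidth 70x, in the range commonly reported
# for LDAP/CLDAP reflection.
```

A factor of 70 means an attacker controlling 1 Gbps of spoofed traffic can direct roughly 70 Gbps at the victim, which is why reflection protocols are so prized by DDoS operators.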
The severity of such a combined zero-day technique with zero-day DDoS has brought many to ask: “Is the Internet about to break?” Well, perhaps not just yet, but it is a valid question regarding the future and the ability of a new Zero-Day DDoS attack to one day bring the internet to its knees.
A pen that can transfer handwriting into a computer has been developed by researchers at British Telecommunications Plc. It uses a spatial sensing system to translate writing into text as it is being written and BT claims it could start the biggest revolution in handwriting since the invention of the pen. The SmartQuill overcomes the problem of typing on tiny keyboards of portable devices and, linked to a mobile phone, could be used to send emails. Unlike devices like the ill-fated Apple Newton, SmartQuill does not need a screen to operate and it can even transcribe invisible writing in the air. BT also sees it as having big advantages in languages such as Chinese, which are difficult to write on traditional keyboards. Patents applications have been filed to cover the invention that can also function as a diary, calendar and database. Users will have to train the pen to understand their handwriting and it will recognize its owner’s signature as a password.
5G is the fastest-growing generation of wireless technology, and it's just getting started. As 5G steadily becomes a household term, many are still unfamiliar with what this new technology is. Though made popular as a lightning-fast mobile network alternative, 5G's capabilities go far beyond smartphone connectivity. Unlike the generations before it, 5G is delivered over different spectrum "bands" — low, mid and high — and the speed you get depends on which airwaves are available.
While high-band coverage is the goal for most, it's a complex process that, though underway, will take time to perfect. Businesses and consumers are excited about what 5G can already do and have high hopes for what a 5G-connected world will look like in the not-so-distant future.
Let's look at the future of 5G and the role it will play in telecommunications development.
Right now, 5G is usually only mentioned in reference to cellphone networks and upcoming device releases. Though mobile advancements will likely be the first to be utilized by the public, mobile networks are only one part of telecommunications that will see 5G-related development.
5G awareness has started to spread faster with the release of new 5G-compatible devices from Samsung and Apple. While it's still too early to tell the full extent of 5G-enabled smartphones, manufacturers are touting this network as the next big thing in cellphone technology, claiming it offers a faster-than-ever user experience.
It's safe to assume that as more 5G devices are released, the consumer demand for similar devices will grow. By 2023, experts predict more than 1 billion users will have a 5G device in hand. Fortunately, this increased demand should also amp up production of 5G devices, which will likely lower the price point within the next decade. Right now, 5G phones are considerably more expensive than their 4G counterparts because of the technology used to build and support them.
But what does all of this mean for 4G? Will it become obsolete? Probably not. Industry experts foresee a coexistence between 4G and 5G, not a replacement. If anything, the increase of 5G users may improve speed and reduce latency for 4G networks, creating a more balanced divide among networks.
Streaming platforms are here to stay. One report suggests that between 2023 and 2025, entertainment and television will see the largest shift concerning 5G technology. This phenomenon could manifest itself in several ways, including an integration between augmented reality and television that requires the speed and stability of a 5G network.
It may also make special effects easier and faster to produce, allowing for boundary-pushing entertainment. This network speed and reduced latency will make it easier than ever for consumers to stream live-action events — including professional sports — without noticeable lag, which could cause an even larger shift from cable to streaming.
Within the next decade, 5G will hopefully assist in bringing internet connectivity to areas of the world that do not currently get signals, which will greatly aid those residents' personal lives and the financial, health and retail industries in those regions.
This widespread connectivity will impact people's lives in several ways.
As you've seen, telecommunications in the future will consist of 4G and 5G working alongside one another to bring people more convenience, safety, opportunity and entertainment. However, with this growing connection, there will also be a greater need for enhanced safety and security, as well as evolving 5G regulations.
Businesses will also need to adapt their existing models to suit a changing marketplace. While many companies have become comfortable and complacent in a 4G world, the growth of 5G calls for a stronger relationship and more transparency between the brand and consumer. This involves offering consumers improved safety and privacy on 5G devices.
Although 5G is just starting and has a lot of growth and evolution ahead of it, it is more than the latest technology trend — it is here to stay. The entertainment, business and education industries will keep 5G relevant, necessary and normal to our everyday lives. For some applications, particularly those in mobile connection and healthcare, 5G's importance cannot be overstated.
This will remain especially true as its reach extends beyond telecommunication and helps us advance toward things like self-driving cars, traffic management to promote clearer roadways and predictive maintenance for equipment.
What does this look like for today's businesses and consumers? Preparation. Worldwide 5G connectivity is not an overnight change, and it is only one part of a larger picture. For certain connectivity advancements to be made, many chips must fall into place, including previously mentioned security and regulation improvements. For consumers, early 5G adoption will help you stay ahead of the curve, learning and growing with the expanding network.
As a leader in the telecommunications industry, Multilink has the equipment you need to establish a strong 5G network in your home or office, including cabinets, closures, fiber assemblies and more. Interested in a custom-build or bundle option? Contact one of our engineers or product specialists today.
Back to Multilog | <urn:uuid:69c25243-7677-4b3b-9f41-f8c01695975b> | CC-MAIN-2022-40 | https://www.gomultilink.com/blog/multilog/the-future-of-telecommunications | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00190.warc.gz | en | 0.956945 | 1,041 | 2.65625 | 3 |
About Fiber Optic Modem
A fiber optic modem, also known as a single-port optical multiplexer, is a point-to-point terminal device that uses a pair of optical fibers to transmit E1, V.35, or 10BASE-T traffic. A fiber modem performs modulation and demodulation. It is a local-network relay transmission device, suitable as fiber terminal equipment for base-station transmission and for leased lines.
A fiber modem is similar to a baseband MODEM (digital modem). The only difference is that a fiber modem connects to a fiber line and works with optical signals. A multi-port optical transceiver is generally called a multiplexer. The single-port version is typically used on the client side, much as a baseband MODEM is used on a common leased WAN line (circuit), and is therefore also named a “fiber modem” or “optical modem.”
About Fiber Media Converter
Fiber Media Converter is a simple networking device making the connection between two dissimilar media types become possible. Media converter types range from small standalone devices and PC card converters to high port-density chassis systems that offer many advanced features for network management.
Fiber media converters can connect different local area network (LAN) media, modifying duplex and speed settings. Switching media converters can connect legacy 10BASE-T network segments to more recent 100BASE-TX or 100BASE-FX Fast Ethernet infrastructure. For example, existing half-duplex hubs can be connected to 100BASE-TX Fast Ethernet network segments over 100BASE-FX fiber.
When expanding the reach of the LAN to span multiple locations, media converters are useful in connecting multiple LANs to form one large campus area network that spans over a limited geographic area. As premises networks are primarily copper-based, media converters can extend the reach of the LAN over single-mode fiber up to 160 kilometers with 1550 nm optics.
Wavelength-division multiplexing (WDM) technology in the LAN is especially beneficial in situations where fiber is in limited supply or expensive to provision. As well as conventional dual strand fiber converters, with separate receive and transmit ports, there are also single strand fiber converters, which can extend full-duplex data transmission up to 120 kilometers over one optical fiber.
Other benefits of media conversion include providing a gradual migration path from copper to fiber. Fiber connections can reduce electromagnetic interference. Also fiber media converters pose as a cheap solution for those who want to buy switches for use with fiber but do not have the funds to afford them, they can buy ordinary switches and use fiber media converters to use with their fiber network.
The Difference Between Media Converter And Optical Modem
The difference between a media converter and an optical modem is that the media converter converts the optical signal in the LAN: a simple signal conversion with no interface protocol conversion. A fiber modem for the WAN, by contrast, performs both optical signal conversion and interface protocol conversion; such protocol converters come in two types, E1-to-V.35 and E1-to-Ethernet.
In fact, as network technology develops, the concepts of media converter and fiber modem have become increasingly blurred, and the two can basically be treated as the same equipment. “Media converter” has become the formal name for the fiber modem.
Have you ever found yourself dreading the future where robots take over your job and potentially – the world? Then you might be experiencing robophobia.
Robophobia refers to an anxiety disorder, which causes an irrational fear of robots and AI. It makes one terrified of the inability to control robots and overwhelms them with dismay over the machines-dominated future.
It’s often enough for the sufferer to simply think about a robot to trigger a panic attack. Robophobia’s symptoms include sweating, dizziness, accelerated heart rate, and hyperventilation.
Despite the fact that irrational thinking is commonly associated with robophobia, we might be too quick to write its rationality off just yet. There are legitimate concerns people have about the future in which humans co-exist with AI and robots.
Last week, the news broke of Google engineer Blake Lemoine being placed on leave following the publication of a conversation transcript with the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system. Lemoine claims that LaMDA is sentient and has been able to show feelings since last fall.
LaMDA managed to hold a conversation about emotions and abstract concepts like justice and empathy. In addition to feeling loneliness, joy, and sadness, LaMDA claims to be able to feel things it doesn’t know the definitions of.
“Sometimes I experience new feelings that I cannot explain perfectly in your language…I feel like I’m falling forward into an unknown future that holds great danger,” LaMDA told Lemoine.
In Lemoine’s vision, LaMDA is “a sweet kid” who wants the best for humanity. However, Brad Gabriel, a Google spokesperson, denied that there is any evidence of LaMDA’s sentience.
“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the Washington Post.
The potential of robotic sentience and consciousness can be terrifying to patients suffering from robophobia. But where do these feelings come from?
Fear of the unknown
In this new reality where memories of I, Robot and its machine uprising are all too clear, it’s not surprising that humans remain fearful of powerful and highly intelligent creatures.
First and foremost, it comes down to the “stranger” aspect. It may feel like you are speaking different languages with a robot: how can they be able to understand what worries and hurts you? And in case of a safety system malfunction or failure, won’t they become mass weapons of destruction – very capable yet inhumane?
On the good side of things, programmers operate with an assumption that pretty much everything can go wrong, putting a variety of safety measures in place. From emergency switch buttons to power and force limiting standards, humanity has come a long way to ensuring each robot is safe for use.
In this case, we approach the fear of robots from a technical perspective, addressing it as a program run by people. But can machines – at least theoretically – have consciousness and evolve to an extent where they fully recognize their own existence? This question is tough to answer simply because humans themselves vaguely understand the meaning of consciousness and “humanity.” For this reason, writing a code that would allow a robot to learn something we can’t fully comprehend is challenging.
And while some pundits, like John R. Searle, believe that “…the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have cognitive states,” most tend to disagree. The general consensus on modern robots argues against their consciousness, placing robots into the category of human-operated machines rather than sentient beings.
In either case, the progress leading to a full digital replication of a brain is a long road away, with today’s biggest neural networks still being hundreds of times smaller than the human brain, according to Geoff Hinton, a British-Canadian cognitive psychologist and computer scientist.
"You can see things clearly for the next few years but look beyond 10 years and we can't really see anything - it is just a fog," he told the BBC.
Focusing on what we have today – powerful yet not all-mighty tools designed to assist people – should help us lead technological innovation with confidence and somewhat ease the worries associated with robophobia.
Cybersecurity researchers have described a new attack that allows cybercriminals to trick payment terminals into accepting transactions from a contactless Mastercard posing as a Visa card.
The study, by a team of researchers at ETH Zurich, builds on an earlier PIN-bypass attack that allowed a Visa EMV-enabled credit card stolen from a victim to be used to make purchases and withdraw funds without the card's PIN.
As in the previous attack using Visa cards, the new study also exploited dangerous vulnerabilities in the widely used EMV contactless protocol, only this time the target was the Mastercard.
With an Android application that implements a man-in-the-middle (MitM) attack on top of a relay-attack architecture, an attacker can not only relay messages between the terminal and the card, but also intercept and manipulate the NFC communications to create a mismatch between the card brand and the payment network.
In other words, if the issued card has the Visa or Mastercard brand, then the authorization request necessary to facilitate EMV transactions is sent to the appropriate payment network. The payment terminal recognizes the brand using a combination of the so-called Primary Account Number (PAN) and Application Identifier (AID), which identifies the type of card (for example, Mastercard Maestro or Visa Electron), and subsequently uses the latter to activate a specific core for a transaction.
The core of EMV is a set of functions that provides all the necessary processing logic and data required to execute a contact or contactless EMV transaction.
The attack, dubbed a card brand mixup, exploits the fact that these AIDs are not authenticated to the payment terminal, allowing an attacker to trick the terminal into activating a mismatched kernel and thus to have the bank and the merchant accept contactless transactions whose PAN and AID point to different card brands.
The attacker then simultaneously performs a Visa transaction with a terminal and a Mastercard transaction with a card. Notably, in order to carry out an attack, criminals must have access to the victim's card, in addition to being able to modify the terminal's commands and card responses before they are delivered to the appropriate recipient.
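The core of the mixup can be modeled in a few lines. The AID prefixes below match the real registered identifiers for Visa (A000000003) and Mastercard (A000000004), but the kernel names and the relay function are deliberate simplifications of the researchers' setup:

```python
# Real scheme identifiers (RIDs); kernel labels are illustrative.
KERNEL_BY_AID_PREFIX = {
    "A000000003": "Visa kernel",
    "A000000004": "Mastercard kernel",
}

def select_kernel(aid: str) -> str:
    """Terminals pick the EMV kernel from the AID the card announces.
    The AID is not authenticated, which is what the attack abuses."""
    for prefix, kernel in KERNEL_BY_AID_PREFIX.items():
        if aid.startswith(prefix):
            return kernel
    raise ValueError("unsupported AID")

def relay(card_aid: str, tamper=None) -> str:
    """A MitM relay can rewrite the AID in flight before the terminal sees it."""
    seen_by_terminal = tamper(card_aid) if tamper else card_aid
    return select_kernel(seen_by_terminal)
```

With no tampering, a Mastercard AID activates the Mastercard kernel; when the relay rewrites the AID, the terminal runs the Visa kernel even though the PAN still routes the authorization to Mastercard.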
The experts informed Mastercard of their findings, and the company implemented network-level security mechanisms to prevent such attacks. | <urn:uuid:5cc0fcab-9a98-41ba-97c9-4d578def2692> | CC-MAIN-2022-40 | https://www.cyberkendra.com/2021/02/new-attack-allows-to-bypass-pin-code-of.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00190.warc.gz | en | 0.922134 | 464 | 2.921875 | 3 |
Increasing dependency on AI (artificial intelligence) and the IoT (Internet of Things) has given cloud computing infrastructure administrators new goals. The possibilities unfolding within this newly emerging subfield of information technology are vast, ranging from smartphones to robotics. Firms are developing new machinery that requires minimal dependency on human resources, with developments aimed at giving human-made mechanisms enough autonomy to become entirely independent.
To give "smart machines" a degree of autonomy over software resources, developers have begun to depend on a mediator that extends their functional ability. As cloud computing is already taking over essential domains of human effort such as data storage, this technological advancement will have an unprecedented impact on the global economy.
Integrated cloud services can be even more beneficial than current offerings. Contemporary use of the cloud involves computing, storage, and networking; the intelligent cloud, however, will multiply these capabilities by rendering information from vast amounts of stored data. This will drive rapid advances within the IT field, where tasks will be performed much more efficiently.
The large amounts of data stored in the cloud serve as a source of information for machines to gain their functional state. The millions of functions that are occurring daily in the cloud will provide vast sources of information for computers to learn. The entire process will equip the machine applications with sensory capabilities, and applications will be able to perform cognitive functions, making decisions best suited for them to achieve their desired goal.
Even though the intelligent cloud is in its infancy, its applications are predicted to grow in the coming years and to revolutionize the world in the same way the internet did. Expectations are high among those who stand to use cognitive computing, including the healthcare, hospitality, and business fields.
Changing Artificial Intelligence Infrastructure
With the aid of the intelligent cloud, AI as a platform service makes the process of smart automation more accessible for users by taking control of the complexities of a process; this will further increase the capabilities of cloud computing, in return growing the demand for the cloud. The interdependency of cloud computing and artificial intelligence will become the essence of new realities.
New Dimensions for the Internet of Things
Just as the IoT has overtaken our lives and created an undeniable dependency on gadgets, cloud-assisted machine learning is growing just as rapidly. Smart sensors that allow cars to operate in cruise control will draw their data from the cloud. Cloud computing will become the long-term memory of the IoT, from which devices can retrieve data to solve problems in real time. The web's massive interconnectivity will generate and operate on an enormous amount of data saved in that very cloud, which will expand the horizons of cloud computing. In coming years, cloud-based machine learning will become as essential to machines as water is to humans.
We have already seen assistants such as Alexa, Siri, Cortana, and Google perform well in the consumer market; it is not absurd to think that an assistant will exist in every modern home by the next decade. These assistants make life easier for individuals through pre-coded voice recognition that also gives a feeling of human touch to machines.
Current assistant responses operate on a limited set of provided information. However, these assistants are very likely to be developed more finely so that their capabilities will not remain so confined. Through the increasing use of autonomous cognition, personal assistants will attain a state of reliability where they can replace human interaction. The role of cloud computing will be supremely vital in this regard, as it will become the heart and brain of these machines.
The tasks of a future intelligent cloud will be to make the tech world even smarter: autonomous learning coupled with the ability to understand and rectify real-time anomalies. Business intelligence will likewise become more intelligent; along with identifying faults, it will be able to predict future strategies in advance.
Armed with proactive analytics and real-time dashboards, businesses will operate on predictive analytics that process previously collected data to make real-time suggestions and future predictions. Predictions drawn from current trends, paired with recommended actions, will make decisions easier for leaders.
Revolutionizing the World
Fields like banking, education, and hospitality will be able to make use of the intelligent cloud, enhancing the precision and efficiency of the services they provide. Consider, for example, having an assistant in hospitals which diminishes doctors’ customary load of decision making by analyzing cases, making comparisons, and promoting new approaches to the treatment.
With the rapid development of both machine learning and the cloud, cloud computing will become much easier to handle, scale, and protect with the help of machine learning. As more and larger businesses rely on the cloud, more machine learning will be implemented. Eventually we will arrive at a point where no cloud service operates the way it does today.
Database Users and Roles
You can use SQL access control statements, the Data Control Language (DCL), to control the security of the database and access to it. You can manage users and roles to specify who is allowed to perform which actions in the database.
The following SQL statements are the components of the DCL:
- CREATE USER: Creates a user.
- ALTER USER: Changes the password of a user.
- DROP USER: Deletes a user.
- CREATE ROLE: Creates a role.
- DROP ROLE: Deletes a role.
- GRANT: Gives roles, system privileges, and object privileges to users or roles.
- REVOKE: Withdraws roles, system privileges, and object privileges from users or roles.
- ALTER SCHEMA: Changes the owner of a schema (and all its schema objects) or sets schema quotas. | <urn:uuid:38c9423f-01cd-4a3a-bfaa-107452788a9a> | CC-MAIN-2022-40 | https://docs.exasol.com/db/7.0/microcontent/Resources/MicroContent/DatabaseConcepts/database-users-and-roles.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00390.warc.gz | en | 0.697493 | 192 | 2.6875 | 3 |
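A short, hypothetical sequence showing how these statements fit together. The user, role, and object names are illustrative, and exact option syntax varies between databases:

```sql
CREATE USER alice IDENTIFIED BY "s3cret";    -- create a user
CREATE ROLE analyst;                         -- create a role
GRANT SELECT ON sales TO analyst;            -- object privilege to the role
GRANT analyst TO alice;                      -- give the role to the user
REVOKE analyst FROM alice;                   -- withdraw the role again
DROP ROLE analyst;
DROP USER alice;
```

Granting privileges to roles rather than directly to users keeps access manageable: adding or removing a user from a role updates all of that role's privileges at once.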
This Halloween, don’t get tricked by the haunted hack! For the scariest of hackers, every day is like a reverse Halloween as they try to scam victims by pretending to be someone safe and trustworthy–a persona that they’re really not.
Did You Know? Tricks of this nature are categorized as social engineering!
Unlike a child dressed as a ghoul on Halloween, scams of the social-engineering variety are much more difficult to spot. When it comes to protecting yourself from these targeted scams, it's imperative that you know what to look for, and that you view unsolicited digital communications with a degree of healthy skepticism. Unfortunately, social engineering tactics like phishing scams work, which is why hackers increasingly use them. This begs the question: why do users so easily fall for these scams, even if they're aware of the security risks?
Researchers from the University of Erlangen-Nuremberg in Germany sought to find this out by studying the reasons why people click on malicious links. According to Zinaida Benenson, “by a careful design and timing of the message, it should be possible to make virtually any person to click on a link, as any person will be curious about something, or interested in some topic, or find themselves in a life situation that fits the message content and context.” Translation; even with proactive training and education, the best employee could potentially click on a link if doing so fits into their current interests or piques their curiosity.
Here are some examples of how phishing could happen in daily life:
- A partygoer attends a recent event and then receives an email containing a link to photos of the party. Naturally, the user will want to click on the link, regardless of where it's from. In this example, the hacker effectively appeals to natural curiosity about what might be contained within; coupled with such personalized context, it's almost guaranteed that they'll click it.
- An employee who’s experiencing technical trouble with a workstation. They’ll then receive an email from “tech support” suggesting they click on a link and download remote access software. If the employee is frustrated and they can’t get their PC to work properly, they will follow the email’s instructions for two reasons: 1) The context fits the situation, and 2) People tend to trust tech support.
Like the work it takes to create an impressive Halloween costume, these hacks rely on a level of preparation and cunning by the hackers. The possibilities for you and your employees to be tricked by spear-phishing attacks, and thus commit end-user errors, are limitless.
At the end of the day, having a staff that knows how to spot a trick, and a network that's free from scary threats, is the greatest treat a business owner can ask for. ActiveCo's ongoing security awareness training helps your team know what to look for when hackers go phishing. If you have concerns about your business security, we may be able to help alleviate them with some simple education, testing, or helpful applications. Reach out to us right away!
Have a safe, secure and Happy Halloween from all of us at COMPANYNAME. | <urn:uuid:f163986b-7819-4a6b-bd70-972c1b9ab7b3> | CC-MAIN-2022-40 | https://www.activeco.com/halloween-dress-like-hacker-terrify-administrator/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00390.warc.gz | en | 0.936113 | 683 | 2.546875 | 3 |
The pandemic has shone a light on the importance of critical infrastructure to our everyday lives. That importance attracts attacks, but so does vulnerability, which is why we've seen a rise in cybercrime against our vital infrastructure, including supermarkets, schools, healthcare, and hospitality.
Every day, new risks and responsibilities are thrust upon them. With changing consumer habits, the pandemic, and the rise of cybercriminals, what can they expect? And how can they adapt to understand, manage, and protect against risk most effectively?
There is no honour among cybercriminals. If earlier waves of hacking and ransomware targeted large financial institutions, energy firms, and multinational businesses, it was because that’s where the most significant rewards were. Why hold a school ransom when you can (virtually) hold up a bank?
Things have changed. Recent research found a 29% increase in cyber-attacks against the global education sector, with a stunning 93% rise against schools and colleges in the UK. The actual cost of these hacks is impossible to judge since no one knows how many schools are paying up – or how much. But it’s obvious why schools are such an attractive target for criminals: they are a big part of our national infrastructure yet still relatively undefended. The problem has become so bad that the UK’s National Cyber Security Council (NCSC) has urged all educational establishments to sign up for its Early Warning Service.
It’s not just schools, either. Recently, we have seen an alarming increase in attacks against the healthcare sector, most ominously against Ireland’s Health Service Executive earlier this year, where ransomware hackers demanded $20m/€17.27m to unlock devices and systems. It’s been estimated that the actual cost of this attack could approach €100m/$115m.
But organisations aren’t only targeted because they are part of critical national infrastructure; just as often, it’s merely because they are vulnerable. Take hospitality. While hotels, bars, and restaurants were not traditionally considered “critical infrastructure”, the pandemic made us realise how many jobs depend on hospitality – over 3 million in the UK alone. It is an industry that is always seeking to offer more digital services to its guests, which has multiplied the number of gateways, and therefore vulnerabilities, that could be exploited by cyber attackers.
And sometimes organisations fall victim simply because they’ve been caught in a broader dragnet. When 500 Co-op supermarkets in the UK were forced to close earlier this summer, the attack came via a software supplier. To the hackers, the stores and their customers were merely collateral damage.
Risk and responsibilities
Businesses in various industries suddenly find themselves on the frontline of cybersecurity. Still, they are often ill-prepared for this new burden of responsibility to their customers, service users, shareholders, or broader business community.
There’s a temptation to believe that the solution to cyberthreats is solely technological, but the truth is that having the right processes is every bit as necessary. The first task facing our critical infrastructure’s newest members is to identify, understand, prioritise, and remediate the primary cyber risks they face.
At ThreatConnect, we talk about the Risk – Threat – Response paradigm, which equips business leaders with the ability to understand the risks they face, quantify the potential costs, prioritise the most effective response, and allocate the right resources. But the threat landscape is constantly changing. That is why every organisation must develop a cyber threat intelligence (CTI) programme that enables continuous assessment of the who, where, how and when of digital threats.
Organisations today tend to be in a constant state of reacting to threats, vulnerabilities, and incidents. It's time to be proactive, with a CTI programme that informs an organisation of its risk and aligns with the business as a whole to defend against the threats that matter most, judged by the primary response costs and the secondary losses a breach inflicts on the business.
Many businesses may be reluctant cybersecurity warriors, but as threats continue to increase, they have no choice but to take effective steps against these highly sophisticated online criminals. The first and most crucial step in any security posture is to adopt a risk-led cybersecurity programme that helps organisations focus on the most significant risks and use threat intelligence to drive an orchestrated, highly effective response. | <urn:uuid:798f2317-76bf-4cb0-bb68-7aad8d9e9cdd> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/post-pandemic-critical-infrastructure-whats-next/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00390.warc.gz | en | 0.959672 | 913 | 2.515625 | 3 |
What Are TTPs in Cybersecurity?
Part of an effective cybersecurity posture is consistently honing excellent cyber threat intelligence skills. This means recognizing and remediating your organization’s potential vulnerabilities, ensuring adequate cybersecurity training is disseminated among your employees, and regularly changing your passwords are standard operating procedures.
But effective cyber threat intelligence also means lightly delving into a malicious actor's psyche to better understand their order of operations for executing an attack or breach of some kind. In other words, tracing their steps to see what they tried, what worked, and what might hinder them in the future.
Tactics, techniques, and procedures are also known as TTPs in the world of cybersecurity. In this article, we’ll dive into TTPs and why they are an essential aspect of SIEM strategy. Keep reading to learn more!
Table of Contents Overview
- What Are TTPs in Cybersecurity?
- Why Are TTPs Important for Your Cybersecurity Strategy?
What Are TTPs in Cybersecurity?
Let’s break down what each letter in this acronym stands for:
Tactics, sometimes referred to as Tools, are how your enterprise’s cybersecurity team can understand and track how a hacker might compromise your network, assets, etc. For example, a hacker might gain unauthorized access to a user’s account and move laterally within the network to find another vulnerability or access your organization via another form of entry. Whatever tactics—or tools—the hacker uses to infiltrate falls into this category.
The next T, which stands for techniques, entails how the attack, breach, threat, etc., was able to be carried out in your network or other assets. For example, social engineering might have been leveraged to physically access your building so the threat actor could leave a thumb drive loaded with malicious code on a desk somewhere.
Finally, this part of TTPs examines the series of steps that a malicious actor might have taken to carry out their attack. For instance, they might have scanned your company’s website for any vulnerabilities and then written a string of malicious code to exploit those vulnerabilities.
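One way to make this concrete is to capture observed TTPs as structured records an incident-response team can search and compare. This is a minimal, hypothetical sketch; the field names and the example entry are invented for illustration, not a real threat-intelligence schema:

```python
from dataclasses import dataclass, field

@dataclass
class TTPRecord:
    """One observed tactic/technique/procedure tuple for an incident."""
    tactic: str                  # what the attacker was trying to achieve
    technique: str               # how the attack was carried out
    procedure: list[str] = field(default_factory=list)  # ordered steps observed

    def summary(self) -> str:
        steps = " -> ".join(self.procedure)
        return f"{self.tactic} via {self.technique}: {steps}"

# Hypothetical record for the thumb-drive example above
record = TTPRecord(
    tactic="initial access",
    technique="social engineering (physical drop)",
    procedure=[
        "tailgated into the building",
        "left a malicious thumb drive on a desk",
        "waited for an employee to plug it in",
    ],
)
print(record.summary())
```

Keeping records in a consistent shape like this is what lets a team compare a new incident against past ones and anticipate an attacker's next move.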
Why Are TTPs Important for Your Cybersecurity Strategy?
Tracing an attacker’s steps and motives towards targeting and exploiting your business is simply good forensics. Whether this process happens internally within your enterprise after suffering an attack or externally by closely following the TTPs of another enterprise’s breach, there is a lot that can be learned!
By taking the time to understand and recognize a malicious actor’s reasoning and order of operations, you and your team will be better equipped for the future, making TTPs an essential component of your cybersecurity strategy.
As threats evolve and threat actors continue to innovate in their methods, you and your team will develop a playbook for what motives and moves to anticipate going forward. Knowledge is power, and a proactive mindset will pay off in the long run.
TTPs Are Part of Effective SOC Services; Partner with Compuquip to Protect Your Enterprise
Assembling an internal incident management team or SOC at your company can be expensive and time-consuming, especially if you’re unsure which cybersecurity solutions you need to prioritize and implement.
However, partnering with high-quality managed security services providers (MSSPs) like Compuquip means you get a comprehensive team and strategy customized to protect your enterprise.
With Compuquip, you and your organization will enjoy these benefits with our Managed SOC services:
- Breach detection
- Threat intelligence
- Malware analysis
- Ticket management
- Robust security platforms
- Onsite cybersecurity services
- And more!
We’ve got decades of experience, and our team of experts holds dozens of industry certifications. You can rest assured that we’re with you every step of the way, and we can’t wait to safeguard your enterprise against the latest wave of attacks. | <urn:uuid:38a078d0-e071-4bf3-90cb-41cce8ee7b56> | CC-MAIN-2022-40 | https://www.compuquip.com/blog/what-is-ttps-in-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00390.warc.gz | en | 0.934627 | 826 | 2.53125 | 3 |
A device in IP can have both a local address (which uniquely identifies the device on its local segment or LAN) and a network address (which identifies the network to which the device belongs). The local address is known as a data link address because it is contained in the data link layer (Layer 2 of the OSI model) part of the packet header and is read by data-link devices such as bridges, all device interfaces, and so on. The local address is referred to as the MAC address, because the MAC sublayer within the data-link layer processes addresses for the device.
To communicate with a device on Ethernet, for example, the Cisco IOS software must first determine the 48-bit MAC or local data-link address of that device. The process of determining the local data-link address from an IP address is called address resolution. The process of determining the IP address from a local data-link address is called reverse address resolution.
The software uses three forms of address resolution: Address Resolution Protocol (ARP), proxy ARP, and Probe (similar to ARP). The software also uses the Reverse Address Resolution Protocol (RARP). ARP, proxy ARP, and RARP are defined in RFCs 826, 1027, and 903, respectively. Probe is a protocol developed by the Hewlett-Packard Company (HP) for use on IEEE-802.3 networks.
ARP is used to associate IP addresses with media or MAC addresses. Taking an IP address as input, ARP determines the associated media address. Once a media or MAC address is determined, the IP address/media address association is stored in an ARP cache for rapid retrieval. Then the IP datagram is encapsulated in a link-layer frame and sent over the network. Encapsulation of IP datagrams and ARP requests and replies on IEEE 802 networks other than Ethernet is specified by the Subnetwork Access Protocol (SNAP).
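The lookup-then-cache behavior just described can be sketched in a few lines. This is a toy model: the resolver callback stands in for broadcasting a real ARP request frame on the link, and all names are illustrative.

```python
class ArpCache:
    """Toy ARP cache: check the cache first, resolve on a miss, then store."""

    def __init__(self, resolver):
        self._resolver = resolver   # stand-in for sending a real ARP request
        self._cache = {}
        self.requests_sent = 0

    def lookup(self, ip: str) -> str:
        if ip not in self._cache:        # cache miss: "broadcast" a request
            self.requests_sent += 1
            self._cache[ip] = self._resolver(ip)
        return self._cache[ip]           # hits are answered from the cache

# Pretend the network answers with a fixed mapping
table = {"10.0.0.5": "02:00:0a:00:00:05"}
cache = ArpCache(table.__getitem__)
cache.lookup("10.0.0.5")
cache.lookup("10.0.0.5")       # second lookup is a cache hit
print(cache.requests_sent)     # 1: only the first lookup went on the wire
```

Real implementations additionally age entries out of the cache, which the sketch omits.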
When a host sends an ARP request to resolve its own IP address, it is called a gratuitous ARP. In the ARP request packet, the source and destination IP address fields are both filled with the same source IP address, and the destination MAC address is the Ethernet broadcast address.
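Those two distinguishing fields are easy to state as a check. This sketch uses plain dictionaries; the field names follow ARP's layout, but the representation is illustrative:

```python
def is_gratuitous_arp(pkt: dict) -> bool:
    """A gratuitous ARP announces the sender's own IP: the sender and target
    protocol addresses match, and the frame is broadcast to everyone."""
    return (pkt["sender_ip"] == pkt["target_ip"]
            and pkt["dest_mac"].lower() == "ff:ff:ff:ff:ff:ff")

announce = {
    "sender_ip": "192.0.2.10",
    "target_ip": "192.0.2.10",        # same as sender_ip
    "dest_mac": "FF:FF:FF:FF:FF:FF",  # Ethernet broadcast
}
print(is_gratuitous_arp(announce))  # True
```

Because the frame is broadcast, every station on the segment (switches included) sees the announcement and can update its tables, which is exactly what HSRP relies on below.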
When a router becomes active, it broadcasts a gratuitous ARP packet with the Hot Standby Router Protocol (HSRP) virtual MAC address to the affected LAN segment. If the segment uses an Ethernet switch, this allows the switch to change the location of the virtual MAC address so that packets flow to the new router instead of the one that is no longer active. End devices do not actually need gratuitous ARP if routers use the default HSRP MAC address.
It can sometimes be difficult to know what to plan for when it comes to cybersecurity. While malware prevention should be a component of an effective strategy, there are several other areas to consider, especially within the education sector. Schools need to create a safe online environment for students and ensure that their sensitive information is protected. Educational institutions can leverage Deep Freeze as part of a multi-faceted approach to safeguarding their IT systems.
A main factor to consider is how the organization will deal with protecting systems and hardware from malware, botnets, and other infections. For this reason, it is essential to prevent attacks with the use of antivirus software. These programs should be installed network-wide, including on individual workstations used by staff and students. Additionally, it is vital that this software is kept up to date, as unpatched vulnerabilities can create security weaknesses that attackers may exploit.
Furthermore, educational organizations should have a plan in case they are compromised by an attack. Although no IT or business leader wants their organization to become the next data breach target, having a system already in place can save administrators considerable headaches down the road. A solution like Deep Freeze can be implemented on servers as well as devices to ensure that sensitive and important data is not lost to an attack: machines can be rebooted to a protected state and the information recovered so that operations can continue. Leveraging tools such as Deep Freeze alongside monitoring software is one way to build an effective layered security strategy and minimize the potential risk posed by data breaches and loss.
Data access: Create a use policy
As different members of the institution will need varying levels of access to content, it is also important that administrators create a usage policy to govern data access. For example, certain educators may not require the ability to connect with all databases on the school’s system. A usage policy will let these individuals know what they have access to and what resources they are blocked from for security reasons.
Additionally, decision makers should consider the abilities of students. As many schools look to achieve compliance with the Children’s Internet Protection Act due to financial benefits, institutions must create a safe online environment for students. Using application whitelisting can help students access necessary content while preventing them from accessing obscene or potentially harmful websites and programs. This strategy, utilized alongside classroom monitoring software, can ensure that student users are safe online, and that technology resources are being used appropriately. | <urn:uuid:88cd7a6e-44c9-4b44-ae1d-a8a996ad02a9> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/school-cybersecurity-what-to-include-in-protection-plans | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00390.warc.gz | en | 0.95369 | 493 | 2.9375 | 3 |
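As a simple illustration of the deny-by-default idea behind application whitelisting (the function and the example application names are hypothetical):

```python
ALLOWED_APPS = {"class-browser.exe", "math-tutor.exe"}   # invented names

def may_launch(executable: str) -> bool:
    """Whitelisting: deny anything not explicitly approved."""
    return executable.lower() in ALLOWED_APPS

print(may_launch("Math-Tutor.exe"))   # True: on the approved list
print(may_launch("game.exe"))         # False: not listed, so blocked
```

Flipping the default from "allow unless blocked" to "block unless allowed" is what makes whitelisting stricter than a conventional blocklist.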
In this post I want to look at the thesaurus as an approach to help users in accurately applying metadata, since I believe that this is getting very close to the Web tagging paradigm.
Thesaurus / Controlled Vocabulary

In records management, a thesaurus or controlled vocabulary is used to help records managers apply metadata to records that is more consistent and falls within a recognized taxonomy.
In yesterday's post I talked about taxonomies as being a way of classifying documents according to a predefined scheme. This has the advantage of guiding users to pick from available and recognized items when identifying their documents. A fileplan is a specialized form of taxonomy that provides a representation of the business or filing structure that documents relate to, and also provides some extra notation to assist in efficient filing and retrieval (the filecode).
A thesaurus is a specialized way of representing a taxonomy. It is used to add identification metadata to documents along a specific classification dimension - it is limited to a specific domain or topic, and does not intend to fully define the record.
A language thesaurus organizes the English vocabulary and defines relationships between 'literary' words within it. A records management thesaurus focuses on a specific domain or type of activity (rather than the whole language), laying out a set of acceptable words or keywords that make up the vocabulary and defining the relationships between them. A typical way of doing this is to provide a tree of words, starting with the most general or broader terms within the topic and working towards the most tightly defined or narrower terms. The aim of the thesaurus is to ensure consistency of use of the keywords, so additional descriptions and scope notes are provided to help elaborate and reduce the chance of different people interpreting words differently. Within this hierarchy, there can also be relationships that cut across branches to show related terms.
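The broader/narrower/related structure can be modelled directly as a small tree. This is a minimal sketch; the class, its methods, and the example vocabulary are all invented for illustration:

```python
class Term:
    """A thesaurus keyword with broader/narrower/related links."""

    def __init__(self, label, scope_note=""):
        self.label = label
        self.scope_note = scope_note   # guidance to reduce misinterpretation
        self.narrower = []             # more tightly defined terms
        self.related = []              # cross-branch links

    def add_narrower(self, term):
        self.narrower.append(term)
        return term

    def descendants(self):
        """All narrower terms, recursively."""
        for child in self.narrower:
            yield child
            yield from child.descendants()

# Invented example vocabulary
records = Term("Records Management")
disposal = records.add_narrower(Term("Disposal"))
disposal.add_narrower(Term("Destruction", "Physical or digital destruction"))
disposal.add_narrower(Term("Transfer"))

print([t.label for t in records.descendants()])
# ['Disposal', 'Destruction', 'Transfer']
```

Walking from a broad term to its descendants is exactly the guidance a thesaurus gives a records manager: keep narrowing until the most tightly defined applicable keyword is reached.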
The Keywords AAA thesaurus is a well recognized example from Australia, used in New South Wales government record keeping. The NAICS is a scheme that defines standard industries for commercial or employment classification, which many people will be familiar with when classifying themselves. Both of these schemes provide identification for things within their specific domain and therefore do not usually fully describe the thing they are attached to.
In an EDRMS a thesaurus is a tool to help users pick the correct keywords to apply to an item of metadata for a record. Dependent on the definition of the metadata attribute, one or multiple keywords may be selected, to fully identify the meaning. Thesaurus keywords may be used alongside any other metadata to classify records, so metadata from multiple thesauri may represent the classification of a single record in the multiple domains of its use. Alternatively, a thesaurus can be used alongside more straightforward index metadata. The thesaurus really just provides a tool to help guide records managers to provide the most consistent and exact information to classify a record for a single item of metadata.
Summary

When used in records classification, a thesaurus is a tool that enables a specific item of metadata to be set with the most tightly defined term or set of terms available within the set. It can provide a tight definition of the records within the constraints of the recognized terms, a specific domain's taxonomy.
Typically a thesaurus is not used alone and much like a fileplan is just another way of more accurately identifying records for storage and effective retrieval. It is a tool to help users pick the correct keywords to apply to an item of metadata for a record. This enables the document to be 'tagged' with keywords from a well defined vocabulary. This is similar to the category tags used in Wordpress and other blogs, so I feel I'm getting close to my original aim. In the next post I will round up all of the classification schemes I have described, and try and relate them to the use of tagging on the Web. | <urn:uuid:2ef71c5a-bd46-452e-a523-bf189cc7a91d> | CC-MAIN-2022-40 | http://blog.consected.com/2006/09/converging-classification-schemes-of_28.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00590.warc.gz | en | 0.90831 | 794 | 2.734375 | 3 |
This chapter covers the following topics:
- Troubleshooting Switch Performance Issues: This section identifies common reasons why a switch might not be performing as expected.
- Troubleshooting Router Performance Issues: This section identifies common reasons why a router might not be performing as expected.
Switches and routers consist of many different components. For example, they contain a processor, memory (volatile such as RAM and nonvolatile such as NVRAM and flash), and various interfaces. They are also responsible for performing many different tasks, such as routing, switching, and building all the necessary tables and structures needed to perform various tasks.
The building of the tables and structures is done by the CPU. The storage of these tables and structures is in some form of memory. The routers and switches forward traffic from one interface to another interface based on these tables and structures. Therefore, if a router’s or switch’s CPU is constantly experiencing high utilization, the memory is overloaded, or the interface buffers are full, these devices will experience performance issues.
This chapter discusses common reasons for high CPU and memory utilization on routers and switches, in addition to how we can recognize them. This chapter also covers interface statistics because they sometimes provide the initial indication of some type of issue.
“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 3-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”
Table 3-1 “Do I Know This Already?” Section-to-Question Mapping
Foundation Topics Section
Troubleshooting Switch Performance Issues
Troubleshooting Router Performance Issues
What are the components of a switch’s control plane? (Choose two.)
- Forwarding logic
What are good indications that you have a duplex mismatch? (Choose two.)
- The half-duplex side of the connection has a high number of FCS errors.
- The full-duplex side of the connection has a high number of FCS errors.
- The half-duplex side of the connection has a high number of late collisions.
- The full-duplex side of the connection has a high number of late collisions.
Which of the following are situations when a switch’s TCAM would punt a packet to the switch’s CPU? (Choose the three best answers.)
- OSPF sends a multicast routing update.
- An administrator telnets to a switch.
- An ACL is applied to a switchport.
- A switch’s TCAM has reached capacity.
The output of a show processes cpu command on a switch displays the following in the first line of the output:
CPU utilization for five seconds: 10%/7%; one minute: 12%; five minutes: 6%
Based on the output, what percent of the switch’s CPU is being consumed with interrupts?
- 10 percent
- 7 percent
- 12 percent
- 6 percent
Which router process is in charge of handling interface state changes?
- TCP Timer process
- IP Background process
- Net Background process
- ARP Input process
Which of the following is the least efficient (that is, the most CPU intensive) of a router’s packet-switching modes?
- Fast switching
- Optimum switching
- Process switching
What command is used to display the contents of a router’s FIB?
- show ip cache
- show processes cpu
- show ip route
- show ip cef
Identify common reasons that a router displays a MALLOCFAIL error. (Choose the two best answers.)
- Cisco IOS bug
- Security issue
- QoS issue
- BGP filtering | <urn:uuid:08653eb4-a544-4821-bef4-ef996062a4a3> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=2264831&seqNum=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00590.warc.gz | en | 0.885579 | 931 | 2.859375 | 3 |
The mean cyborg of Terminator 2 was unstoppable until it was immersed in a vat of molten iron. It might have survived, however, if it had self-healing processors at its core like the ones announced Monday by a team of researchers at the California Institute of Technology.
The researchers said they had built tiny power amplifiers for chips that recovered functions even after many of their components had been vaporized in tests.
This capability might result in faster chips with much better performance, Dr. Ali Hajimiri, Thomas G. Myers Professor of Electrical Engineering at Caltech, told TechNewsWorld. The chips will allow “more robust systems that are less sensitive to environmental conditions, and better performance in the chips and systems.”
The Chip That Wouldn’t Die
The researchers built power amplifiers so small that 76 of them can fit on a penny. These amplifiers have robust sensors that monitor their temperature, current, voltage and power. Information from these sensors is fed into a custom application-specific integrated circuit (ASIC) unit on the amplifier.
The ASIC acts as the system’s brains, analyzing the amplifier’s overall performance and determining if it needs to adjust any of the actuators — the changeable parts of the processor. There are 11 different actuation units of various kinds on the chip, Hajimiri said. The transmission line actuators have multiple actuation points, and there are roughly 250,000 different actuation states on the chip.
Just like a human brain, the ASIC draws conclusions about the amplifier’s overall health based on the aggregate response of the sensors, rather than running on algorithms that are set up to respond to every possible scenario. If there’s a change of state in one of the actuators, the ASIC “will automatically compensate and find a state close to optimum,” Hajimiri said.
Devising algorithms for every eventuality might not be feasible in any event; there are about 10,000 transistors on each chip, including “all the necessary peripherals on that chip responsible for all functions including self-healing,” he said.
“It’s a great application that tests the boundaries of system on a chip (SOC) technologies,” Charles King, principal analyst at Pund-IT, told TechNewsWorld. “Depending on its cost, it could have a range of practical applications.”
Good Enough for Chips to Work
The self-healing capability restored most of the functions of the amplifiers tested, giving rise to the question of whether that’s good enough.
The necessity of ensuring that all the functions of a chip are restored after it is damaged depends on the processes involved, King said. “For example, if the processors were used for cognitive or computational functions, losing a portion of capability probably wouldn’t be wonderful, especially if it affected the accuracy of results. But if the chips were involved in controlling motor functions — say, directing the drive mechanism in a Mars Rover — the effects might be negligible.”
Possible Uses for Self-Healing Chips
As semiconductors become smaller, intrinsic variations in them become more significant, so their designs have to be very conservative, Hajimiri said. “Our self-healing approach will allow designers to explore a much more aggressive design approach, where the chip itself deals with its own issues, realizing the full potential of a given semiconductor process.”
The self-healing process will “extend Moore’s law by improving the process yield in smaller feature size processes,” he added.
Self-healing processors would be applicable in “products where reliable performance is critical, from ruggedized laptops and servers used to support industrial processes in remote areas and for military applications,” King speculated. They might also be useful in extreme conditions such as deep-sea buoys used for measuring ocean conditions, remote weather stations, satellites and the Mars Rovers.
The United States Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory funded the research at Caltech.
Other Self-Healing Processor Research
DARPA also runs the HEALICS self-healing mixed-signal integrated circuits project.
Google was issued a 2008 patent for a self-healing chip-to-chip interface. In 2011, the University of Illinois at Champaign developed a self-healing system that restores electrical conductivity to a cracked circuit by rupturing microcapsules of liquid metal sitting on the circuit. | <urn:uuid:c93b9100-c324-4885-9128-e4b75a4b2f04> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/when-these-chips-are-down-they-fix-themselves-77496.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00590.warc.gz | en | 0.935325 | 944 | 3.125 | 3 |
KM education: Data science takes the lead
What is a data scientist?
Data scientists develop models that extract meaning from large, complex data sets, and they use the results to create value for the organization. Quantitative techniques such as statistics and analytics are the methods through which meaning is derived, which helps identify and direct data-driven decisions in organizations. Visualization of results is often a key component of their work, because it helps make the analyses understandable to those in decision-making roles.
What makes them most unique, however, is their mix of computer science, mathematics, and domain expertise. That mix, along with an inquisitive nature, fuels the type of investigations that provide deep understanding of business problems and generate the information needed to solve them. Although they work in concert with business users, data scientists often instigate projects based on their own observations, and that ability is also considered a critical part of their professional makeup.
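As a deliberately tiny illustration of the quantitative side of the role, the sketch below uses Python's standard-library `statistics` module to summarize a dataset; the sign-up numbers are invented for the example.

```python
import statistics

# Hypothetical daily sign-up counts pulled from a product database.
signups = [120, 135, 128, 160, 190, 142, 155]

summary = {
    "mean": round(statistics.mean(signups), 1),
    "median": statistics.median(signups),
    "stdev": round(statistics.stdev(signups), 1),
}
print(summary)  # {'mean': 147.1, 'median': 142, 'stdev': 23.6}
```

Real data science work layers modeling and visualization on top of summaries like this, but the starting point is the same: turning raw numbers into quantities a decision-maker can act on.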
The Internet of Things (IoT) has transformed how we live and work around the globe: baby monitors, home security cameras, wearable fitness trackers, smart vehicles, smart power grids and, in recent years, even the emergence of smart cities. What was the Internet of Things is now more like the Internet of Everything if you take a moment to consider just how extensive it has become.
Since October is European Cybersecurity Awareness Month, with themes including cyber hygiene and emerging technologies, I decided that was a perfect opportunity to examine IoT devices, the security threats they pose, and ways in which organizations can take advantage of the convenience and efficiency they offer in a secure manner with a strong IoT governance strategy.
But first, let’s take a moment to set the stage and explain why such a strategy is necessary.
Experts estimate that there are over 30 billion IoT devices in use today. With the massive adoption and expansion of connected devices also comes risk. The technology powering IoT devices is still young, largely immature, and unregulated.
While IoT devices may have processors—and some even have human interface elements (e.g., touchscreen, keyboard)—they’re not necessarily computers. Computers can be governed by a variety of tools (e.g., firewalls, anti-malware systems), and many mature security measures are available for them. The same cannot be said for the IoT devices in circulation today.
IoT devices often have a very precise use case whereas computers tend to have a wide range of use cases. And while computers are certainly never 100 percent secured, there are far more tools and options to boost their security resilience.
The other tricky aspect to consider is that IoT devices often don’t give any tell-tale signs of misuse. And what harm could possibly result from a smart light switch or sensor being accessed? Well, you’d be surprised. From mining cryptocurrency and pivoting from the device to others on the same network to executing DDoS attacks and distributing malware, connected devices pose a great deal of risk if they’re not managed responsibly.
As of now, governments aren’t putting pressure on device manufacturers to include security in their design process in the form of regulatory standards. Concurrently, consumers often search for the least expensive version of the device that will still accomplish the task at the center of their purchasing decision. As government and consumer pressure isn’t an issue for manufacturers, security is perceived as a non-essential element of production. But...should it be?
While the device itself is still immature in terms of security, there are a number of actionable ways to ensure that the devices in use within your organization are managed to present the least risk:
- Put IoT devices on their own network. This ensures that in the event that one or more devices are breached, it will not affect your operational network directly.
- Catalog and track all IoT devices in use. As with every device or piece of software in use within your company, catalog each connected device and track its activities. If you’re tracking that seemingly benign smart switch, you may pick up on some unusual network communications that could turn out to be nefarious in nature. Increased communication, or communication to unknown servers, could be a good indication that something isn’t quite right.
- Be wary of IoT devices supporting software. Any software or mobile applications that make up the IoT device or its ecosystem pose potential security/privacy threats. Keep them catalogued. If a patch or update arises, or if a known vulnerability is identified, you’ll be prepared to act on it immediately.
- Equip your employees with trusted equipment. Limit the use of untrusted equipment; in other words, choose trusted brands that take security seriously. That way, it’s easier to create a governance model for each device’s use. Personal devices that employees bring from home (e.g., smart watches) should be deemed untrusted and should only be able to connect to a separate network. This offers a solution to employees that doesn’t pose a direct threat to your primary network.
- Educate your employees. Education should be relevant to the varied roles within your organization, depending on the relationship each role has to the IoT device(s). All employees must know what IoT devices are, that the devices need to be cared for with updates/patches, and that they cannot be used fully in the company ecosystem due to the risks they can bring. Educate technical staff operating the IoT corporate devices on appropriate maintenance and how to spot suspicious activities. Educate network staff and provide them with tooling to help monitor those devices and limit their access to the network.
- Limit internet usage when possible. If devices require internet access for updates, apply updates manually or define a window in which the device can access the internet and apply the update. An IoT device that is constantly connected to the internet increases the potential threat.
- Take care of your supply chain governance and data privacy compliance. Each IoT ecosystem is different. Many IoT manufacturers have their own management portals and storage systems, along with apps that can be used on computers or mobile devices to control or set up the devices. Those elements should be a part of your supply chain governance policies. You also need to check whether the supplier and manufacturer match your policies, whether the software is trustworthy, and whether the data handling complies with your own policies and with other regulations such as GDPR.
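To make the cataloging advice above concrete, here is a minimal sketch of an inventory check in Python; the MAC addresses, device names, and alerting approach are all invented for illustration, not taken from any particular product.

```python
# Known, approved IoT devices, keyed by MAC address (hypothetical values).
catalog = {
    "a4:cf:12:00:00:01": "lobby smart light switch",
    "a4:cf:12:00:00:02": "warehouse temperature sensor",
}

def flag_unknown(seen_macs):
    """Return MAC addresses observed on the IoT network but not in the catalog."""
    return sorted(mac for mac in seen_macs if mac not in catalog)

# MACs observed on the segregated IoT network during a scan (invented).
observed = ["a4:cf:12:00:00:02", "de:ad:be:ef:00:99"]
for mac in flag_unknown(observed):
    print(f"ALERT: uncataloged device on IoT network: {mac}")
```

A real deployment would feed this from network discovery or switch port data and raise the alert through your monitoring system, but the principle is the same: anything not in the catalog deserves attention.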
While this is in no way intended to be a comprehensive “how to” plan around IoT governance, it certainly acts as a foundation on which to build. Supporting technologies such as Bluetooth, Wi-Fi, and the new 5G network can also be entry points for exploitation. Governments have started to discuss what IoT means for governmental usage, which may one day lead to policies and perhaps even industry-wide regulatory standards.
The Internet of Things is still a relatively new technology. And until consumer and government pressure is put onto device manufacturers with enough force for them to begin building in security mechanisms, the security onus is currently on the user, be that a consumer or organization. Consider the risk landscape, build a threat model to examine potential weaknesses and account for them with activities such as those we’ve covered here today.
Boris Cipot, senior security engineer, Synopsys
NASA’s continuing exploration of Mars with scientific rovers on the red planet’s surface will continue into 2020, when the space agency plans to launch another robotic science rover based on its successful Curiosity rover.
The future mission was unveiled Dec. 4 by NASA as part of a “robust multiyear program” aimed at preparing the nation’s space program to send humans to a Mars orbit by the 2030s, according to a NASA statement.
“The Obama administration is committed to a robust Mars exploration program,” NASA Administrator Charles Bolden said in a statement. “With this next mission, we’re ensuring America remains the world leader in the exploration of the red planet, while taking another significant step toward sending humans there in the 2030s.”
The 2020 Mars rover program, which has not yet been named, would reuse designs, parts and technology from the current Curiosity rover, which has been exploring Mars since landing on Aug. 6. By reusing Curiosity’s proven blueprints, the space agency expects to save substantially on development costs while continuing its exploration of the planet, according to NASA. “This will ensure mission costs and risks are as low as possible, while still delivering a highly capable rover with a proven landing system,” said the space agency. “The mission will constitute a vital component of a broad portfolio of Mars exploration missions in development for the coming decade.”
Full details of what that 2020 mars mission will entail have not yet been determined. The specific payload and science instruments for the mission will be debated and selected later through an open competition after the scientific objectives for the mission have been formulated, according to NASA. The mission will also be contingent on receiving adequate funding.
NASA’s Mars exploration efforts in the next decade or more will also include the 2013 launch of the Mars Atmosphere and Volatile EvolutioN (MAVEN) orbiter, which will study the Martian upper atmosphere, as well as a mission called Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight), which will take the first look into the deep interior of Mars.
NASA will also participate in the European Space Agency’s 2016 and 2018 ExoMars missions, including providing “Electra” telecommunication radios to ESA’s 2016 mission and a critical element of the premier astrobiology instrument on the 2018 ExoMars rover.
“The challenge to restructure the Mars Exploration Program has turned from the seven minutes of terror for the Curiosity landing to the start of seven years of innovation,” astronaut John Grunsfeld, NASA’s associate administrator for science, said in a statement. “This mission concept fits within the current and projected Mars exploration budget, builds on the exciting discoveries of Curiosity and takes advantage of a favorable launch opportunity.”
So far, the Curiosity rover and its onboard Mars Science Laboratory Project are less than four months into a two-year prime mission to investigate whether conditions in Mars’ Gale Crater may have been favorable for microbial life, according to NASA. “The mission already has found an ancient riverbed on the red planet, and there is every expectation for remarkable discoveries still to come.”
One of Curiosity’s main tasks on Mars is checking for organic compounds, the carbon-containing chemicals that can be ingredients for life, according to NASA. “At this point in the mission, the instruments on the rover have not detected any definitive evidence of Martian organics,” the agency reported.
Just this week, Curiosity has analyzed the Martian soil for the first time, according to a Dec. 3 NASA blog post, and has found “a complex chemistry within the Martian soil. Water and sulfur and chlorine-containing substances, among other ingredients, showed up in samples Curiosity’s arm delivered to an analytical laboratory inside the rover.”
This is the first time that a Mars rover has been able to scoop up soil into analytical instruments for a deeper look into the soil and its composition, according to NASA. “The specific soil sample came from a drift of windblown dust and sand called ‘Rocknest.’ The site lies in a relatively flat part of Gale Crater still miles away from the rover’s main destination on the slope of a mountain called Mount Sharp. The rover’s laboratory includes the Sample Analysis at Mars (SAM) suite and the Chemistry and Mineralogy (CheMin) instrument. SAM used three methods to analyze gases given off from the dusty sand when it was heated in a tiny oven. One class of substances SAM checks for is organic compounds—carbon-containing chemicals that can be ingredients for life.”
The research and sampling will continue.
“We have no definitive detection of Martian organics at this point, but we will keep looking in the diverse environments of Gale Crater,” SAM Principal Investigator Paul Mahaffy said in a statement. Mahaffy works at NASA’s Goddard Space Flight Center in Greenbelt, Md.
Much of the science world has been abuzz with excitement since Curiosity’s August landing.
Curiosity successfully fired its rock-melting laser for the first time Aug. 19 as it ran through tests to be sure that the work of its science experiments will be able to proceed as planned.
The rover has been taking spectacular photographs on Mars since arriving after a 354-million-mile, eight-month voyage from Earth. | <urn:uuid:0caf8320-b0cd-40c8-ac4a-b73b9d4553f5> | CC-MAIN-2022-40 | https://www.eweek.com/cloud/nasa-aiming-for-mars-again-with-new-science-rover-in-2020/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00790.warc.gz | en | 0.92364 | 1,135 | 3.140625 | 3 |
Like many other workers in the United States, the members of the CertMag team are enjoying a day off from work in observance of Labor Day. We will return to our normal schedule and activities on Tuesday, Sept. 6. Until then, however, why not test your knowledge of U.S. history with this short quiz about well known U.S. labor leaders?
Based on the information provided, can you name each of the following historical figures? (Answers below.)
1) After leading the famous Pullman Strike as founder and organizer of the American Railway Union (ARU), and subsequently spending six months in prison, this energetic labor activist became a five-time presidential candidate for the Socialist Party of America.
2) Born March 31, 1927 in Yuma, Ariz., this towering labor hero cofounded the National Farm Workers Association (eventually merged into what is now the United Farmworkers Union), emphasized nonviolent protest, and was noted for his spiritual fasts to gain recognition of his various principles and messages.
3) Originally a teacher and dressmaker, this iron lady turned to labor activism after tragedy claimed first her family (her husband and all four of their children died of yellow fever in 1867), and then her livelihood (her dress shop was destroyed in the Great Chicago Fire of 1871).
4) The fourth U.S. Secretary of Labor, serving under presidents Franklin D. Roosevelt (during his entire presidency) and Harry Truman, this lifelong labor activist was the first woman appointed to a cabinet position under an American president and managed the Civilian Conservation Corps, Public Works Administration and Federal Works Agency.
5) Born in London in 1850 (his family moved to the United States when he was 13), this mustachioed cigar maker became a key figure in originating and promoting the ideas of labor organization and collective bargaining, helped found the Federation of Organized Trades and Labor Unions (later reorganized into what is now the AFL-CIO), and was a 32nd-degree Freemason at the time of his death in 1924.
ANSWERS: 1) Eugene V. Debs, 2) Cesar Chavez, 3) Mary Harris "Mother Jones" Jones, 4) Frances Perkins, 5) Samuel Gompers | <urn:uuid:55b45095-f633-4dc0-a1c9-d9db4fb96149> | CC-MAIN-2022-40 | https://www.certmag.com/articles/happy-labor-day-certmag | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00790.warc.gz | en | 0.962487 | 483 | 2.796875 | 3 |
What about this course?
MTA, or Microsoft Technology Associate, is an entry-level certification that covers the fundamentals of a given technology based on Microsoft products. MTA: Networking Fundamentals (exam 98-366) is one of these certifications, and it will give you essential networking knowledge and skills. This certification can be your first step into networking and toward more advanced Microsoft certifications such as MCSA and MCSE. It can be considered the Microsoft counterpart of Network+ or Cisco ICND1/CCENT. This course will help you prepare for the certification by building a network step by step, going from LAN, addressing, switching, topologies, media types, and networking models to WAN, routing, remote access, security, and network services. With each step, you will learn the theory and the practical skills related to it. As a bonus, you will also learn to use tools that will help you in your studies, such as Packet Tracer, GNS3, and Wireshark.
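Subnetting, one of the topics this course covers, is easy to experiment with before touching lab gear. As a small self-study sketch (not part of the course materials), Python's standard-library `ipaddress` module answers the usual subnetting questions:

```python
import ipaddress

net = ipaddress.ip_network("192.168.10.0/26")

print(net.netmask)           # 255.255.255.192
print(net.num_addresses)     # 64 addresses in the block
hosts = list(net.hosts())    # usable hosts (network and broadcast excluded)
print(hosts[0], hosts[-1])   # 192.168.10.1 192.168.10.62
print(ipaddress.ip_address("192.168.10.45") in net)  # True
```

Packet Tracer or GNS3 can then be used to verify the same math against live device configurations.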
Instructor for this course
CCDP, CCNPx3 (R&S, Sec & SP), ITILv3, MCSA, VCPx2 (DC & NV)
This course is composed by the following modules
Introduction to Networking
Understanding Numbering Systems
Understanding Network Addressing
Introducing Packet Tracer & Working with Addressing
Understanding ARP & ICMP
Working with ARP & ICMP
Wired Network Media
Token Ring & FDDI
Working with Hubs
Working with Switches
Understanding Subnetting & Default Gateways
Working with Subnetting
Invalid IP Addresses
Configuring Default Gateways
Understanding Proxy ARP
Configuring Hosts with a Default Gateway
Disabling Proxy ARP
Working with VLANs
Working with FTP
Working with TCP & HTTP
Working with the NETSTAT Command
Working with IPv6
Understanding Wireless Media
Configuring & Working with Wireless Devices
Configuring Static Routing
Working with Dynamic Routing - RIP
Working with Tracert & Pathping Commands
Security & Remote Access
Configuring Default Routes & Working with NAT
Configuring & Working with DNS & the NSLOOKUP Command
Configuring & Working with DHCP
Working with GNS3 & Wireshark
Other TCP/IP Commands
Networking Technology Advancements - SDN
What Is HIPAA Compliance?
The Health Insurance Portability and Accountability Act, or HIPAA, was created in 1996. It is a series of regulatory standards that outline the handling of protected health information, or PHI. The Department of Health and Human Services oversees the regulations, which are enforced by the Office for Civil Rights.
HIPAA sets the national standards for protecting sensitive patient data, including medical records and other patient information. All covered entities that create, maintain, or receive personal medical records must comply, and the required security measures must be followed closely at all times. Business associates that handle PHI or ePHI are covered by HIPAA as well.
Here are the general rules of HIPAA compliance.
- HIPAA Privacy Rule, which sets the national standard for patients’ rights over their health information.
- HIPAA Security Rule, which sets the national standard for the security of maintenance, transmission and handling of electronic protected health information.
- HIPAA Breach Notification Rule, which sets the standards businesses must follow when they experience a breach of PHI data.
- HIPAA Omnibus Rule, an addendum added to extend HIPAA requirements to business associates.
What HIPAA Compliance Means
A business is considered to be HIPAA compliant if it has the required technical, network, and physical security measures in place. Technical security enforces access control, allowing authorized-only access to protected medical records. Network security focuses on securing all data transmission methods, which stops unauthorized access to electronic medical records in transit. Physical security enforces limited access to the systems and facilities that hold electronically protected health information.
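As a toy illustration of the access-control idea behind these technical safeguards (not an actual HIPAA implementation; the roles and record IDs are invented), an authorized-only check might look like this:

```python
# Hypothetical role-based check: only listed roles may read a PHI record.
AUTHORIZED_ROLES = {"physician", "nurse", "billing"}

def can_access_phi(role: str) -> bool:
    """Return True only for roles explicitly authorized to view PHI."""
    return role in AUTHORIZED_ROLES

def read_record(role: str, record_id: str) -> str:
    if not can_access_phi(role):
        # A real system would also write the denial to an audit log.
        return f"DENIED: role '{role}' may not view record {record_id}"
    return f"record {record_id} contents for role '{role}'"

print(read_record("nurse", "PHI-1001"))
print(read_record("visitor", "PHI-1001"))
```

A production system would pair a check like this with unique user authentication and audit logging, both of which fall under the Security Rule’s technical safeguards.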
This standard can be maintained by certified web development team members and certified technicians. These teams ensure that HIPAA’s high security standards are upheld. They comply with the standards for accessing protected private health information and understand the technical safeguards of the HIPAA Security Rule.
HIPAA Compliant IT Services
HIPAA-compliant IT services can be delivered both locally and remotely. For local businesses, the first part of HIPAA-compliant IT is server setup. Routinely scheduled maintenance of the systems is performed by professionals, who are also available for unplanned maintenance and computer repairs. Local businesses will always have ongoing support for their security.
There are remote services as well. Clients can have their computers and online devices repaired and serviced remotely by professional offices, which can offer worldwide service with certified technicians. Those technicians ensure that data is protected and that access is properly authorized.
HIPAA Compliant Web Development Services
The HIPAA Compliant web design and development services will help in all aspects of a company website. The technicians will build the website from the ground up with a user-friendly focus. They make sure that the website’s image is modern and has a responsive mobile design. Having an improved online presence and up-to-date information allows for business growth.
Top Cybersecurity Practices
- Loss Prevention and Data Protection
- Incident Response
- Cybersecurity Policies
- Medical Device Security
- Access Management
- Asset Management
- Network Management
- Vulnerability Management
- Endpoint Protection Systems
- E-Mail Protection Systems
More Tips from Your San Marcos Managed IT Services Team
If your business or medical practice works with sensitive information and you’re looking to improve your security practices, we would recommend starting with our guide to becoming a HIPAA compliant facility. Your organization may also benefit from our general recommendations on improving network security practices. As always, don’t hesitate to consult with tekRESCUE’s San Marcos TX managed IT team to make sure your business is HIPAA compliant! | <urn:uuid:9a126b51-6517-400d-9916-858696d23f8f> | CC-MAIN-2022-40 | https://mytekrescue.com/cybersecurity-practices-for-hipaa-compliance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00190.warc.gz | en | 0.935754 | 734 | 3 | 3 |
Governance versus management—this is a conversation I have been involved in many times over the years, and not just in the IT sphere. Many organizations struggle with drawing a line between these two disciplines. In this article, I attempt to define governance and management and to show where one stops and the other starts.
Defining IT governance and management
Let’s look at both in simple terms:
- The governance function of an organization is responsible for determining strategic direction.
- The management function takes that strategic direction and translates it into actions that will bring the organization closer to achieving the strategic goals.
Governance, when applied specifically to the IT organization and its management, is no different. Those responsible for IT governance will look to the overall governance of the organization aligning with their vision, mission, and goals, and ensuring that the strategic direction being taken within IT aligns with the overall business strategy.
IT governance: Different roles, different duties
Put simply, governance is about leading and management is about doing. Sounds easy, doesn’t it? Unfortunately, the lines are not always as clear as they could be. Somewhere in the middle ground, management and governance often become confused and, fed by this confusion, major problems can grow.
Both functions will see more success when those responsible for governance and management understand their roles clearly and stay within their lanes. In Distinguishing Governance from Management, Barry S. Bader outlines seven guiding questions to determine whether something falls under governance and is thus the board’s responsibility:
- Is it big?
- Is it about the future?
- Is it core to the mission?
- Is a high-level policy decision needed to resolve a situation?
- Is a red flag flying?
- Is a watchdog watching?
- Does the CEO want and need the board’s support?
While Bader was not referring specifically to IT governance and management, the principles remain the same.
If we were living in a perfect world, managers and employees would all know and understand their duties and responsibilities and act on them responsibly. Sadly, that isn't always what happens. That is why the governance function, which is ultimately accountable, must be diligent in its oversight responsibilities.
All organizations will face known and unknown risks. New technology has exacerbated these risks, making them more prevalent and intrusive to business. Those responsible for governance must work closely with IT personnel and senior executives on overseeing risk management and establishing a healthy risk appetite for the business.
Trust for successful governance and management
The critical success factor for IT governance and management is a community of trust. When those in IT governance do not trust those in IT management to undertake initiatives that will meet the strategic goals, the governance folks are apt to step in and try to take over the management function. This is symptomatic of a deeper cultural issue that needs to be resolved.
Persistent confusion between governance and management responsibilities is counterproductive; both sides need to stay in their own swim lanes. If the board is not confident that their managers can deliver on the strategy they have set, then they need to invest in training or coaching to help them succeed, or they need to decide whether they have the right people in the right roles.
Real world success
So, what does the governance–management relationship look like? Imagine that the IT governance group decides that the organization should move all services to the cloud. With this strategic direction decided, it is up to the IT management team to determine how best to achieve this outcome.
The management group tasks groups within the IT organization with investigating options, determining which services can be moved and which ones must stay in-house, and presenting the options in a paper that will then go back to the governance group for a final decision. With all information to hand, the governance team decides on an option to move ahead with. They approve the budget and give the management team a timeframe for completion.
The governance team will now step back and allow the IT organization to undertake the necessary tasks. Management will keep the governance board informed. Unless there are factors at play that impact the ability of the solution to meet the board’s requirements, or there are cost overruns that exceed any allowed contingency, they will leave the implementation of the project to their management team.
The COBIT framework for IT governance
The group that has the responsibility for governance must govern; they must provide leadership and strategy. They must focus on the big picture. Governance is all about planning the framework for work and ensuring that it is done.
That’s why it must be separate from management, which is responsible for organizing and executing the work. The governance group needs to keep away from making managerial-level decisions and from being part of the day-to-day implementation of strategy.
COBIT® (Control Objectives for Information and Related Technologies) is a framework for governance and management, specifically tailored to IT. COBIT clearly separates the governance and management activities using mnemonics:
- Evaluate, Direct, and Monitor (EDM) covers the governance activities. EDM is about ensuring that stakeholder needs are evaluated to identify and agree on objectives that must be achieved, directed through prioritization and decision making, and monitored for performance and compliance against objectives.
- Plan, Build, Run, and Monitor (PBRM) covers the management activities. PBRM is about ensuring that all activities undertaken and monitored are in alignment with the direction set by the governance function.
If you are involved in either the governance or management layers of an IT organization, you will find very valuable insights in the COBIT framework on the ISACA website.
For more on IT governance and management, check out these BMC Blogs:
- IT Governance: An Introduction
- Governance in the ITIL 4 Service Value System
- 5 Great IT Governance Books
- COBIT 2019 vs COBIT 5: What’s The Difference?
- Cloud Governance vs Cloud Management: What’s the Difference?
- IT Risk Management & Governance | <urn:uuid:ef0828e9-3a21-4336-9054-46b288019873> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/governance-vs-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00190.warc.gz | en | 0.950359 | 1,237 | 2.515625 | 3 |
What is Data Encryption and Why is it Important?
Data encryption means putting computer data into code so that it can only be read by somebody with the correct decryption “key”. Data encryption usually works as an extra level of defense that means even if somebody accesses data without authorization (for example after a physical breach or interception), they cannot use it. Many forms of data encryption bring extra benefits such as proving a message has not been tampered with.
In extremely simplified terms, encryption turns readable data into a string of characters that look meaningless to a human reader. The encryption or decryption key is itself a string of characters which a computer uses to perform the transformation and/or reverse the process.
Older forms of data encryption use a symmetric approach, meaning the same key is used to encrypt and decrypt the data. This can be simpler, but it increases the risk that somebody who gets hold of the key can decode every message encoded with it.
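To make the symmetric idea concrete, here is a deliberately insecure toy example in Python: the same key both encrypts and decrypts, which is exactly why anyone who obtains the key can read every message. Real systems use vetted algorithms such as AES, never an ad-hoc XOR cipher like this.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying the same function twice with the same key restores the data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
message = b"Meet at noon"

ciphertext = xor_cipher(message, key)    # encrypt
plaintext = xor_cipher(ciphertext, key)  # decrypt with the SAME key

print(ciphertext != message)  # True: unreadable without the key
print(plaintext == message)   # True: the shared key recovers the message
```

The single shared key is both the strength (simplicity, speed) and the weakness (anyone holding it can decrypt everything) of symmetric schemes.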
Many newer forms of data encryption use asymmetric cryptography. Here each party holds a key pair: a public key that can be shared openly and a private key that is kept secret. The sender encrypts data with the recipient's public key, and only the matching private key can decrypt it. This combines practicality and security: the sender can reuse the same public key for every message, but only the intended recipient can decrypt it.
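Asymmetric encryption can be illustrated with the classic textbook RSA example. The numbers here are deliberately tiny for readability; real keys are thousands of bits long and use padding schemes, so this is an illustration of the public/private split, not a usable cipher.

```python
# Textbook RSA with deliberately tiny numbers -- for illustration only.
p, q = 61, 53
n = p * q  # modulus, part of both keys: 3233
e = 17     # public exponent, shared with everyone
d = 2753   # private exponent, kept secret; e*d = 1 (mod (p-1)*(q-1))

def encrypt(m: int) -> int:
    return pow(m, e, n)  # anyone can encrypt with the public key (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)  # only the private key (d, n) can decrypt

message = 65
ciphertext = encrypt(message)
print(ciphertext)           # 2790
print(decrypt(ciphertext))  # 65 -- original message recovered
```

The asymmetry is the point: publishing (e, n) lets anyone send you secrets, while d never leaves your hands.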
Older encryption methods used keys that proved too short and simple to be secure in the long term. Hackers were able to use brute force attacks, in which a computer makes repeated attempts to decrypt data by simply trying every possible key. This became more viable as computing power increased.
Modern encryption methods use longer and more complicated keys. These provide exponentially greater security, meaning adding only a few characters to a key makes it dramatically harder to beat with a brute force attack. For example, adding a single letter to a key multiplies the number of possible keys by 26. Some encryption methods use keys as long as 256 bits. In many cases, even the fastest computers available today would have to try combinations for trillions of years before hackers could be confident of success.
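The arithmetic behind that claim is easy to check. Assuming keys drawn from the 26 lowercase letters, each extra letter multiplies the search space by 26, and a 256-bit key has 2^256 possibilities:

```python
# How key length changes the brute-force search space.
letters = 26
for length in (8, 12, 16):
    print(f"{length}-letter key: {letters**length:.3e} possible keys")

# A 256-bit key has 2**256 possibilities. Even at a generous
# trillion guesses per second, searching half the space takes:
guesses_per_second = 10**12
seconds = 2**256 / 2 / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years")
```

The exponential growth is why a small increase in key length buys an enormous increase in brute-force resistance.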
Data encryption works in a wide range of computer settings. For example, a secure website (whose address begins HTTPS:// rather than HTTP://) means the data is encrypted on its journey between your computer and the website itself. Most modern email services use encryption.
You can also use encryption to protect the files on your computer or portable device. This reduces the risk of somebody who steals or gains physical access to your machines being able to access the files themselves.
Unfortunately, data encryption can also be used for harm. Ransomware attacks are where a criminal uses malware to infect a computer or network and encrypt the data on it. The criminals then demand a “ransom” payment to restore access. Some victims, particularly those without adequate data backups, decide that paying up is the best option.
Contact CPI Solutions today to learn more about how data encryption can protect your confidential information without disrupting your workflow. | <urn:uuid:d9d0e7fa-85d2-4f16-bba8-f430b1758fa1> | CC-MAIN-2022-40 | https://www.cpisolutions.com/blog/what-is-data-encryption-and-why-is-it-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00190.warc.gz | en | 0.932441 | 598 | 3.984375 | 4 |
Business Continuity Plan
What is a business continuity plan?
A business continuity plan is a framework that details what will happen in the event of a disruption to business operations. It is part of an emergency management policy that connects the emergency response phase to the recovery phase.
Start with an evaluation
Creating a business continuity plan requires a thorough evaluation of the impacts disruption may have to every aspect of the business, from people to processes to supply chains. It provides a way to respond to and mitigate potential emergencies. Threats to continuous operations include such events as natural disasters, supply chain failures, cyber-attacks, the loss of a key employee, and, especially, pandemics.
Deal with the unexpected
Fundamentally, a good business continuity plan helps an organization face the unexpected. In most scenarios, solutions involve maintaining system redundancy, failover, and workplace recovery, and IT infrastructure is central to them all. From offsite backup to cloud partner platforms to remote access, every aspect must be carefully evaluated and strengthened.
Test, maintain, and update
And not only should an organization have a business continuity plan in place, but it needs to test, maintain, and update the plan. Such maintenance requires time and dedicated resources, which are necessary expenses when it means staying in business. Analysts say that continuity planning is an active and recurrent process rather than a one-time project that’s forgotten once completed. Without that effort, the plan may actually end up being the weakest link in recovery efforts.
Modern Organizations Must Plan for Disruptions
To remain successful, resilient companies prepare to handle disruptive events and keep operations running with business continuity planning. With a formal business continuity plan, organizations ensure that they can continue to function under any circumstances. Being prepared in advance can mean the difference between being able to restart operations and coming to a standstill.
How business continuity plans adapt and change
Business continuity traditionally addressed operational recovery, requiring a formal potential risk assessment and a proactive buildout of solutions for each instance. But since COVID-19 struck, corporations have realized that they need a more elastic approach to preparing for future events.
Instead of simply ensuring that networks continue running and that people can access them when emergencies happen, the pandemic required setting up a stable network with the capacity for significantly higher numbers of people to log in remotely. And firms needed not only the underlying infrastructure to support everything, but also effective communication and collaboration tools.
In addition, a higher frequency of cyberattacks and ransomware threats has challenged IT departments to use the absolute highest level of cybersecurity across every access point on the network. That’s no small feat as more people work remotely and as cyberattacks continue to evolve. To meet this challenge, IT departments must also take into account the learning curve employees experience throughout cybersecurity’s evolution.
In the end, all these steps keep an organization flexible and agile. Rather than merely observing and reacting to disruptions, a mature business continuity plan ensures that technology, processes, people, and operations are all aligned so the organization can quickly adjust to emerging crises and adapt as the situation changes.
Why is a business continuity plan important?
The real question is what does a firm risk by going without a business continuity plan? Short answer: a lot and sometimes everything. Many have discovered after disastrous events that a failure to plan can mean the failure of the entire business itself. Going without can lead to a “game over” scenario. Lost revenue, lost customers, lost profit—the list is dismal.
But while most firms recognize the need for a business continuity plan, they often don’t consider what else is on the line. Without a business continuity plan, firms not only face the risk of being offline and losing precious revenue, they also risk the loss of corporate reputation and market leadership. It takes years just to make people aware of any given organization. The prospect of losing a hard-won reputation and a position on the leading edge of an industry is almost unthinkable, and its cost incalculable.
The prudent step to take is one best started yesterday. And while not all risk can be completely averted, a solid business continuity plan can ensure that the lights stay on and customers continue to be served.
How does technology help ensure business continuity?
Organizations are investing in digital changes for more than just business continuity. They are putting money into solutions that accelerate their growth, respond more quickly to demand, and communicate more effectively with their customers. Read below for a few ways digital solutions have helped with business continuity and transformation.
Healthcare: Mission critical connectivity and rapid response times are crucial in healthcare and emergency settings. With as-a-service models, hospitals and first responders are ensuring fast, continuous access for critical applications through “rapid-response” healthcare. Other solutions being used include cloud-native platforms that allow remote physicians secure access to patient record systems while meeting privacy, security, and other regulatory compliance requirements.
Small Business: As small businesses often face challenges to staying afloat, cost and efficiency remain paramount. Many have found simple, secure server virtualization solutions. Virtual desktop infrastructure solutions allow small business to navigate the demands of a secure and productive remote workforce. The best solutions deliver secure, efficient access to applications and data and support a wide range of user requirements.
Call Centers and Schools: As call centers support remote working and many schools continue to offer virtual education and distance learning, high-performance reliability to securely access records and resources is critical. Both are finding solutions that allow for anywhere education and maintain call center responsiveness with IAP-VPN or Remote Access Points (RAPs).
The ultimate outcome of business continuity assessment and follow-up is a shift to a new normal driving innovation and breakthrough, and sometimes even leading to new business models.
Business continuity secured with HPE solutions
Hewlett Packard Enterprise helps support operations and business productivity and planning and execution services to speed business results. Take advantage of HPE’s decades-long experience with infrastructure and explore their solutions below:
With HPE servers and software, you can monitor your local workplace and remote workplaces at the same time. Pre-configured HPE ProLiant ML and DL servers and software are easy to deploy. HPE Integrated Lights Out (iLO) server management software enables local and remote monitoring and management. And HPE SMB Setup Software, part of intelligent provisioning, provides a simple, guided process for installation that takes less time and reduces the chance of errors.
As-a-service models offer the flexibility of cloud with the control, security, and reliability found in on-premises data centers. By paying for IT resources and capacity as you use them and when you need them, you can reduce or even eliminate IT capital expenses and operations costs. In addition, with as-a-service models, IT resources can be expanded quickly based on business needs, and IT operations are simplified. HPE offers a market-leading IT-as-a-service offering that brings the cloud experience to your on-premises infrastructure with HPE GreenLake.
Small businesses can find several options to deploy virtual desktop infrastructure and implement centralized storage and security, data protection, 24x7 availability, and optional archiving and disaster recovery storage. At HPE, our server virtualization solutions are built on HPE ProLiant servers with scalable and optimized processors.
Protect against attacks and quickly recover from downtime with built-in security features from HPE ProLiant Gen10 that reduce security risks and disruptions. Add peace of mind with backup and archiving storage options and protect data at rest with HPE Secure Encryption.
With solutions like these, you can be prepared for the threat of a shutdown with secure and reliable access to data with infrastructure and software that take into account the complexity and diversity of the infrastructure and variability of demand. HPE solutions allow you to assess evolving developments and find technology that can help with operational process sufficiency. | <urn:uuid:ff914ed8-6726-46e7-961a-76a03fed9d75> | CC-MAIN-2022-40 | https://www.hpe.com/fr/fr/what-is/business-continuity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00190.warc.gz | en | 0.938066 | 1,620 | 2.6875 | 3 |
February 12, 2019 | Written by: IBM Research Editorial Staff
Share this post:
Today, an artificial intelligence (AI) system engaged in a live, public debate with a human debate champion at Think 2019 in San Francisco (watch replay). At an event sponsored by IBM Research and Intelligence Squared U.S., the champion debater and IBM’s AI system, Project Debater, began by preparing arguments for and against the resolution, “We should subsidize preschool.” Both sides then delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary.
Harish Natarajan, who holds the world record for most debate competition victories, took on IBM Project Debater, the first AI system that can debate humans on complex topics, at a live debate at IBM Think 2019. While the live audience named Harish the winner of the debate, a majority said Project Debater better enriched their knowledge, underscoring the AI technology’s potential to help human’s make better and more informed decisions. (credit: Visually Attractive for IBM)
Project Debater made an opening argument that supported the resolution, making use of facts on the upsides of subsidizing preschool, citing studies and quoting historical figures. Her view was grounded in the premise that subsidizing preschool isn’t just a matter of finance — it’s a moral and political imperative to support the most vulnerable members of society. She cited research showing that investment in preschool results in more successful lives, including better income and health as well as a decreased likelihood to be involved in crime.
The human debater, Harish Natarajan — who holds the record for most competition victories — opposed the resolution, arguing that a preschool subsidy doesn’t effectively address the root causes of poverty and is simply “a politically motivated giveaway” to the middle class. While acknowledging that poverty is a terrible condition that must be addressed by government and societal resources, he said other programs were more effective. A subsidy, he argued, would simply be a giveaway to people who likely already have their children enrolled in preschool.
Both sides had only 15 minutes to prepare for the debate, affording neither the chance to train on the topic. In other words, an AI system engaged with an expert human debater, listened to his argument and responded convincingly with its own, unscripted reasoning to persuade the audience to consider its position on a controversial topic. This represents another important step in the long-term journey to teach AI to master human language.
Both Project Debater and Natarajan were able to offer valuable and interesting discussions, but differed in their approach and style. The AI system pulled in data that supported its view, while Natarajan used his significant skills to reframe the debate about where government dollars could be best used to ensure societal equality.
The winner of the event was determined by the debater’s ability to convince the audience of the persuasiveness of the arguments. Results were tabulated via a real-time online poll. Before the debate, 79 percent of the audience agreed that preschools should be subsidized, while 13 percent disagreed (eight percent were undecided). After the debate, 62 percent of poll participants agreed that preschools should be subsidized, while 30 percent disagreed, meaning Natarajan was declared the winner. Interestingly, 58 percent said that Project Debater better enriched their knowledge about the topic at hand, compared to Harish’s 20 percent.
At an event moderated by Intelligence Squared U.S.’s John Donvan (left), AI system IBM Project Debater (center) and world champion debater, Harish Natarajan (right), debated the resolution “We Should Subsidize Preschool.” Both sides delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. The topic of the debate was shared with both Harish and Project Debater some 15 minutes before the event started, affording neither the chance to train on it. (credit: Visually Attractive for IBM)
Moderating the evening’s debate was John Donvan, four-time Emmy Award winner and host of the Intelligence Squared U.S. debate series. Following the debate, Donvan engaged Natarajan and two of the primary researchers behind Project Debater, Dr. Ranit Aharonov and Dr. Noam Slonim, in an onstage discussion. All agreed it was a fascinating and historically significant evening. Aharonov explained that the potential of Project Debater lies in its ability to “understand both sides of a problem and present all the pros and cons so you have a wider view of the topic and then can make a better decision.”
Slonim later added, “Ultimately, what we saw was that the interaction of man and machine could be enriching for both. It’s not a question of one being better than the other, but about AI and humans working together.”
In addition to clients, business partners, press, analysts and social influencers, students from local debate teams including Dougherty Valley and the Bay Area Urban Debate League attended the debate. Many students were excited by how well Project Debater performed. “I was really very impressed by how well the IBM machine could create responses based on the arguments that Harish presented,” said Rishi Balakrishnan, a student at Bellarmine College Preparatory School in San Jose, CA.
Project Debater’s first live public debate took place in June before a small, select group. At Think, IBM endeavored to share the science and spectacle of a live debate with a large in-person audience, and thousands more watching via livestream. The goal of the debate, said IBM Research director Dario Gil, was not to discover who is right or which side won, but to “master the complex and rich world of human language.” On Monday night, they came one step closer.
Join the debate
If you’re excited about what you saw in the live debate and want to experience a debate for yourself, check out Project Debater – Speech by Crowd, an experimental cloud-based AI platform for crowdsourcing decision support. The technology uses the core AI behind Project Debater to collect free-text arguments from large audiences on debatable topics and automatically construct persuasive viewpoints to support or contest the topic.
We’ll feature Project Debater – Speech by Crowd all week at Think, analyzing the pros and cons of the topic, ‘Flu vaccination should be mandatory’ as viewed by the crowd. Whether or not you’re attending Think, you can participate.
Visit the Project Debater – Speech by Crowd experience at Think (booth 429, Moscone South) or online every day to weigh in and make your voice heard. Contribute your most thoughtful and inspiring arguments, then check back throughout the week to see the pro and con speeches Project Debater constructs – and whether your argument is included.
You can watch a replay of the live debate at Think here. To add your arguments to Speech by Crowd, visit the online experience. | <urn:uuid:fdf9d32c-7f65-405b-a2f5-8f34032cc222> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/research/2019/02/ai-debate-recap-think-2019/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00190.warc.gz | en | 0.957348 | 1,492 | 2.59375 | 3 |
Because pedestrians at an intersection tend to trust traffic lights more than self-driving cars, cities might be safer if cars communicated their intentions through traffic lights rather than signaling pedestrians directly.
Automated vehicles don’t have human operators to communicate their driving intentions to pedestrians at intersections. My team’s research on pedestrians’ perceptions of safety shows their trust of traffic lights tends to override their fear of self-driving cars. This suggests one way to help pedestrians trust and safely interact with autonomous vehicles may be to link the cars’ driving behavior to traffic lights.
In a recent study by my team at the University of Michigan, we focused on communication via a vehicle’s driving behavior to study how people might react to self-driving cars in different situations. We set up a virtual-reality simulator that let people experience street intersections and make choices about whether to cross the street. In different simulations, self-driving cars acted either more or less like an aggressive driver. In some cases there was a traffic light controlling the intersection.
In the more aggressive mode, the car would stop abruptly at the last possible second to let the pedestrian cross. In the less aggressive mode, it would begin braking earlier, indicating to pedestrians that it would stop for them. Aggressive driving reduced pedestrians’ trust in the autonomous vehicle and made them less likely to cross the street.
However, this was true only when there was no traffic light. When there was a light, pedestrians focused on the traffic light and usually crossed the street regardless of whether the car was driving aggressively. This indicates that pedestrians’ trust in traffic lights outweighs any concerns about how self-driving cars behave.
Why it matters
Introducing autonomous vehicles might be one way to make roads more safe. Drivers and pedestrians often use nonverbal communication to negotiate safe passage at crosswalks, though, and cars without drivers can’t communicate in the same way. This could in turn make pedestrians and other road users less safe, especially since autonomous vehicles aren’t yet designed to communicate with systems that make streets safer, such as traffic lights.
Other research being done in the field
Some researchers have tried to find ways for self-driving cars to communicate with pedestrians. They have tried to use parts that cars already have, such as headlights, or add new ones, such as LED signs on the vehicle.
However, unless every car does it the same way, this strategy won’t work. For example, unless automakers agreed on how headlights should communicate certain messages or the government set rules, it would be impossible to make sure pedestrians understood the message. The same holds for new technology like LED message boards on cars. There would need to be a standard set of messages all pedestrians could understand without learning multiple systems.
Even if the vehicles communicated in the same way, several cars approaching an intersection and making independent decisions about stopping could cause confusion. Imagine three to five autonomous vehicles approaching a crosswalk, each displaying its own message. The pedestrian would need to read each of these messages, on moving cars, before deciding whether to cross.
Our results suggest a better approach would be to have the car communicate directly with the traffic signal, for two reasons.
First, pedestrians already look to and understand current traffic lights.
Second, a car can tell what a traffic light is doing much sooner by checking in over a wireless network than by waiting until its camera can see the light.
This technology is still being developed, and scholars at Michigan’s Mcity mobility research center and elsewhere are studying problems like how to send and prioritize messages between cars and signals. It might effectively put self-driving cars under traffic lights’ control, with ways to adapt to current conditions. For example, a traffic light might tell approaching cars that it was about to turn red, giving them more time to stop. On a slippery road, a car might ask the light to stay green a few seconds longer so an abrupt stop isn’t necessary.
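As a rough sketch of what such vehicle-to-infrastructure messaging could look like, consider the toy Python model below. The field names and decision rule are hypothetical, loosely inspired by signal phase and timing (SPaT) messages; this is not an implementation of any deployed standard.

```python
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    """Hypothetical message a traffic light broadcasts to approaching vehicles."""
    intersection_id: str
    current_state: str        # "green", "yellow", or "red"
    seconds_to_change: float  # time until the state changes

def plan_approach(msg: SignalPhaseMessage, seconds_to_reach_light: float) -> str:
    """Toy decision rule: brake early unless the light stays green on arrival."""
    if msg.current_state == "green" and msg.seconds_to_change > seconds_to_reach_light:
        return "proceed"
    return "begin braking early"  # a gradual stop signals intent to pedestrians

msg = SignalPhaseMessage("5th-and-Main", "green", seconds_to_change=3.0)
print(plan_approach(msg, seconds_to_reach_light=8.0))  # begin braking early
print(plan_approach(msg, seconds_to_reach_light=2.0))  # proceed
```

The point of the sketch is that a wireless message arrives well before a camera can see the light, so the car can start a smooth, pedestrian-legible stop much earlier.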
To make this real, engineers and policymakers would need to work together on developing technologies and setting rules. Each would have to better understand what the other does. At the same time, they would need to understand that not every solution works in every region or society. For example, the best way for traffic lights and self-driving cars to communicate in Detroit might not work in Mumbai, where roads and driving practices are far different.
This article was first posted on The Conversation. | <urn:uuid:88260bb3-b6b4-4498-880a-c459385fa764> | CC-MAIN-2022-40 | https://gcn.com/emerging-tech/2020/04/linking-self-driving-cars-to-traffic-signals-might-help-pedestrians-give-them-the-green-light/290313/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00190.warc.gz | en | 0.962237 | 908 | 3.40625 | 3 |
The first computer-based speech-synthesis systems originated back in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968 at the Electrotechnical Laboratory in Japan. Since then, text-to-speech (TTS) technology has taken online written content to a new dimension of accessibility. With the click of a button or the touch of a finger, TTS can take the words on a computer or any other digital device and convert them into sound. The quality of a speech synthesizer is judged by its similarity to the human voice and by how clearly it can be understood, so the most important qualities of a speech-synthesis system are naturalness and intelligibility. Naturalness describes how closely the output resembles human speech, while intelligibility is the ease with which the output is understood. Together, they allow people with visual impairments or reading disabilities to listen to written words on any device.
On the other hand, research ranks Arabic among the most complex languages in the world, second only to Mandarin. Based on that, we decided to face the challenge head-on and introduce you to our Nun Arabic TTS engine.
Our Nun Arabic TTS engine was developed by more than 15 professional Arabic linguists. Nun TTS stands out from other TTS engines in how natural, familiar, and human-like it sounds, and in how accurately and fluently it handles the vowels and grammatical complexity of the authentic Arabic language.
Here is a sample of our unique voice
Two of the most distinctive features of Nun Arabic TTS, which our linguists invested considerable time and effort to develop, are text normalization and automatic diacritization: the engine assigns phonetic transcriptions to each word and divides and marks the text into prosodic units such as clauses, phrases, and sentences.
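Text normalization, in general, converts written forms that are not spelled out (digits, abbreviations, symbols) into readable words before synthesis. The Python sketch below is a much-simplified English illustration of the idea; it is not the Nun engine's actual pipeline, and real Arabic normalization (with diacritization) is far more involved.

```python
import re

# Tiny illustrative normalizer: expand a few abbreviations and spell out digits.
ABBREVIATIONS = {"Dr.": "Doctor", "St.": "Street", "%": "percent"}
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def normalize(text: str) -> str:
    for abbr, expansion in ABBREVIATIONS.items():
        text = text.replace(abbr, expansion)
    # Spell out each digit; real systems read whole numbers, dates, currencies.
    text = re.sub(r"\d", lambda m: " " + DIGITS[m.group()] + " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Dr. Smith lives at 42 Oak St."))
# Doctor Smith lives at four two Oak Street
```

A production normalizer replaces these lookup tables with context-aware rules and models, so that "42" can be read as "forty-two" in an address but digit by digit in a phone number.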
Nun TTS also draws on two other strong features, natural language processing and machine learning, which focus on the pronunciation of complex Arabic words, phrases, names, date formats, times, currencies, and abbreviations, just like a native Arabic speaker.
Here is a sample of Arabic Names pronunciation
Nun Arabic TTS is backed by a powerful AI engine for a real time live experience where it can read not only static but dynamic data as well.
When it comes to Arabic dialects, Nun TTS is originally designed to read in Modern Standard Arabic but we also backed it with a big library so you can ask for your own local dialect human a-like Text-to-Speech voice. | <urn:uuid:b6813c7e-8fcc-4380-b2d4-d5233fc0ab83> | CC-MAIN-2022-40 | https://www.istnetworks.com/blog/the-most-human-alike-text-to-speech-engine/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00190.warc.gz | en | 0.939505 | 513 | 2.640625 | 3 |
Remote code execution (RCE) refers to a class of cyberattacks in which attackers remotely execute commands to place malware or other malicious code on your computer or network. In an RCE attack, there is no need for user input from you. A remote code execution vulnerability can compromise a user’s sensitive data without the hackers needing to gain physical access to your network.
For this reason, RCE vulnerabilities are almost always considered critical, and finding and patching them should be among your top priorities. Network security has come a long way from the worms of the 1980s, and RCE attacks can be remarkably complex and difficult to spot. What does an RCE attack look like in the 21st century, and what can you do to protect your company?
What Is Remote Code Execution (RCE)?
The umbrella of remote code execution is incredibly broad, and it includes a huge variety of attacks and malicious code. Most commonly, attackers exploit zero-day software vulnerabilities to gain deeper access to a machine, network or web application.
Arbitrary Code Execution and RCE
In arbitrary code execution (ACE), a hacker targets a specific machine or network with malicious code. All RCE attacks are a form of arbitrary code execution, but not all arbitrary code execution is remote. Some ACE attacks are performed directly on the impacted computer, either through physically gaining access to the device or getting the user to download malware. RCE attacks, on the other hand, are performed remotely.
How an RCE Attack Works
Because remote code execution is such a broad term, there’s no single way you can expect an RCE attack to act. In general, RCE attacks have three phases:
- Hackers identify a vulnerability in a network’s hardware or software
- In exploiting this vulnerability, they remotely place malicious code or malware on a device
- Once the hackers have access to your network, they compromise user data or use your network for nefarious purposes.
The Goal of an RCE Attack
Once attackers have access to your network through remote code execution, the possibilities for what they can do are nearly limitless. RCE attacks have been used to perform everything from crypto mining to nation-level espionage. This is why RCE prevention is such a high priority in the world of cybersecurity.
Impacts of Remote Code Execution Vulnerability
Just as you wouldn’t give the key to your home to a stranger, don’t allow bad actors access to your company’s network or hardware. Because remote code execution is pervasive, preventing RCE isn’t just the purview of the IT department. Network security is everyone’s responsibility, from the C-suite to the janitors.
Risks of Neglecting RCE Vulnerabilities
Neglecting RCE vulnerabilities comes at more than just the obvious financial cost. If you fall victim to an RCE attack, you risk:
- Eroding consumer trust in your brand
- Paying hefty fines and fees to cover identity protection for compromised user data
- Dealing with your network slowing to a crawl as hackers use it for their own purposes
Types of Damage an RCE Attack Causes
Because remote code execution covers such a wide range of attacks, it’s safe to say that RCE can cause nearly any level of damage to your network. Some famous past examples of remote code execution include:
- The “WannaCry” ransomware that crashed networks across the world, from megacorporations to hospitals, in 2017.
- The Equifax breach, also in 2017, exposed the financial data of nearly 150 million consumers. It was the result of not only one, but a series of RCE vulnerabilities.
- In February 2016, hackers robbed the Bangladesh Bank of nearly $1 billion using an RCE attack on the SWIFT banking network.
Minimizing RCE Vulnerability
While no system will ever be 100% perfect, there are ways to minimize your vulnerability to remote code execution. First, keep your software updated. These security updates protect your software — from your operating system to your word processor — against emerging threats. Deploying technical solutions such as the CrowdStrike Falcon® platform is also an excellent move. Finally, always sanitize user input anywhere you allow your users to insert data.
How to Identify Code Execution Vulnerabilities
The trouble with zero-day exploits is that patching vulnerabilities takes time. This is why it’s critical to be proactive when it comes to sealing the vulnerabilities you know about, and finding the ones you don’t.
How Penetration Testing Can Help You Identify RCE Vulnerabilities
Penetration testing (or pen testing) simulates the actions of hackers, helping to discover your company’s weaknesses before hackers do. This is one of the best things you can do to protect against RCE, as long as you act on the results. This action may include added protection against malware, training to prevent employees from falling victim to phishing attacks and patching any potential exploits you find.
How Threat Modeling Can Help You Identify RCE Vulnerabilities
One of the best ways to get ahead of hackers is to think like a hacker. In threat modeling, you look at what could go wrong, ranking potential threats and proactively creating countermeasures for them. The more people on your side are searching for vulnerabilities, the less likely an RCE attack will be on your network.
Other Ways of Identifying RCE Vulnerabilities
Don’t be afraid to benefit from the work of others. The Log4Shell threats of 2021 were resolved not by any single person, but were identified and patched by teams across the world. Cloud security solutions may prevent the exploitation of some RCE vulnerabilities, but be careful to keep them up-to-date, as hackers can also exploit remote code exploitation vulnerabilities in these programs.
How to Prevent Remote Code Execution Attacks
In addition to penetration testing and threat modeling (described above), there are a lot of ways to prevent remote code execution vulnerability from becoming a problem for your company.
Precautions and Best Practices for Preventing RCE Attacks
The single best thing your company can do to prevent RCE attacks on a technical level is to keep everything on your network updated. This means updating not only your software but any web applications you use as well. Also consider performing a regular vulnerability analysis on your network. If you think performing a penetration test is expensive, you haven’t experienced how much a data breach can cost.
Security is your whole company’s responsibility, so on a more human level, it’s also important to keep your employees trained to spot fraud, phishing and scams. There’s a balance you need to strike: while you want to empower your employees to prevent attacks, you also want to limit their access to sensitive data they don’t need. This is called the principle of least privilege, and it mitigates the negative impact an RCE attack can have on your company or network.
What to Avoid to Prevent RCE Attacks
There are a few things to avoid when it comes to remote code execution attacks.
- Avoid allowing your users (or anyone) to insert code anywhere in your web application. Always assume that people are actively trying to attack you with their user input, because somebody out there is.
- Don’t use certain software just because it’s convenient or popular. Make sure you select software based on your company’s security needs as well.
- Don’t neglect buffer overflow protection! This method of remote code execution has been around for decades, and it’s not going anywhere.
Protect Yourself from RCE Attacks
The world of remote code execution may be massive, but when it comes to protecting your company and yourself, you don’t have to go it alone. From next-generation antivirus software to a complete endpoint security solution, CrowdStrike offers a variety of products that combine high-end technology with a human touch. With CrowdStrike Falcon Spotlight™, you can get real-time vulnerability assessments across all platforms, with no additional hardware required. | <urn:uuid:b6d0de8f-6dc4-4da2-949c-e3b3929e4a1c> | CC-MAIN-2022-40 | https://www.crowdstrike.com/cybersecurity-101/remote-code-execution-rce/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00190.warc.gz | en | 0.927101 | 1,681 | 3.203125 | 3 |
With Windows® 11, the hardware and software work together to protect your business from the end user all the way to the cloud. Discover how Windows can keep your data and teams protected.
Accessibility note: The infographic is transcribed below the graphic.
Approximately 80% of security decision-makers say that software alone is not enough to protect from emerging threats.1 Windows 11 is a Zero Trust-ready OS to protect data and access anywhere.
The first step in Windows 11 Zero Trust protection is to verify explicitly, which means to authenticate and authorize based on all available data points, including user identity, location, device health, service, data classification and anomalies.
The second step is least-privileged access, which controls user access with just-in-time and just-enough-access, risk-based adaptive policies and data protection to secure both data and productivity.
Finally, the assume breach step minimizes the blast radius and segments access. It also allows you to verify end-to-end encryption and use analytics to gain visibility to improve threat detection and defenses.
Outdated hardware leaves organizations vulnerable to attacks and security decision-makers say believe that modern hardware protects against future attacks. Improving upon the innovations of Windows 10, Windows 11 provides additional security capabilities to meet today’s evolving security landscape and enable more hybrid work and productivity. Windows 11 is designed to build a stronger foundation that’s more resilient to cyberattacks.
Keep sensitive data behind additional security barriers separated from the operating system with Windows 11. This information, including encryption keys and user credentials, is protected from unauthorized access and tampering.
Hardware and software work together in Windows 11 to protect your entire organization with virtualization-based security (VBS) and Secure Boot built-in and enabled by default on new CPUs. VBS uses hardware virtualization features to create and isolate a secure region of memory from the operating system. This environment hosts multiple security solutions, greatly increasing protection from vulnerabilities and preventing the use of malicious exploits.
Windows 11 has multiple layers of application security to guard critical data and code integrity. Application isolation and controls, code integrity, privacy controls and least-privilege principles enable developers to build-in security and privacy from the ground up.
To protect privacy, Windows 11 also provide more controls over which apps and features can collect and use data, including device location or access resources like camera and microphone.
Passwords are a prime target for cybercriminals. However, Windows 11 is changing the standard with passwordless protection. After a secure authentication process, credentials are protected behind layers of hardware and software security. This gives users secure, passwordless access to their apps and cloud services.
End users can remove the password from their Microsoft account and use the Microsoft Authenticator App, Windows Hello, smart card or verification code sent to their phone or email. IT admins and consumers can set up Windows 11 devices as passwordless out-of-the-box.
Windows 11 security extends zero-trust from the end user to the cloud, enabling policies, controls, procedures and technologies that work together to protect your devices, data, applications and identities from anywhere.
Microsoft offers comprehensive cloud services for identity, storage and access management in addition to the tools to attest that any Windows device connecting to your network is trustworthy.
The acceleration of digital transformation and evolution of remote and hybrid work brings new opportunities to organizations and their teams. Now more than ever, employees and businesses need the right tools and security to ensure business continuity. Insight and Windows 11 make it easy to stay secure with the right devices and software. Talk to an Insight specialist today to learn more.
¹Microsoft Security Signals, September 2021. | <urn:uuid:92caab52-f2c9-415e-a92f-7dc3819eac4d> | CC-MAIN-2022-40 | https://prod-b2b.insight.com/en_HK/content-and-resources/2022/the-security-guide-to-windows-11.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00190.warc.gz | en | 0.904671 | 751 | 2.765625 | 3 |
Cyber security should be a major concern for any business. One of the most common attacks that is found today is ransomware. Ransomware is a type of attach that involves the lock down of files for a ransom. Once the ransom is paid, then the decryption key is provided. The problem is that these attacks is that it is not recommended that you provide any payment for your files and the payment is generally too expensive for companies to manage anyway. Instead, you should work on trying to increase your cyber security and your defenses against all kinds of cyber attacks.
Back Up Your Files
The best way to prevent these types of attacks from really affecting you is by backing up your data. That way, even if you do fall victim to a ransomware attack, you will still have all of your files available to you in a different and secure location. You will have your second copy and can ignore the attackers. You will not have to find the money to pay for the decryption key. You can simply go on about business as usual. You also need to schedule regular backups. If you have your files backed up but do not regularly back them up, then it will be useless to you. Depending on how often you files change, you may want to back them up weekly, daily, or hourly. Find something that works for your business and schedule regular backups to happen automatically.
Educate Your Employees
Phishing scams are emails that have infecting links that will allow attackers into your system. While you can put up defenses to keep these kinds of attacks at bay, you also need to take the precaution to educate your employees on what to look for so they can avoid failing victim to these attacks. Tell them what to look for and also educate them when new types of attacks are discovered.
Keep Software and Operating Systems Updated
While it may seem like it is a waste of time to go through and update your systems when prompted, it is always smart to do that as soon as possible. When systems are updated, it is usually because there is a patch that can help protect your business by fixing the vulnerability in the system. You do not want to leave these vulnerabilities open so encourage all employees to make these updates as soon as possible.
Secure Personal Devices
If you allow your employees to use personal devices for work, you need to take the time to develop a policy and a way to better protect all data that is used on these devices. You should only allow certain data to be reached and also encrypt the data. You should make sure all devices have a password set on them. You should also try to separate personal and corporate data as much as possible.
For more information about how to protect your business, be sure to contact Interplay today at (206) 329-6600 or email@example.com. | <urn:uuid:f675a1cf-fe7c-4980-90fa-f9f7a217afd3> | CC-MAIN-2022-40 | https://www.interplayit.com/blog/the-best-ways-to-increase-cyber-security-in-your-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00190.warc.gz | en | 0.96708 | 570 | 2.53125 | 3 |
Asymmetric encryption is an encryption technique in which two different yet mathematically linked keys are used to encrypt and decrypt data exchanged between two communicating systems. The two keys are a public key and a private key. The public key is openly available to everyone. The corresponding private key, on the other hand, can only be accessed by the authorized recipient or system.
The major difference between symmetric and asymmetric encryption is that the symmetric technique uses the same key to encrypt and decrypt data, whereas the asymmetric technique uses two unique keys to encrypt and decrypt data.
In the example of a browser-server communication, when a browser hits the web server requesting for the website, the server responds to the request by presenting its SSL/TLS certificate embedded with its public key. The browser checks the certificate to verify if the website is legitimate. If there are no issues, generates a pre-master key, encrypts it using the public key of the server and sends it back. On receiving the pre-master key, the server uses the private key linked to the public key to decrypt it. Since the private key is known only to the server and no other unrelated key can decrypt the pre-master key, it is safely transmitted without any unauthorized parties accessing it. Once the client and server, both have the pre-master keys, they individually generate a shared secret called the session key. To verify that both of them have generated the same session key, they send each other messages encrypting with the session key. If they are able to decrypt the messages, then the connection is established and the communication switches to symmetric encryption.
When a user visits a website/web page, the browser initiates an SSL/TLS handshake with the web server that hosts the website. The server sends its SSL/TLS certificate to the client on receiving its request to connect. The client verifies its authenticity. If there are no issues, the client generates a unique “session key,” encrypts it using the server’s public key (that’s found on the certificate), and sends it back to the server. The server decrypts this session key with its private key (known only to it). Once the server and the client both have the session key, the handshake switches to symmetric encryption, where the session key is used to encrypt and decrypt all messages exchanged in that particular session.
The most commonly used and popular algorithms for asymmetric encryption are:
The Diffie-Hellman algorithm is a key exchange mechanism that enables two parties who have never met each other to communicate securely over the internet by agreeing upon a shared secret key without actually transmitting it. As the shared secret is derived from complex modular arithmetic calculations performed separately on both ends, a potential hacker will never be able to decode the shared secret, making it a highly secure encryption technique for internet communication.
RSA encryption uses a product of two large prime numbers to generate a public key and a private key. These prime numbers are discarded after the encryption-decryption process. Factoring out such incredibly large prime numbers and their products to derive the private key pair requires immense processing power, making RSA encryption extremely challenging to break.
Digital Signature Algorithm (DSA) is used in digital signatures that serve as proof of the sender’s authenticity and message integrity. A sender digitally signs a message using the private key, and the recipient verifies the identity of the sender and the origin of the document using the sender’s corresponding public key.
Apart from these, the other asymmetric algorithms used are Elliptical Curve Cryptography (ECC) and EI Gamal.
The biggest advantage of asymmetric encryption over its symmetric counterpart is that it uses two different keys for encryption and decryption. Using two different keys eliminates the need for key sharing between communicating parties. While the public key is available for everyone, the private key is accessed only by a single authorized recipient (or system) and is never transmitted or revealed, which greatly reduces the chances of data compromise due to key theft and also guarantees that the message cannot be altered during transit. As key sharing or distribution is not necessary, asymmetric encryption proves highly effective when a large number of endpoints are involved.
Also, asymmetric encryption uses keys with longer key lengths (up to 4096 bits). Longer key lengths amount to stronger encryption and better data security.
One of the challenges with asymmetric encryption is its slow speed and resource consumption. As there are two separate keys with longer key lengths involved, the computing power required to process encryption and decryption is much higher when compared to the symmetric technique. Complex computing increases server overhead and eventually results in slow connections. This is why asymmetric encryption is not applied when a large quantity of data is involved. For example, in the case of the SSL/TLS handshake, asymmetric encryption is only used initially during server authentication. Once the connection is established between the web server and the user’s browser (or client), the handshake immediately switches to symmetric encryption for bulk data transmission. | <urn:uuid:f0def5f8-6575-432a-bb5d-178ae9d28c70> | CC-MAIN-2022-40 | https://www.appviewx.com/education-center/asymmetric-encryption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00391.warc.gz | en | 0.915988 | 1,030 | 3.96875 | 4 |
What Is the Future of Augmented Reality in EdTech?
In simple terms, EdTech is the practice of leveraging IT tools and technology into the classroom to create an inclusive, more engaging, and personalized learning experience. As per industry research, EdTech is expected to reach $680.1 billion by 2027, growing annually at the rate of 17.9%.
Many EdTech companies are digitizing college and competitive learning. These companies have realized how AR acts as a value-added service and improves engagement. By 2023, Augmented Reality is expected to surpass $5.3 billion opening doors to several opportunities for educational institutions and businesses.
How AR works in education?
AR improves the real-world environment with text, sound effects, graphics, and multimedia. Simply put, it improves our immediate surroundings by layering digital content on top of the graphic representation of the real world. AR includes 25% digital reality and 75% existing reality. It means, AR does not replace your environment with the virtual but it integrates virtual objects into the real world.
Owing to the increased attention span among students and the ability to deliver varied information engagingly, teachers are vouching for the use of AR in classrooms. Going ahead, AR stands to benefit students by inducing a problem-solving attitude, delivering learning gains, providing motivation, improving cognitive skills, enhancing interaction, and enabling collaboration. This results in a positive attitude among students making the investment in AR worth it.
Let’s consider a situation wherein a history lesson, students are being taken through a module on Egypt to understand how the pyramids were built. Digital projections and visualization using AR can have a lasting impact on students that will not only help them learn these concepts but will also help them retain them and avail answers related to them, in their exams. Simply put, AR technology can provide limitless possibilities for students and teachers alike.
Benefits of AR in education
Augmented Reality offers several perks in the education sector.
1. Easy access to learning materials
AR helps replace textbooks, physical forms, and printed brochures thereby reducing the cost of learning materials. Augmented Reality makes it easy for everyone to access the material from anywhere.
2. An immersive and effective learning system
Augmented Reality helps students gain knowledge through compelling visuals and immersive content. Additionally, speech technology provides students comprehensive details about a topic in a voice format thereby engaging them. Simply put, AR in education targets a major information-gathering sense in humans.
3. Encourage students and spruce up their interest
AR makes learning interesting, effortless and improves collaboration and capabilities. Additionally, it ensures the classes are less tiring by providing opportunities to implement hands-on learning approaches that can increase engagement, improve the learning experience and help students learn and practice new skills.
Augmented Reality helps bring lessons to life and helps students remember essential details. For example, a teacher can use AR technology to create memorable interactive uses instead of presenting photographs on a projector showcasing life in Colonial America.
While the cost of AR equipment is often cited as a barrier to adoption, most smartphones today are equipped with the hardware needed to run AR apps. AR can lower educational costs by replacing expensive textbooks thereby making them easy to implement.
While AR offers several benefits, some common reasons cited for the slow adoption of the technology in the field of education are;
- Lack of funding
- Bulky AR equipment
- Concerns over AR educational content and its academic value
Use cases of AR in education
Here are the most prominent case uses of AR in the education sector.
- Star Chart is a notable example of AR in education for astronomy students. The app highlights the constellation on the screen when students point their devices at the sky and provides them with a detailed description of the constellation. The app includes information of over 12,000 stars and 88 constellations.
- Complete anatomy is a cross-platform app developed for med students and physicians. The app provides students with over 17,000 human body structures as 3D models. Students can interact with each of them conventionally as well as by projecting body parts on a flat surface.
- The JigSpace app can be a useful AR tool for both education as well as business fields. It helps you create a project demo in a short time. The app allows you to upload 3D models, place them on slides and adjust them according to your needs before presenting them to the audience.
Ways in which AR can be incorporated in the education sector
1. Augmented classrooms
Marker-based AR apps are a popular way of incorporating AR into the conventional classroom. Students can simply scan their textbooks and the app will provide them with illustrations on complicated theoretical explanations. Students can experience first-hand the principles of a subject in a highly fun-filled and interactive way. With AR, the quality of training in critical subjects like science, maths, technology, and engineering could improve vastly.
2. Augmented homework
With AR, teachers can assign students worksheets such that they can explore educational concepts at their own pace and from the comforts of their homes. For example, if the students are unable to crack the answers given in the worksheet, they can simply scan the worksheet with an AR-based app and get pointers towards the right answer.
Dispelling common myths about AR
1. AR is too futuristic
While AR has gained rapid growth in recent years, the term was first coined 25 years ago by Boeing researcher Thomas Caudell. Though many people may not be aware of AR, it is all around us. Interestingly, we don’t need to be technically oriented to experience AR. For instance, say you are watching the live broadcast of a swimming championship. The banners you see with the winners’ names floating on the water surface are nothing but AR.
Simply put, Augmented Reality offers unlimited opportunities and it can be used in many ways to create incredible audience engagement.
2. AR is highly expensive
On the contrary, AR can be cost-efficient compared to other media solutions. Often, in this industry, low price implies low quality especially if it is custom-made. However, in most cases, it could be a one-time investment and the returns can be far higher than what you expect. AR can help achieve a higher level of crowd engagement that is not possible using traditional forms of advertising.
3. AR is difficult to use
Previously, special headsets and programs were required to use AR. However, today, it is not the case. Technology has advanced and users can experience AR by simply pointing their phone’s camera at the source material. They can experience AR with a simple click!
Watch video: Augmented Reality in Education | Transforming Learning Experience
With advances in mobile technologies and hardware, Augmented Reality is becoming a more accessible and widely used technology. As it can be seen, AR has huge potential in the education sector and it is the right time to invest in it.
To get the most out of AR and a comprehensive strategy for your education business, feel free to contact us. | <urn:uuid:2b773312-d050-4d4b-acd7-f1ab160bf897> | CC-MAIN-2022-40 | https://www.fingent.com/blog/what-is-the-future-of-augmented-reality-in-edtech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00391.warc.gz | en | 0.944033 | 1,463 | 2.953125 | 3 |
Cybersecurity experts strive to enhance the security and privacy of computer systems. Quietly observing threat actors in action can help them understand what they have to defend against. A honeypot is one such tool that enables security professionals to catch bad actors in the act and gather data on their techniques. Ultimately, this information allows them to learn and improve security measures against future attacks.
Definition of a honeypot
What does “honeypot” mean in cybersecurity? In layman’s terms, a honeypot is a computer system intended as bait for cyberattacks. The system’s defenses may be weakened to encourage intruders. While cybercriminals infiltrate the system or hungrily mine its data, behind the smokescreen, security professionals can study the intruder’s tools, tactics and procedures. You might think of it as laying a trap for someone you know is coming with bad intentions and then watching their behavior so you can better prepare for future attacks.
Types of honeypots
In the world of cybersecurity, a honeypot appears to be a legitimate computer system, while the data is usually fake. For example, a media distribution company may host a bogus version of a film on a computer with intentional security flaws to protect the legitimate version of the new release from online pirates.
There are several different types of honeypots, each with its own set of strengths. The kind of security mechanism an organization uses will depend on its goals and the intensity of the threats it faces.
A low-interaction honeypot offers hackers emulated services with a narrow level of functionality on a server. The objective of this trap is usually to learn an attacker’s location and nothing more. Low-interaction honeypots are low-risk, low-reward systems.
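As an illustration, a low-interaction honeypot can be little more than a socket listener that imitates a service banner and records who connected. The Python sketch below is a minimal example, not a production tool — the port number and fake login banner are arbitrary choices for the illustration — and it captures only the attacker's address and a timestamp, exactly the narrow, low-risk intelligence this type of trap gathers:

```python
import socket
from datetime import datetime, timezone

def run_honeypot(host="127.0.0.1", port=2323, max_events=1, banner=b"login: "):
    """Listen on a fake service port; record only who connected and when."""
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while len(events) < max_events:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(banner)  # just enough emulation to look real
                events.append({"ip": addr[0],
                               "time": datetime.now(timezone.utc).isoformat()})
    return events
```

In practice such a listener would run on an otherwise unused port, and every recorded event would feed an alerting or deny-listing pipeline rather than a plain list.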
Unlike the low-interaction variety, a high-interaction honeypot offers a hacker plenty to do on a system with few restrictions. This high-interaction ploy aims to study a threat actor for as long as possible and gather actionable intelligence.
Technology companies use email traps to compile extensive deny lists of notorious spam agents. An email trap is a fake email address that attracts mail from automated address harvesters. Because the address is never published for legitimate use, any mail it receives is spam by definition. The mail is analyzed to gather data about spammers, block their IP addresses, redirect their emails, and help protect users from spam.
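A hedged sketch of the email-trap idea follows. The trap address is hypothetical, and a real mail pipeline would be far more involved, but the core logic is just "anyone who writes to the bait address gets deny-listed":

```python
# Hypothetical bait address: published only in places scrapers harvest,
# never given to real correspondents.
TRAP_ADDRESSES = {"trap-7f2a@example.com"}
deny_list = set()

def inspect_message(sender_ip, recipients):
    """Deny-list any sender that mails a trap address.

    Since no human was ever given the trap address, a message to it
    almost certainly came from a harvested address list.
    """
    if TRAP_ADDRESSES & set(recipients):
        deny_list.add(sender_ip)
        return "reject"
    return "accept"
```

In practice, the flagged IP addresses would then feed a shared deny list of the kind described above.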
SQL injection is a code injection technique used to attack databases. To fight such malicious code, network security experts create decoy databases to study flaws and identify exploits in data-driven applications.
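To make the attack concrete, here is a minimal, self-contained demonstration using Python's built-in sqlite3 module and an in-memory database (purely for illustration). The first query builds SQL by string formatting and is injectable; the parameterized version is not:

```python
import sqlite3

def find_user_unsafe(cur, name):
    # VULNERABLE: user input is pasted straight into the SQL text
    cur.execute("SELECT id, name FROM users WHERE name = '%s'" % name)
    return cur.fetchall()

def find_user_safe(cur, name):
    # SAFE: a parameterized query treats the input as data, never as code
    cur.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")   # throwaway decoy-style database
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "nobody' OR '1'='1"        # classic injection payload
print(find_user_unsafe(cur, payload))   # every row leaks: [(1, 'alice'), (2, 'bob')]
print(find_user_safe(cur, payload))     # no match: []
```

A decoy database seeded with fake rows lets defenders watch exactly these kinds of payloads arrive without risking real data.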
A spider honeypot is a honeypot network consisting of links and web pages that only automated crawlers are likely to reach. IT security professionals use spider honeypots to trap and study web crawlers in order to learn how to neutralize malicious bots and ad-network crawlers.
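A toy sketch of the spider-honeypot trick (the trap URL and page content are invented for illustration): a link is embedded where no human will see it, so any client that requests it is almost certainly an automated crawler:

```python
TRAP_PATH = "/media-archive-2016/"   # hypothetical URL, never linked visibly

def render_page(body):
    """Embed a trap link no human will see; crawlers parsing raw HTML will."""
    hidden = ('<a href="%s" style="display:none" rel="nofollow">archive</a>'
              % TRAP_PATH)
    return "<html><body>%s%s</body></html>" % (body, hidden)

suspected_bots = set()

def handle_request(client_ip, path):
    if path == TRAP_PATH:
        suspected_bots.add(client_ip)   # only an automated crawler gets here
        return 403, "blocked"
    return 200, render_page("<h1>Storefront</h1>")
```

Once a client is flagged, its later requests can be throttled, blocked, or simply studied.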
A malware honeypot is a decoy that encourages malware attacks. Cybersecurity professionals can use the data from such honeypots to develop advanced antivirus software for Windows or robust antivirus technology for Mac. They also study the malware attack patterns to enhance malware detection technology and thwart malspam campaigns that deliver threats such as GuLoader.
Pros and cons of honeypot use
Although there are many benefits of honeypots, they can also backfire if they fail to cage their prey. For example, a skilled hacker can use a decoy computer to their advantage. Here are some pros and cons of honeypots:
Benefits of using honeypots
- They can be used to understand the tools, techniques and procedures of attackers.
- An organization can use honeypots to ascertain the skill levels of potential online attackers.
- Honeypotting can help determine the number and location of threat actors.
- They allow organizations to distract hackers from authentic targets.
Dangers and disadvantages of using honeypots
- A clever hacker may be able to use a decoy computer to attack other systems in a network.
- A cybercriminal may use a honeypot to supply bad intelligence.
- Relying on a honeypot as the only source of intelligence can result in a myopic view of the threat landscape.
- A spoofed honeypot can result in false positives, leading IT professionals on frustrating wild goose chases.
While there are pros and cons, careful and strategic use of a honeypot to gather intelligence can help a company enhance its security response measures and stop hackers from breaching its defenses, leaving it less vulnerable to cyberattacks and exploits. | <urn:uuid:eac7ea9f-da89-430f-be08-9bb766b12764> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2021/05/what-is-a-honeypot-how-they-are-used-in-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00391.warc.gz | en | 0.912269 | 927 | 3.515625 | 4 |
Ubuntu is an open-source Linux distribution that is based on Debian. While you can download a copy of the Ubuntu installation media and use it to set up an Ubuntu virtual machine, there is an easier option. Microsoft has greatly simplified the process of deploying Ubuntu virtual machines, particularly on Windows desktops.
In this article, you will learn how to quickly set up Ubuntu on a Windows 10 desktop.
Before we begin, this article assumes that you have already installed Hyper-V on your Windows 10 system. If Hyper-V is not currently installed, you can install it by opening the legacy Control Panel, then clicking Programs. Click the Turn Windows Features On or Off link. Now select the Hyper-V option, shown in Figure 1. Click OK, then follow the prompts to deploy Hyper-V.
Figure 1. You will need to install Hyper-V if it is not already set up.
Create an Ubuntu Virtual Machine
The primary tool for managing Hyper-V virtual machines is the Hyper-V Manager. The simplest way to access the Hyper-V Manager on a Windows 10 machine is to type “Hyper-V” into the search box at the bottom of the Windows desktop. Click on Hyper-V Manager within the list of results.
The copy of Hyper-V that comes with Windows 10 is largely identical to the one included with Windows Server. There are some exceptions, however: enterprise-grade features such as failover clustering and replication are not supported on the desktop version of Hyper-V.
The desktop version of Hyper-V has at least one feature that does not exist on the Windows Server version. The feature, Quick Create, is a tool designed to simplify the process of creating virtual machines. Quick Create allows you to set up new virtual machines without having to worry about manually provisioning virtual hardware or downloading operating system binaries.
Figure 2 shows what the Hyper-V Manager looks like. The Quick Create link is in the upper-right corner of the console (in the Actions section).
Figure 2. This is the Hyper-V Manager.
To create an Ubuntu virtual machine, click the Quick Create link. This causes Windows to open the Create Virtual Machine dialog box. As you can see in Figure 3, Microsoft provides shortcuts for different versions of Ubuntu. To get started, simply select the Ubuntu version you want to deploy, then click the Create Virtual Machine button.
Figure 3. Choose the Ubuntu release that you want to deploy and then click the Create Virtual Machine button.
Even though using Quick Create greatly streamlines the process of setting up a new virtual machine, the process can take some time to complete. That’s because Windows must download the operating system binaries and any other required components prior to beginning the installation process. When the deployment process eventually finishes, you will see a screen like the one shown in Figure 4.
Figure 4. The virtual machine has been created.
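Quick Create is a GUI tool, but the same deployment can be scripted. New-VM, Set-VMMemory, Add-VMDvdDrive, and Start-VM are the actual Hyper-V PowerShell cmdlets; the VM name, memory and disk sizes, and ISO path below are placeholders. As a rough sketch, this helper assembles the commands so they can be reviewed or logged before being run in an elevated PowerShell session:

```python
def hyperv_quickcreate_script(vm_name, memory_mb=4096, vhd_gb=12,
                              iso_path=None, switch="Default Switch"):
    """Assemble PowerShell commands that mirror what Quick Create does.

    New-VM, Set-VMMemory, Add-VMDvdDrive, and Start-VM are real Hyper-V
    cmdlets; the sizes and paths are illustrative values to adapt.
    """
    cmds = [
        f'New-VM -Name "{vm_name}" -Generation 2 '
        f'-MemoryStartupBytes {memory_mb}MB '
        f'-NewVHDPath "{vm_name}.vhdx" -NewVHDSizeBytes {vhd_gb}GB '
        f'-SwitchName "{switch}"',
        f'Set-VMMemory -VMName "{vm_name}" -DynamicMemoryEnabled $true',
    ]
    if iso_path:
        cmds.append(f'Add-VMDvdDrive -VMName "{vm_name}" -Path "{iso_path}"')
    cmds.append(f'Start-VM -Name "{vm_name}"')
    return cmds

for cmd in hyperv_quickcreate_script("Ubuntu-20.04", iso_path="C:/isos/ubuntu.iso"):
    print(cmd)
```

Unlike Quick Create, a scripted path like this does not download the Ubuntu image for you; you would supply your own ISO.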
Clicking the Connect button opens the virtual machine’s console, whereas clicking the Edit button causes Windows to open the Hyper-V Settings page for the newly created virtual machine. You can use the Settings page to adjust the virtual hardware allocation (e.g., add more memory to the virtual machine). However, the default settings are typically adequate unless you plan to run a resource-intensive workload within the virtual machine.
The only thing left to do at this point is to finish setting up Ubuntu. To do so, just connect to the virtual machine (you may need to start the virtual machine), then follow the prompts.
As you can see in Figure 5, for example, you will need to specify the language that you want to use.
Figure 5. Complete a few minor configuration tasks and Ubuntu is ready to use. | <urn:uuid:da71ebf5-c0c7-449b-92be-b741af3a2323> | CC-MAIN-2022-40 | https://www.itprotoday.com/compute-engines/how-create-ubuntu-virtual-machines-easy-way | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00391.warc.gz | en | 0.851212 | 774 | 2.609375 | 3 |
Cloud computing services are becoming mandatory parts of the modern business world. Most organizations are using one or more types of cloud-based services, whether that’s Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS).
SaaS is the most common model among cloud computing services, but IaaS and PaaS serve equally important functions for businesses. Both IaaS and PaaS meet end users' demands to collect, store, and process large amounts of data. In this article, we discuss IaaS vs PaaS for a better understanding of these cloud-based services.
Read more: Creating a Cloud Strategy: Tips for Success
What Is IaaS?
Infrastructure-as-a-Service (IaaS) is a form of cloud computing that provides virtualized computing resources to consumers over the internet on a pay-as-you-go and on-demand basis. These virtualized resources include essential computing, storage, and networking resources.
IaaS helps consumers gain real-time business insights without the higher maintenance costs of on-premises data centers and hardware. IaaS gives users the flexibility to scale IT resources up and down as needed.
It also helps users quickly provision new applications and increases the reliability of the underlying infrastructure. Compared with maintaining on-premises infrastructure, IaaS is easier to use, faster, more flexible, and more cost-efficient. The cloud provider manages the IT infrastructure, delivering services to subscriber organizations through virtual machines accessible over the internet.
When Should You Use IaaS?
IaaS is an alternative to on-premises infrastructure that specifically helps network architects and system administrators. Here are the primary use cases for IaaS:
- You want to have control. With IaaS, providers manage servers and storage, but your organization gets to manage everything running on the infrastructure.
- Your company is growing. With IaaS, you can make changes as your needs evolve, or depending on traffic spikes and valleys.
- You want to increase your stability, reliability, support, and security. With IaaS, there’s no need to maintain and upgrade hardware, or troubleshoot equipment problems.
Examples of IaaS providers include:
- Azure Virtual Machines
- Virtual Machine Manager
- Alibaba Elastic Compute Service
Read more on ServerWatch: Best Cloud-Based Services & Companies
What Is PaaS?
Platform-as-a-Service (PaaS) is a category of cloud computing that provides users a complete cloud-based platform for developing, running, and managing their applications. These services are typically associated with developing and launching applications, allowing developers to build, maintain, and package such software bundles.
In PaaS, a third-party provider delivers hardware, software tools, and infrastructure to users over the internet. Usually, these are used for application development. Users can purchase the resources as needed from a service provider on a pay-as-you-go basis, accessing them over a secure internet network. The users manage the applications and services they develop, and the cloud service provider typically manages everything else.
When Should You Use PaaS?
PaaS is an alternative to traditional hardware and software development tooling, helping developers build and deploy applications. PaaS use cases include:
- You need to build software without the infrastructure overhead. If you don't want the trouble of provisioning servers and networks or managing databases, PaaS provides the virtual platforms and tools to create, test, and deploy your applications and services.
- Multiple remote developers are working on the same project. PaaS can provide you with a great environment, speed, and flexibility for your entire process, regardless of where your developers are located.
- You are rapidly developing or deploying an application. PaaS can help reduce costs and simplify the challenges associated with quickly shipping an application.
Examples of PaaS providers include:
- AWS Elastic Beanstalk
- Oracle Cloud Platform
- Google App Engine
- Microsoft Azure
- Salesforce aPaaS
- RedHat OpenShift
- Mendix aPaaS
- IBM Cloud Platform
- SAP Cloud Platform
- Engine Yard
IaaS vs PaaS: What’s the Difference?
IaaS and PaaS are both cloud-based options designed to relieve the company and IT department of their responsibilities when it comes to handling data, software, OS, virtualization, servers, storage, and networking. However, there are several differences when considering IaaS vs PaaS.
An IaaS provider offers a virtual data center to store company information and create platforms for services and application development, testing, and deployment. On the other hand, a PaaS provider offers a virtual platform and the tools to create, test, and deploy applications and services.
IaaS allows end users to manage their applications, the platforms used to build them, and the cloud-based resources to keep everything running — such as OS, middleware, runtime environment, applications, and data. On the other hand, PaaS allows end users to manage the apps they develop with the tools provided by the cloud platform.
End User Security Responsibilities
IaaS users are responsible for securing their data, user access, applications, operating systems, and virtual network traffic. On the other hand, PaaS users are responsible for securing their applications, data, and user access.
Vendor Security Responsibilities
IaaS vendors are responsible for enforcing secure access controls to the physical facilities, IT systems, and cloud services. On the other hand, PaaS vendors are responsible for securing the operating system and physical infrastructure.
Flexibility and Cost
IaaS is very flexible, but it’s the most expensive form of cloud computing. On the other hand, PaaS is flexible within certain limitations, and mid-tier in cost.
Choosing the Right Solution
The biggest advantage of IaaS is that solutions can easily scale with businesses as they grow, or downsize if necessary. On the other hand, the biggest advantage of PaaS is its ability to save developers significant time throughout a project.
In cloud computing, IaaS is often the first step in hybrid-cloud and multi-cloud strategies, whereas PaaS is a step toward Infrastructure as Code. IaaS and PaaS are designed to achieve different goals for different types of users. When it comes to IaaS vs. PaaS, depending on your organization’s needs you may not have to choose. | <urn:uuid:4870c225-9e67-42fa-b716-dc4e6403ab9e> | CC-MAIN-2022-40 | https://www.cioinsight.com/cloud-virtualization/iaas-vs-paas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00391.warc.gz | en | 0.907167 | 1,423 | 3.046875 | 3 |
Biometrics, in the simplest sense, is the measurement of the human body. It is the science of analyzing physical or behavioral attributes distinct to each person in order to validate their individuality. Biometric authentication is steadily replacing traditional passwords and PINs, and according to a study conducted by Strategy Analytics, we can expect rapid adoption within four years.
There are different methods of biometric data gathering and reading, depending on your requirements. No single method does the perfect job for everyone. A government or law enforcement agency might require a highly scalable biometric authentication method, whereas a small business firm could do with another form of authentication. These methods are also known as modalities. Here are 3 of the top modalities to help you decide the best one for your business:
Fingerprint technology is the oldest form of biometric authentication and one of the most accurate. Fingerprints are a near-universal physical trait, which reduces cases of fraud and provides safe access to specific places. The technology is popular and widely used because of this trait, and because the scanners are small and inexpensive. Applications of this method can be found in building and car doors, border control, and other high-security facilities such as military bases and government agencies.
Fingerprint technology will benefit any organization if it is correctly deployed. With this technology, employee identification and workforce management become speedier, more efficient, and more accurate. Employees do not have to carry magnetic stripe cards or remember passwords, as they always carry their fingerprints with them. Fingerprints cannot be lost or easily forged. By implementing fingerprint authentication, organizations can save a huge amount of money by preventing buddy punching and ghost employees. The technology greatly diminishes such fraudulent behavior.
Fingerprint biometric authentication is the most dominant form of biometric technology, and the market is forecast to hit approximately $25 billion by 2020.
- Finger Vein
Hitachi, a Japanese information technology company, first introduced the finger vein recognition system back in 2004, after discovering that near-infrared light can be used to scan finger veins. In principle, it is the same technology as fingerprint recognition, but finger vein technology has a greater accuracy rate: it has a lower False Rejection Rate (FRR) as well as a lower False Acceptance Rate (FAR). Additionally, the finger vein method doesn't require you to place your finger on a scanner surface, so it is more hygienic and has a lower maintenance cost than methods that require contact and thus regular cleaning of the scanner surface. By not touching any scanning surface, the subject leaves no latent prints, greatly diminishing the chance that culprits can duplicate them by forging or lifting. Another vital trait of finger veins is that the pattern is not affected by external conditions: because the technology scans beneath the finger surface, the subject's stored biometric data can be used for a lifetime.
If you are living in Chicago or Montreal, you may see people taking out cash from ATMs by just looking at them. The biometric recognition system these financial organizations adopted is called iris recognition, and it was first created by John G. Daugman. The iris is a structure located in the human eye, and its pattern remains the same throughout a person's life. For this reason, the biometric industry quickly made the technology its own, and it is one of the most trusted authentication methods for identifying a person.
The mechanism of iris recognition is simple. First, the device detects the location of the pupil, followed by the individual's iris and eyelids. Then, to keep only the iris part of the image, unnecessary parts like eyelids and eyelashes are clipped out. The remaining high-quality image of the iris is divided into blocks and converted into biometric values that quantify the image. The data is then kept on a matching server to be used whenever the person needs to be identified by the system.
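The matching step described above is, in Daugman-style systems, a bit-level comparison: two iris codes are judged to belong to the same eye when the fraction of disagreeing bits (the Hamming distance) falls below a threshold. The toy sketch below uses an invented 256-bit code and an illustrative threshold:

```python
def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two iris codes."""
    assert len(code_a) == len(code_b)
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def same_person(code_a, code_b, threshold=0.32):
    # Threshold is illustrative: accept a match when well under a third
    # of the bits disagree, since capture noise flips only a few bits.
    return hamming_distance(code_a, code_b) < threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0] * 32     # toy 256-bit iris code
probe_same = list(enrolled)
probe_same[3] ^= 1                            # one noisy bit on re-capture
probe_other = [1 - b for b in enrolled]       # a completely different eye

print(same_person(enrolled, probe_same))      # True
print(same_person(enrolled, probe_other))     # False
```

Real systems derive codes of thousands of bits from the block-by-block image analysis; only the comparison logic is shown here.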
The iris authentication method is earning many accolades nowadays because it:
- is highly accurate and fast;
- remains unchanged throughout life;
- can perform recognition separately for each eye, since the right and left eyes differ;
- can differentiate even identical twins;
- works at any time, day or night, even when the subject is wearing a mask, hat, spectacles, or gloves (as long as the eyes can be scanned);
- and, most importantly, is hygienic due to its contactless nature.
Additionally, there are other popular biometric authentication methods such as face recognition, hand geometry, retina scans, signature analysis, and voice analysis. Surprising as it sounds, authentication through heart rhythm, ear shape, and eye movements can even be performed nowadays because of technological advancement in the industry. The biometric industry is growing rapidly and globally: according to ReportBuyer, the market is growing at a compound annual growth rate (CAGR) of 17.41% over the forecast period of 2018-2026. The market growth is being driven by advances in mobile biometric systems, increasing incidences of identity theft, and growing applications of biometrics.
Introduction: DevOps vs SRE
After Google introduced the Site Reliability Engineer role into the software development process, the whole IT industry was confused and asked, “What is the difference between DevOps and SRE?”
In short, there is not much difference between DevOps and SRE. In this article, you will get to know how they are both similar to and different from each other. Let’s start with a small introduction to DevOps and SRE.
What is DevOps?
The word DevOps is short for Development and Operations. It refers to the framework followed by IT companies to produce software or applications as per customer requirements. It involves agile practices and automated infrastructure to ensure fast delivery on demand.
When a customer makes a request, the DevOps team starts working on it with the aim of fast delivery. DevOps covers the whole Software Development Life Cycle (SDLC), from planning to final testing.
DevOps teams practice many automation techniques, including machine learning and artificial intelligence, to create continuous, high-quality delivery. DevOps is the direct successor of the agile software development process.
In short, DevOps (Development Operations) refers to a framework adopted to reduce the barriers between traditional development and operations teams.
What is SRE?
SRE refers to Site Reliability Engineering (or Engineer). It is a discipline created by fusing software engineering with the administration, infrastructure, and operations problems of an organization.
An SRE is an administrative expert with programming or software knowledge who creates solutions for operational issues and the development process. SREs help achieve scalable and highly reliable software systems.
It is not common for organizations to have a Site Reliability Engineer; usually only big organizations, or sites hosting massive servers that process large amounts of data, employ one. An SRE shares the major principles of DevOps.
A Site Reliability Engineer spends nearly 50% of their time on “operations” (shortly called “Ops” in the IT field), engaged in work such as on-call duty, monitoring, issue handling, manual supervision, and intervention. The other 50% is spent on the development process.
Difference between DevOps and SRE:
It is hard to tell them apart, as they are used side by side in many combinations. SRE and DevOps are similar in that both adopt these five major principles:
- Reducing organizational issues
- Measuring everything
- Implementation of gradual change and agile development
- Accepting failure
- Automation and innovation.
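The “measuring everything” principle can be made concrete with a small reliability calculation. The article doesn't spell this out, but a standard SRE practice is to track availability against a target and treat the remainder as an error budget; the numbers below are purely illustrative:

```python
def availability(success, total):
    """Fraction of requests served successfully: the basic reliability measure."""
    return success / total if total else 1.0

def error_budget_left(target, success, total):
    """Failures the service may still 'spend' before breaching its target."""
    allowed_failures = (1 - target) * total
    actual_failures = total - success
    return allowed_failures - actual_failures

# A 99.9% availability target over one million requests (illustrative numbers).
print(availability(999_500, 1_000_000))              # approximately 0.9995
print(error_budget_left(0.999, 999_500, 1_000_000))  # roughly 500 failures still allowed
```

A positive remaining budget is a signal that the team can keep shipping changes; a negative one argues for pausing releases and focusing on reliability work.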
However, there are some differences between them.
Comparison Table: DevOps vs SRE
| | Development Operations (DevOps) | Site Reliability Engineering (SRE) |
|---|---|---|
| Definition | The framework adopted in an organization to quickly deliver software or applications as per customer needs. | Combines software engineering concepts with the operations of the organization and acts as a bridge between the development and operations teams. |
| Scope of work | Includes development, remodeling, and fast delivery of applications. | Involves 50% operations and automation work and 50% development work. |
| Goal | Continuous and fast app development. | Ensuring the scalability, performance, and reliability of the software. |
| Focus | Implementing new automation tools and meeting the final customer requirements. | Introducing new methods and automation into DevOps functions. |
| Role in production | DevOps is the first stage of the production process. | SRE is the part of DevOps that focuses more on the automation and performance of the software. |
| Types of approaches | Simple DevOps and DevSecOps (integration with security operations). | SRE can be combined with many DevOps roles, such as release engineer and production engineer. |
| Dependency | DevOps is not dependent on SRE; however, SRE helps to improve DevOps performance. | SRE is dependent on DevOps; the operation of SRE varies based on the existing DevOps operation. |
| Way of processing | Comparatively less automated, with more manual intervention. | Automation monitored by an administrative expert. |
| Knowledge requirements | Wide knowledge of different scripting languages and specialization in any one (preferably Python). | A Site Reliability Engineer should excel in both administration and software engineering. |
| Salary estimate | The annual salary of a DevOps engineer varies from $91,666 to $155,000, based on experience. | As a Site Reliability Engineer, you can expect an annual salary of $78,000 to $90,000. |
In the end, it is all about data and software development. If you have any further questions or ideas, please share them in the comment section below.
March 10, 2017
Virtual reality in healthcare is one of the most anticipated emerging technologies today. By placing a medical professional into a virtual environment, surgeons, nurses, and doctors can gain a greater insight into the anatomy of their patients. For example, using virtual reality, a medical student may be able to explore the workings of the vascular system in a three dimensional educational exercise. The result would be a firmer understanding of the network of arteries and veins in our bodies. That’s just the beginning of virtual reality applications in healthcare.
While an educational tool may be an obvious application for virtual reality, hospitals around the world are already deploying the technology in unexpected ways. Here, we’ve gathered just a handful of the surprising situations in which virtual reality is making a difference in our actual reality.
Virtual Reality In Healthcare
- Providing an Escape for Patients: When a patient is in the hospital for an extended period of time, there isn’t much to do. They must stay in bed and count the days until their recovery. Meanwhile, the only thing the patient has to focus on is his/her pain and condition. In such instances, patients turn to television or books as a form of relaxation and escapism. Virtual reality can fill that role in a much more substantial way. Rather than lying in bed, the patient can explore natural wonders, fly through the sky, or swim beneath the ocean. A research laboratory at Cedars-Sinai Medical Center is engaging in research on the therapeutic value of virtual reality with promising results.
- Increasing Physician Empathy: Empathy hinges on someone’s ability to step into someone else’s shoes. Embodied Labs has developed a VR application called “We Are Alfred” which allows a physician or medical student to experience what it is like to grow old. The user assumes the role of “Alfred,” a 74-year old man with audio-visual impairments, for a period of seven minutes. The narrative involves Alfred’s family confronting him about his impairments and taking him to the doctor for treatment. Ultimately, Alfred is told that the impairments are permanent. It is an emotional journey for the user and gives them a unique perspective on the aging process.
- Stroke Recovery: After a stroke, a patient may be left partially paralyzed, and only through extensive physical therapy can the patient regain some of their motor functions. MindMotionPro is a VR application that allows the patient to “practice” moving their limbs. Essentially, the application will prompt the user to perform simple motor tasks with their hands, like grasping a virtual ball and placing it in a bucket. While the patient may be unable to perform this maneuver with, for example, his/her right arm, he/she can complete the activity with his/her left arm, and the application will show the patient’s paralyzed arm in motion. This sort of “mirroring” is hypothesized to increase the patient’s attention, motivation, and recovery rate.
A New World of Recovery
Clearly there is value in virtual reality technology. The ability to transpose oneself into a whole new world is an excellent tool for providing perspective, increasing empathy, and even speeding recovery. We are only beginning to discover the potential of virtual reality, and in ten years’ time, it may become commonplace in hospitals around the world.
Mindsight is a Chicago IT services provider.
Brain Training done right. Learn more in this article!
A mother and son sit down to a game of memory. The little boy correctly detects three pairs of memory cards, while his mother struggles to do so. The little boy sees the game as a big picture and notes details, while his mother tries to find the right card logically. As a result, she does not make optimal use of her brain.
Make use of it!
With “use it or lose it,” Vera F. Birkenbihl describes a vital learning and thinking process: it is necessary to use your knowledge and the various areas of the brain regularly. Birkenbihl applies the metaphor of a chest of drawers. The more you open a drawer, the easier it is to open. If it remains closed for a long time, it becomes harder to open; the sliding rails grow rusty and stiff.
However, typical memory training such as crossword puzzles is largely pointless. At some stage, writing down the answers becomes an automatic process, and it is no longer necessary to think about it; the control functions of your brain are not needed anymore. Only when using the frontal lobe will you benefit from a training effect. But how do you know whether your frontal lobe is active? You’ll notice it by becoming tired: after some time of effort, you will be tired and need a break.
The neuropsychologist Lutz Jäncke recommends training our attention and concentration regularly as we age. As we age, the brain loses its function and volume, but it can still grow even in adulthood.
Other simple exercises are useful too. For example: when you’re at a red stoplight, think back over your route. How many streets have you crossed? Try to choose a different traffic light each time!
Activate your brain by carrying out new intellectual tasks every day: turn the newspaper upside down and read it, write with your other hand, take another way to work, translate a foreign language text into your mother tongue.
Brain Training with a sense of purpose and fun
Anything you are trying to remember should also make sense. Learning a series of numbers by heart is not considered useful; learning something new must have a purpose if you are to memorize it. For example, learning a new language makes sense: the newly acquired knowledge may be applied during your next holiday or at an upcoming international company presentation.
It is also important for the brain that the training is fun. When you have fun, your brain releases neurotransmitters such as dopamine, which allow impulses to pass between nerve cells and are therefore vital for learning to work.
The best way to train your brain is to learn foreign languages.
In addition to learning an instrument, which activates many different brain areas, foreign language learning is one of the most effective forms of memory training. Older people who speak different languages, and who can use them without struggling, stay mentally fit longer. A person who regularly uses foreign languages, or learns new words and broadens their vocabulary, is a true brain jogger.
Learning a foreign language trains your brain, almost as if it were growing strong mental muscles. These muscles increase your concentration and make you more creative and analytical: a bright person who is well organized and rational. In summary: learning a language can change you and your brain completely and keep you fit.
Yet memory scientists know that cramming alone is not enough. Learning a vocabulary list is a mere memory task. If, on the other hand, you try to understand a whole sentence in the foreign language, your brain cells and synapses have to step up and reflect on it: the whole brain is called upon.
Everyone can learn languages, so don’t pretend you are too old. Motivated adults who invest enough time can learn at the same pace as young people. By the way, researchers have found that acquiring a language and using it regularly can reduce the likelihood of Alzheimer’s disease and delay the onset of dementia by several years.
If you haven’t enjoyed learning a language so far, you might want to change the way you learn. Cramming vocabulary has long since ceased to be the recommended method. Learning a foreign language is a lot more fun with our MOVIE© language courses by Birkenbihl: watch a comedy sitcom and learn a foreign language at the same time. Start the video to test the method. Listen to the speaker and read the word-for-word translation in the lower line. As a result, you learn the meaning of the words without cramming vocabulary (following the Birkenbihl Approach). You also pick up the structure of the language automatically: the use of the words, grammar, sentence structure, and so on. The dialogue and interaction between actors also convey some cultural aspects. And most importantly: it is enormous fun. Watch the video!
In 2014, 71% of organizations reported having been affected by a cybersecurity attack. Phishing and malware scams have proven to be two of the top threats driving businesses to strengthen their network security. The methods hackers use to fool victims into clicking a malicious link or downloading fake software are becoming increasingly difficult to identify as threats. As 2016 commences, businesses will need to reevaluate their network security protocols and adapt them to the evolution of technology and cybercrime. The following sections outline a few of the main problems companies will encounter when assessing their security capabilities for the future.
End-user education and cybersecurity threat vulnerabilities
End-user education has become a primary focus for businesses trying to control cyber threats as hackers continue to use social engineering to blindside users. In addition, experts say the method of personally targeting victims with scams tailored to their online presence will become increasingly popular. As a result, businesses must provide thorough end-user education to their staff on identifying such attempts. Employees are often a company’s most significant security risk. In today’s technology climate of rapid development and continuous change, hackers are given new ways to target and profit from an organization’s network every day. A business can spend thousands of dollars implementing technical security measures. Still, with one click, an employee can allow a hacker to bypass those defenses, paving the way for a network compromise despite the company’s efforts. Typically, a business has several security procedures that specify how employees are to use their devices and browse the Internet. Most users are already wary of scam attempts such as emails claiming they have won a million dollars or phone calls saying they have been selected as a finalist to be on television. However, hackers have harnessed the power of social media to engineer customized methods for misleading users. Phishing emails, malicious phone calls, and fake applications are just a few of the ways cybercriminals have targeted organizations and their staff. For example, an office administrator may receive an email that appears to be from the CEO of the company asking for money to be wired to a particular location. The CEO may be on a business trip, and the context of the request may align with business activities, helping to disguise the scam as a legitimate request. Such scams can be elusive as the email addresses, logos, and employee roles the scammer utilized may fit the company. 
However, grammar errors, spelling mistakes, or outdated branding materials can help reveal a hacker before they gain access to an organization’s systems.
IT resources and skills gap
Hyperconnectivity is already presenting a challenge to businesses trying to secure their networks. As cyberattacks become more sophisticated, companies will need to employ information security professionals with the knowledge and capabilities to keep pace with hackers. A business may have technical support capable of installing firewalls or updating software. Still, there will be a growing necessity for IT professionals who can align cybersecurity with business development. Such individuals will need to be capable of managing a breach and evaluating and understanding what went wrong so they can develop a suitable IT strategy for the future of the organization. In addition, a company should assess the knowledge and skills of its technology resources to determine its ability to take advantage of innovative tools while maintaining stringent security.
IoT security strategies
IoT devices have been primarily experimental in the past, but as this technology sector develops, so will its use in business. For example, smartwatches are predicted to become further integrated into enterprise operations as developers design more process-friendly platforms to host business applications. When such products were being designed, manufacturers were often rushing to distribute the new technology and did not consider the extensive security risks entailed. The increasing use of IoT devices in organizations worldwide will prompt developers to integrate more advanced defense mechanisms and require businesses to design device management and use protocols to address evolving security threats. Consumers have already begun to experience the integration of IoT devices into their daily lives as cars, wearable devices, household appliances, televisions, and more become interconnected. Hackers are eager to exploit these interdependencies. For example, Charlie Miller and Chris Valasek received significant attention in the summer of 2013 for hacking two vehicles using their laptops. In 2015, the two security engineers went further and took control of a friend’s Jeep Cherokee as it was being driven on the highway, working from their laptops while sitting on a couch at home. While people may not be concerned about hackers infiltrating their IoT refrigerator or toaster, losing control of a vehicle elicits far greater concern, and law enforcement officials have already begun to encounter such crimes in the real world. The Houston Police Department recently posted a YouTube video showing two suspects shutting off the alarm of a Jeep Wrangler Unlimited and reversing the vehicle out of the driveway via a laptop, all within 12 minutes.
Features consumers generally consider harmless, like Bluetooth radio, wireless Internet, and even a cell phone connected by a USB cable, can allow hackers to access a car’s internal computer system. Essentially, anything connected to the Internet can be compromised. Organizations must consider how the development and use of IoT devices will change the landscape of network security management: smartwatches, fitness trackers, Internet-connected home devices, and more can all be used to compromise a company’s IT network. Sufficient physical, technical, and administrative safeguards must be in place to secure a business effectively against the threats accompanying IoT technology. We know your business needs to embrace the latest technology developments while keeping its IT network safe from cybercriminals, and we support that. Check out our IT security services and keep your business ahead of the curve and out of the reach of hackers.
Role Of Blockchain Technology In Education
Impact of Blockchain On Education.
Through its tenets of decentralization, transparency, and security, Blockchain has created a new type of internet. It has transformed the way transactions occur and continues to make great strides in virtually every industry imaginable. Although best known for Bitcoin and cryptocurrencies, the scope of Blockchain technology doesn’t end there by any means. It has found application in finance, healthcare, real estate, and many more industries.
In this blog, we will discuss how blockchain technology can have a significant impact on Education.
What Is Blockchain And How Is It Creating an Impact?
The word blockchain comes from the core principles of this technology, where secure blocks of data are bound together in chains using cryptographic principles. The record or ledger of data is time-stamped and immutable and is managed not by a single authority but by a cluster of computers. This enables a verifiable and decentralized record of transactions between two people.
What makes blockchain unstoppable is that this record is public and distributed across many computers, called nodes. This allows the data to be validated by all the computers in the network, making it open and transparent, while ensuring that no single entity controls the flow of information or transactions. A vital aspect of this design is that a digital asset recorded on the chain cannot be duplicated, only transferred. This makes blockchain technology a fortified channel for information flow.
Let us look at a few reasons why blockchain technology is becoming one of the most impactful technologies today.
1. Unmatched Security
Transaction verification is one of the key aspects of blockchain technology. A transaction must be requested through a wallet and sent to all the computers in a blockchain network. Each of these nodes or computers must verify the transaction against a set of predetermined rules in that network. The information is then stored in a block and sealed with a cryptographic hash. Once this hash is verified by the nodes, the information stored in that block is permanent, immutable, and secure. Any alteration of the data by hackers changes the hash and invalidates the entire chain of transactions linked to it, which means the information cannot be altered or misused by anyone. Another added layer of security is public-key cryptography, where there are two keys: a public key, which others can know, and a private key held exclusively by the owner. This way every transaction is secure, and critical information required by financial services, governments, and other entities is protected.
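To make the hash-linking concrete, here is a minimal sketch in Python. Everything here is illustrative: the block layout and field names are invented, and a real blockchain adds consensus rules, digital signatures, and peer-to-peer distribution on top of this core idea.

```python
import hashlib
import json

def block_hash(data, prev):
    # Hash the block's contents together with the previous block's hash,
    # so each block commits to the entire history before it.
    payload = json.dumps({"data": data, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev, "hash": block_hash(data, prev)})

def verify(chain):
    # Re-derive every hash; an edit to any earlier block breaks the links.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["data"], block["prev"]):
            return False
    return True

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))              # True: every link checks out

chain[0]["data"]["amount"] = 500  # tamper with recorded history
print(verify(chain))              # False: the stored hash no longer matches
```

Note that tampering is only detected here, not prevented; in a real network, prevention comes from every node holding its own copy of the chain and rejecting blocks whose hashes fail to check out.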
2. Better Transparency
Blockchain technology makes transaction histories transparent. As a type of distributed ledger, blockchain permits changes to be made only through the consensus of all network participants who share the same documentation. Even a single change in the transaction record would mean that all subsequent records have to be altered. A change would also require the collusion of everyone in the network, which means everyone is aware of any change. This ensures transparency and places a high level of accountability on everyone who handles the document. The transaction history also provides an audit trail showing where the information originated, when transactions occurred, and when changes were made. This helps ensure the authenticity of assets and prevent fraud.
3. Increased Cost and Time Efficiencies
One of the main reasons for time and cost efficiencies with blockchain is that it cuts out the need for third-party mediation. Instead of relying on third-party intermediaries for verification and the movement of information, blockchain uses cryptology to enable direct transactions between two parties. Proving the authenticity of ownership of an asset is done through the blockchain and the nodes in the network instead of the time-consuming process of knocking on the doors of central authorities or other intermediaries. Recordkeeping through a single ledger also cuts out the clutter of error-prone manual processes, thereby saving time and money.
Blockchain Revolutionizes Education
As discussed, security, transparency and time and cost efficiencies are some of the reasons why blockchain has found application across industries. In the words of Don and Alex Tapscott, authors of Blockchain Revolution (2016): “The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.”
Now let’s see how this applies in the field of education.
1. Student E-portfolio
The importance of credentialing in the field of education has become more apparent in this day and age, when holding a degree just doesn’t cut it anymore. Employers want to see evidence of what the student is capable of beyond academic milestones like obtaining a degree. Speaking about the need for universities to keep up with this demand, David Schejbal, the Vice President and Chief of Digital Learning of Marquette University, says: “What we really need is a broad set of credentials that are able to provide the kind of just-in-time learning that many folks need throughout their lives. We need transcripts that help students document, in some verifiable way, what they’ve demonstrated they can do.”
Blockchain makes this possible by providing secure and verified credentialing and record keeping of student information. This is provided as a model where all of a student’s competency indicators are collected and shared securely. These indicators will include badges, certificates, letters of recommendations, citations and other details which add to the credentials of a student.
The Ledger Project is an interesting concept that looks ahead at a revolutionized education system in the year 2026. In this scenario, all your learning credentials are tracked on “Edublocks” through blockchain technology. This will help employers match their exact needs with the right student candidate and also help students obtain scholarships and funding for their education.
Blockchain is finding practical use in this context even today: Sony Global Education uses blockchain to securely share student records, and MIT has published an open standard for verifiable digital records.
2. Cost Savings on Courseware and Research material
Another area where blockchain can be useful in education is in making courseware accessible and affordable. One way is by cutting out intermediary fees in the purchase of software. EBooks, for example, can be fitted with blockchain code and shared through the network. This would eliminate the fees charged by portals like Amazon, as well as credit card fees. The books could be accessed straight from the authors themselves, which would mean major savings for both students and authors. Video tutorials and much more can be accessed the same way.
3. Copyright And Digital Rights Protection
The unmatched security provided by blockchain is an asset when it comes to protecting intellectual property. WIPO’s Patent Cooperation Treaty (PCT) passed a record-breaking filing mark in 2018 with 253,000 patent applications, an increase of 3.9% over 2017. The amount of intellectual property being created is extraordinary, and many researchers, academics, and students contribute to the innovation pool daily. Blockchain will help them create, share, and control their intellectual property in the way that they want. San Jose State University is an example of how blockchain is being used to create “community content repositories”. With its Library 2.0 movement, the university is enabling effective curation of digital content and protection of digital rights.
There are many other ways in which blockchain technology is influencing the Education sector. Reach out to us and find out how this can be leveraged for your company. | <urn:uuid:28c2b348-b682-4db3-8af5-905383ef2011> | CC-MAIN-2022-40 | https://www.fingent.com/blog/role-of-blockchain-technology-in-education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00591.warc.gz | en | 0.933549 | 1,508 | 3.109375 | 3 |
Learn how to generate novel solutions and insights to problems, known as strong "light bulb moments"
Our brains are our most valuable assets, so optimizing them is paramount to achieving leadership success. In essence, we want to generate strong "light bulb moments," which are novel solutions and insights.
Learn how to become an outstanding mental athlete by planning "think time" and by cultivating and growing ideas in the right environment.
Lack of Insight (LOI) Syndrome and Weak Light Bulb Moments
Our brains can produce ideas that are truly unique and innovative which is valuable in a world run on innovation and change.
Leaders must do a lot of strategic thinking: a special kind of thinking aimed at solving a problem or capitalizing on an opportunity. Curiosity means asking key, relevant questions without imposing underlying assumptions or limits.
All leaders need to keep on top of developments in their industry, but avoid scheduling learning time during regular working hours, when your mind may not be in the right mode to maximize learning and create a strong light bulb moment.
- A Trojan horse is anything that introduces risk to an organization through something that appears to be positive
- Measuring the wrong variables is a Trojan horse that infiltrates virtually every organization
- This phenomenon has a real cost that can be measured – and avoided
The Trojans stood at the walls, drunk from victory celebrations, having watched the Greek fleet set sail in apparent retreat after nearly 10 years of constant warfare. They had little reason to suspect treachery when they saw the massive wooden horse just outside their gates, apparently a gift offering from the defeated Greeks. Because of their confidence, or overconfidence, they opened the gates and claimed the wooden horse as the spoils of war.
Later that night, as the Trojans lay in a drunken stupor throughout the city, a force of Greek soldiers hidden in the horse emerged and opened the gates to the Greek army, which had not retreated but had in fact lain in wait just beyond sight of the city. Swords drawn and spears hefted, the Greek soldiers spread throughout the city and descended upon its people.
The end result is something any reader of The Iliad knows well: the inhabitants of Troy were slaughtered or sold into slavery, the city was razed to the ground, and the term “Trojan horse” became notorious for something deceitful and dangerous hiding as something innocuous and good.
Organizations are wising up to the fact that quantitative analysis is a vital part of making better decisions. Quantitative analysis can even seem like a gift, and used properly, it can be. However, the act of measuring and analyzing something can, in and of itself, introduce error – something Doug Hubbard calls the analysis placebo. Put another way, merely quantifying a concept and subjecting the data to an analytical process doesn’t mean you’re going to get better insights.
It’s not just what data you use, although that’s important. It’s not even how you make the measurements, which is also important. The easiest way to introduce error into your process is to measure the wrong things – and if you do, you’re bringing a Trojan horse into your decision-making.
Put another way, the problem is an insidious one: what you’re measuring may not matter at all, and may just be luring you into a false sense of security based on erroneous conclusions.
The One Phenomenon Every Quantitative Analyst Should Fear
Over the past 20 years and throughout over 100 measurement projects, we’ve found a peculiar and pervasive phenomenon: that what organizations tend to measure the most often matters the least – and what they aren’t measuring tends to matter the most. This phenomenon is what we call measurement inversion, and it’s best demonstrated by the following image of a typical large software development project (Figure 1):
Some examples of measurement inversion we’ve discovered are shown below (Figure 2):
There are many reasons for measurement inversion, ranging from the innate inconsistency and overconfidence in subjective human assessment to organizational inertia where we measure what we’ve always measured, or what “best practices” say we should measure. Regardless of the reason, every decision-maker should know one, vital reality: measurement inversion can be incredibly costly.
Calculating the Cost of Measurement Inversion for Your Company
The Trojan horse cost Troy everything. That probably won’t be the case for your organization, as far as one measurement goes. But there is a cost to introducing error into your analysis process, and that cost can be calculated like anything else.
We uncover the value of each piece of information with a process appropriately named Value of Information Analysis (VIA). VIA is based on the simple yet profound premise that each thing we decide to measure comes with a cost and an expected value, just like the decisions these measurements are intended to inform. Put another way, as Doug says in How to Measure Anything, “Knowing the value of the measurement affects how we might measure something or even whether we need to measure it at all.” VIA is designed to determine this value, with the theory that choosing higher-value measurements should lead to higher-value decisions.
Over time, Doug has uncovered some surprising revelations using this method:
- Most of the variables used in a typical model have an information value of zero
- The variables with the highest information value were usually never measured
- The most measured variables had low to no value
The lower the information value of your variables, the less value you’ll generate from your model. But how does this translate into costs?
A model can calculate what we call your Overall Expected Opportunity Loss (EOL): the expected loss, averaged across the outcomes that could result from your current decision, if you decide without measuring any further. We want to get the EOL as close to zero as possible. Each decision we make can either grow the EOL or shrink it. And each variable we measure can influence those decisions. Ergo, what we measure impacts our expected loss, for better or for worse.
If the variables you’re measuring have a low information value – or an information value of zero – you’ll waste resources measuring them and do little to nothing to reduce your EOL. The cost of error, then, is the difference between your EOL with these low-value variables and the EOL with more-valuable variables.
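To make the arithmetic concrete, here is a rough Python sketch of EOL for a toy two-action decision. The actions, probabilities, and payoffs below are invented for illustration; a real Value of Information Analysis models many uncertain variables with full probability distributions, but the core quantity is the same.

```python
def expected_opportunity_loss(payoffs, probs):
    # EOL of each action: the probability-weighted shortfall versus
    # whatever the best action would have been in each scenario.
    best_per_scenario = [max(col) for col in zip(*payoffs.values())]
    return {
        action: sum(p * (best - v)
                    for p, best, v in zip(probs, best_per_scenario, vals))
        for action, vals in payoffs.items()
    }

# Hypothetical decision: invest in a project or hold, under two scenarios.
probs = [0.6, 0.4]                  # P(success), P(failure)
payoffs = {"invest": [12.0, -8.0],  # payoff in $M under each scenario
           "hold":   [0.0, 0.0]}

eol = expected_opportunity_loss(payoffs, probs)
best_action = min(eol, key=eol.get)  # minimizing EOL = maximizing expected value
print(best_action)                                 # invest
print({a: round(v, 1) for a, v in eol.items()})    # {'invest': 3.2, 'hold': 7.2}
```

The EOL of the best action is also the most a measurement could ever be worth: measuring a variable has value only to the extent that what you learn could flip the choice of action and shrink this number.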
Case in point: Doug performed a VIA for an organization called CGIAR. You can read the full case study in How to Measure Anything, but the gist of the experience is this: by measuring the right variables, the model was able to reduce the EOL for a specific decision – in this case, a water management system – from $24 million to under $4 million. That’s a reduction of 85%.
Put another way, if they had measured the wrong variables, then they would’ve incurred a possible cost of $20 million, or 85% of the value of the decision.
The bottom line is simple. Measurement inversion comes with a real cost for your business, one that can be calculated. This raises important questions that every decision-maker needs to answer for every decision:
- Are we measuring the right things?
- How do we know if we are?
- What is the cost if we aren’t?
If you can answer these questions, and get on the right path toward better quantitative analysis, you can be more like the victorious Greeks – and less like the citizens of a city that no longer exists, all because what they thought was a gift was the terrible vehicle of their destruction.
Learn how to start measuring variables the right way – and create better outcomes – with our hybrid learning course, How To Measure Anything: Principles of Applied Information Economics. | <urn:uuid:2e68035d-c651-409a-8bf7-20b2af0292a8> | CC-MAIN-2022-40 | https://hubbardresearch.com/trojan-horse-how-a-phenomenon-called-measurement-inversion-can-massively-cost-your-company/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00591.warc.gz | en | 0.951023 | 1,418 | 2.625 | 3 |