Welcome to this tutorial about SAP Work Center Capacity. This tutorial is part of the SAP PP training course. You will learn what capacity is in SAP PP, how to use capacity categories, the difference between available and required capacity, intervals of available capacity, and the use of shifts and individual capacities. We will also discuss different units of measure for capacities and the cumulation of capacities in hierarchies of work centers.

What is Capacity?

Capacity is the capability to do certain work. In SAP, capacity is closely related to work centers, i.e. the capability of a work center or machine to perform certain operations. In general, capacity is measured in units of time, but other units of measure can also be used depending on the production business processes.

Role of Capacity in SAP PP

In SAP Production Planning, capacity data in a work center is used to calculate:
- Required capacity for an order or an operation
- Available capacity of a work center

Integration with Other Business Functions

SAP work centers are integrated with the following business functions (SAP modules):
- HR: We can assign an HR object to a work center, which helps determine the qualification required of an employee. For example, certain operations might need special skillsets such as a CNC operator/programmer. We can assign this information to the work center.
- PM: When a particular machine/work center is scheduled in a PM order for maintenance activities, its capacity is locked for production use.

SAP Capacity Category

An SAP capacity category is used to distinguish various capacities within a work center. If you want to maintain different capacities for labor and machine in a work center, you can define them by means of capacity categories:
- Machine capacity
- Labor capacity

Consider a lathe machine which requires 3 labor units to perform its operations and is available 9 hours per day. In this case, we have the following capacities:

| Capacity Category | Available hours | No of Capacity | Total hours |
| Machine | 9 | 1 | 9 |
| Labor | 9 | 3 | 27 |

SAP Work Center Capacity

In general, capacity means the available hours of a machine or labor. These available hours exclude breaks or halt hours. Assume that a lathe machine is available in a production plant from 9 AM to 9 PM and there is a break time of 2 hours for lunch and evening snacks. The capacity of the lathe machine is calculated as follows:

Available Capacity = 12 hours (total hours) – 2 hours (break)
Available Capacity = 10 hours

Required capacity is the number of hours required at a work center to produce a particular quantity of an assembly. The SAP system uses the following information to calculate the required capacity for a work center:
- Capacity formula: it uses standard value key parameters (Basic Data tab of the work center master data screen).
- Operation time in a production order: it uses the time defined in the routing operation for the base quantity.
A process in a steel cutting unit uses a plasma cutting machine with the following standard value parameters:
- Setup time
- Machine time
- Labor time

Let's say that in the routing we have maintained the following times for the above parameters:
- Base quantity: 120 PCs
- Setup time: 30 mins
- Machine time: 90 mins
- Labor time: 90 mins

The capacity formula defined for this machine is as follows:

Machine time * Quantity to be produced / Base quantity

When a production order for a quantity of 300 PCs is created, capacity is calculated as follows:

Required Capacity = 90 mins * 300 PCs / 120 PCs
Required Capacity = 225 mins

It is also possible to have several similar machines to increase the capacity. SAP has a field called "Number of Individual Capacities", which is the number of individual machines available with similar capacity. When the SAP system calculates required capacity, it divides the result by the number of individual capacities in the formula. When the SAP system calculates available capacity, it multiplies the result by the number of individual capacities. The capacity formula defined for the machine with 2 individual capacities is as follows:

Machine time * Quantity to be produced / Base quantity / No of splits

Required capacity = (90 mins * 300 PCs / 120 PCs) / 2
Required capacity = 112.5 mins

We will return to these formulas in a short calculation sketch at the end of this section.

Work centers that have the same capacity type can be assigned a pooled capacity. This capacity is defined globally and assigned to all the work centers. For example, let's say you have 7 lathe machines with similar capacity.

Creation of Pooled Capacity

Pooled capacity can be created by accessing the following menu path.

Assignment of Pooled Capacity

Once a pooled capacity is created (in transaction CR11), it can be assigned to a work center in transactions CR01/CR02 on the Capacity tab.

Interval of Available Capacity

When the standard available capacity of a machine needs to be replaced for certain periods, we can use an interval of available capacity. For example, a work center has a standard available capacity of 8 hours per day, but from 20 June to 28 June, due to higher demand, the company wants to run this machine for 16 hours per day. In this case we can define the interval of capacity for this period as 16 hours.

Create Interval of Available Capacity

To create an interval of available capacity, go to the header part of a capacity (transaction CR11). Then, click on the button shown below. Next, click on the button to insert a new interval. The system will add a new line for the interval. Now press the Enter key. You can see that during the period from 20 June to 28 June (the interval of capacity) the available capacity is 16 hours, whereas for the rest of the days it is 8 hours (the standard available capacity).

Shift Definitions, Shift Sequences, and Breaks

Capacity and planning calculations can also take shifts into account. Let's see how to define shifts and breaks in SAP. The following parameters are maintained in the definition of a shift:
- Validity period of a shift (start date and end date)
- Start time and end time

A company operates with 2 shifts:
- "A" shift: 06:00 to 14:00
- "B" shift: 14:00 to 22:00

The shift definition is maintained as shown in the picture below. Shift definition in SAP is part of customizing, which is performed in transaction OP4A. Next, we should define the sequence of our two shifts. The sequences of the shifts are defined in the same transaction, as shown in the picture below. Additionally, we can define break times for each shift and assign them in the shift definition later.
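To keep the arithmetic above in one place, here is a small illustrative calculation. It is written in Python rather than anything SAP-specific; the function names are invented for this sketch, and the figures come from the lathe and plasma cutting examples in this tutorial.

```python
# Illustrative only: not SAP code, just the capacity arithmetic from the text.

def available_capacity(total_hours, break_hours, individual_capacities=1):
    """Available capacity = (total hours - breaks) * number of individual capacities."""
    return (total_hours - break_hours) * individual_capacities

def required_capacity(machine_time, order_qty, base_qty, individual_capacities=1):
    """Required capacity = machine time * order quantity / base quantity / splits."""
    return machine_time * order_qty / base_qty / individual_capacities

# Lathe machine: 12 hours in the plant, 2 hours of breaks -> 10 hours available.
print(available_capacity(total_hours=12, break_hours=2))                 # 10

# Plasma cutter: 90 min machine time per 120 PCs, order of 300 PCs.
print(required_capacity(machine_time=90, order_qty=300, base_qty=120))   # 225.0 mins
# Same order with 2 individual capacities (splits).
print(required_capacity(90, 300, 120, individual_capacities=2))          # 112.5 mins
```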
Use of Shift Sequences in a Work Center

Shift timings and breaks are used at the time of capacity and scheduling calculations. By default, the SAP system distributes the break time equally over the working hours. When we want the system to use the exact break times defined in the shift, the indicator "Scheduling with Breaks" needs to be maintained in the production order type. Note: this topic will be discussed in more detail in the tutorial about production orders.

Individual Capacity

Individual capacity in SAP is used for detailed capacity requirements planning. This can be explained with the following example:
- Let's say you have lathe machines A and B.
- Machine A is available from 06:00 to 17:00.
- Machine B is available from 06:00 to 22:00.

There are two possible ways of mapping this scenario in SAP:
- Create two work centers with different capacities.
- Create only one work center, but create two individual capacities inside the work center capacity.

To see how it works in SAP, go to transaction CR11. Select each item in the list and click on the corresponding button. The SAP system will show individual capacities for machines A and B. When creating a production order, it is necessary to assign the individual capacity that will be used to perform the operation.

Capacity in a Different Unit of Measure

In general, capacity is defined in the unit "Hour", but a different unit of measure (UOM) can also be maintained, for example Cubic Meter (M3), KG, or TON. Consider the casting industry, where castings are formed from molten metal from a furnace. Here the furnace capacity can also be defined in TON, but the relation between TONs and hours must be maintained in the work center capacity. This is used to evaluate the capacity in TONs.

Capacity and Work Center Hierarchy

Capacity can be cumulated by means of a work center hierarchy. A hierarchy of work centers enables cumulation of available capacities from child work centers to their superior work center. There are two types of cumulation available in SAP:
- Dynamic cumulation
- Static cumulation

Dynamic cumulation cumulates both available capacity and required capacity, whereas static cumulation cumulates only the available capacity of a hierarchy of work centers.

Static Cumulation and the Period Pattern Key

In static cumulation, the available capacity cumulation is calculated using a period pattern key. The period pattern key is used for:
- Cumulating available capacity
- Displaying available capacity

In static cumulation, the cumulated available capacity of a superior node is not changed automatically when there is a change in the available capacity of a subordinate work center, so we must cumulate available capacity periodically. Definition of the period pattern key is a customizing activity performed in transaction OP11.

Let's imagine that you want to cumulate available capacity as three segments on the time axis. The first segment cumulates the next 7 days of available capacity. The second segment cumulates the next 3 weeks from the last period of segment 1. The third segment cumulates 1 month of available capacity from the last period of segment 2.

The work center hierarchy for the above diagram is shown in the picture below. Here, the available capacities of work centers A and B are cumulated to a higher hierarchy level. That is, the cumulated available capacity is displayed in the available capacity of work center C.
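As a rough illustration of how cumulation rolls capacity up a hierarchy, here is a short sketch. It is not SAP code or behavior, just the idea of summing child work center capacities into the superior work center, using the A/B/C hierarchy from the text (the 12 and 18 hour figures appear in the next section).

```python
# Illustrative sketch (not SAP code): static cumulation of available capacity
# up a simple work center hierarchy.

hierarchy = {"C": ["A", "B"]}           # work center C is the superior node of A and B
available_hours = {"A": 12, "B": 18}    # available capacity of the leaf work centers

def cumulate(node):
    """Return the cumulated available capacity of a node and all of its children."""
    children = hierarchy.get(node, [])
    own = available_hours.get(node, 0)
    return own + sum(cumulate(child) for child in children)

print(cumulate("C"))  # 30 hours, displayed as the available capacity of work center C
```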
Creation of a period pattern key is a customizing activity in transaction OP11. Periods in a period pattern key may be work days, calendar days, weeks, months, or years. Let's say that the available capacity of SAP work center A is 12 hours and that of work center B is 18 hours. Then the available capacity of work center C is calculated as 30 hours. If the period pattern key is defined as a week, then the available capacity for a week is cumulated.

Dynamic Cumulation

In dynamic cumulation, the SAP system cumulates the available capacities and the capacity requirements for all capacity categories of the work centers specified in a hierarchy. For available capacity cumulation, this is defined in the Option Profile, a customizing activity (transaction OPA3). For required capacity cumulation, it is defined in the Evaluation Profile, also a customizing activity (transaction OPD0).

Let's consider the same example given for static cumulation. When the lathe machines are loaded with capacity by an order (say 4 hours each), dynamic capacity cumulation cumulates this required capacity (4 hours + 4 hours) into work center C of the hierarchy as 8 hours. This will be discussed in more detail in the capacity requirements planning tutorial.

Indicators Influencing Capacity Planning

Finally, let's discuss several important checkboxes in the SAP work center capacity master data (transaction CR11).

Relevant to finite scheduling indicator: when this indicator is activated, the SAP system considers the existing capacity load while calculating available capacity. Let's say machine A, which has an available capacity of 8 hours, was scheduled for an order on 10 July for 4 hours 30 minutes. When a new order is created for this machine on the same date, the system will propose only the remaining hours (3 hours 30 minutes).

Can be used by several operations indicator: when this indicator is activated, that particular work center capacity can be utilized by more than one operation. This allows the SAP system to consume the available capacity for other production orders. For example, let's say you defined a storage tank as a work center in the chemical industry. When two production orders try to use this storage tank to fill the finished chemical, the SAP system will only allow this when the indicator is activated.

Long term planning indicator: when this indicator is activated, this work center capacity can also be used to calculate the capacity requirements of long-term planning. Note: long-term planning in SAP is also called simulation MRP, where we can upload a demand simulation to calculate the required raw materials and capacity for an assembly/subassembly.

Did you like this tutorial? Have any questions or comments? We would love to hear your feedback in the comments section below. It would be a big help for us, and hopefully it's something we can address in future improvements of our free SAP PP tutorials.
For today's tech blog we're going to discuss open source software. What is it, and how is it different from other types of software? In simple terms, open source software is a term for any program whose source code has been publicly released, allowing others to modify it however they like. It's easiest to understand open source software in the context of other types of software, so here's a refresher on the other major types.

Licensed Software

Licensed software is any computer application that requires licensing to be used. This licensing can be as simple as purchasing a digital download code for a single computer, or it can be a complex, enterprise-wide licensing agreement. Nearly all licensed software must be purchased to be used legally. Microsoft Office has long been an example of licensed software.

Cloud or Subscription-Based Software

Technically a subset of licensed software, cloud or subscription-based software requires a recurring subscription-style payment. If your payment lapses, the software may become inoperable. Examples here include Adobe Creative Cloud and Microsoft's newer offering, Office 365.

Freeware

Freeware is a broad term for software that's distributed freely, with no expectation of payment for personal use. Install these at home as much as you like. Beware that some freeware is only free for personal use, though. If you want to use it in a corporate setting, you may need a license.

Open Source Software

Open source software takes the concept of freeware to the next level. To be considered open source, both the software and its source code must be freely available. Users are permitted (and even encouraged) to modify the source code to improve the software or to customize it for their own needs.

Open source as discussed above is a concept or philosophy. Developers who wish to release open source software with a sort of seal of approval can do so through the Open Source Initiative. This group offers a certification mark, Open Source (yes, it's just the term we've been discussing, but with capital letters), which verifies that a piece of software meets certain qualifications. To receive the Open Source designation, a piece of software must meet a set of criteria. Additionally, provision is made so that the original creators can demand that future, customized versions of the software are clearly distinguished from the original, through naming or versioning.

By reading this tech blog post, you now understand what open source software is. If you're wondering what it can do for you or your business, contact us today. We're glad to help!
The MT Energy Savings Chart introduces a new way for a user to understand the current temperature/humidity status in an aggregate view across all sensors in a particular network. There are 2 places where a user can see an Energy Savings Chart:

- Environmental Overview page - shows the collected data of the current metrics across all MT10 sensors
- Sensor Details page - plots the data point of the latest metric on an individual basis

Basics of the Energy Savings Chart

The Energy Savings Chart is based on a psychrometric chart. Psychrometric charts plot wet bulb and dry bulb data for air-water vapor mixtures at atmospheric pressure. Here's an example of a simple psychrometric chart with the assumption of standard air pressure at sea level.

Explaining the Chart

The x-axis is dry bulb temperature (in °C or °F). The y-axis is the humidity ratio (in grams of moisture per kg of dry air, or grains* of moisture per lb of dry air). The curved lines plot the relationship between dry bulb temperature and humidity ratio for air at various relative humidity levels: 20%, 40%, 60%, 80%, and 100%. For the purpose of the Energy Savings Chart we exclude the wet bulb temperature, as it doesn't affect the energy envelope.

*Grain is an imperial unit; 1 pound = 7000 grains. This unit is chosen for better scalability of the chart.

- Dry bulb temperature - the temperature of air measured by a thermometer freely exposed to the air, but shielded from radiation and moisture. The MT10 records dry bulb temperature.
- Wet bulb temperature - the temperature of a parcel of air cooled to saturation (100% relative humidity) by the evaporation of water into it, with the latent heat supplied by the parcel. For the purpose of this feature, wet bulb temperature is ignored for the sake of simplicity.
- Humidity ratio - the ratio of the weight of moisture to the weight of dry air in the air-vapor mixture. Its unit is grams of vapor per kg of dry air.
- Psychrometric chart - at its most basic, a chart that plots wet bulb and dry bulb data for air-water vapor mixtures at a constant atmospheric pressure.

Reading the Chart

With the MT10 Temperature/Humidity Sensor, the dashboard collects a data point for dry bulb temperature and the relative humidity of the area. Based on that, the air's mixture ratio can be determined.

Example: Suppose an MT10 is reporting a temperature of 70°F (21°C) and 40% humidity. Find the 40% relative humidity line, then follow it along to 70°F on the x-axis, and plot a point there. The y-value of that point is the mixture ratio (about 6.29 g vapor/kg dry air).

Understanding the Envelopes

Along with the data points from all sensors across the network, the Energy Savings Chart also has 4 envelopes defined. These envelopes are based on the recommendations of ASHRAE standards for maintaining the temperature and humidity conditions of different types of data centers. Below are the 4 recommended ranges:

- A1 [15-32°C, 20-80% RH] - typically a data center with tightly controlled environmental parameters
- A2 [10-35°C, 20-80% RH] - typically an information technology space with some control of environmental parameters
- A3 [5-40°C, 8-85% RH] - typically a storage room with backup drives
- A4 [4-45°C, 8-90% RH] - least environmentally sensitive areas

Customizing the Chart and Envelopes

The Energy Savings Chart defaults to showing all MT10 sensors across the network, and the envelope defaults to the A2 type.
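For readers who want to reproduce the chart reading programmatically, here is a small, approximate sketch. It is not the Meraki dashboard's implementation; it uses the Tetens approximation for saturation vapor pressure, so the result differs slightly from the value read off the chart above.

```python
from math import exp

def humidity_ratio(dry_bulb_c, rel_humidity, pressure_kpa=101.325):
    """Humidity ratio (g moisture / kg dry air) from dry bulb temperature and RH.

    Uses the Tetens approximation for saturation vapor pressure, so expect
    small differences from values read off a full psychrometric chart.
    """
    p_ws = 0.61078 * exp(17.27 * dry_bulb_c / (dry_bulb_c + 237.3))  # saturation pressure, kPa
    p_w = rel_humidity * p_ws                    # partial pressure of water vapor, kPa
    w = 0.622 * p_w / (pressure_kpa - p_w)       # kg vapor per kg dry air
    return w * 1000                              # convert to g/kg

# MT10 reading of 70 °F (about 21.1 °C) at 40% RH -> roughly 6.2 g/kg,
# close to the ~6.29 g/kg value quoted in the example above.
print(round(humidity_ratio(21.1, 0.40), 2))
```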
The dashboard also allows users to customize the chart using the Chart Settings button under the 3-dot menu of the chart. The following settings can be customized by the user:

- Visible Tags - the user has the ability to show only certain sets of sensors using the Tags option. For example, in a data center, all racks have an MT10 near the inlet of the rack. Scoping the visible tags to only those sensors can give a user a good idea of how warm/dry the air is in the hot aisle of the racks.
- Energy Savings Envelope - here the user can change the envelope as explained in the section above. Selecting the appropriate envelope gives users an allowable range for controlling the environmental conditions of the area.

Once the data points from the MT sensors are plotted on the Energy Savings Chart, the dashboard also determines a Conformance Score, which is essentially the percentage of sensors adhering to the range set by the envelope. For example, out of 100 sensors deployed in an IDF, if 2 of the sensors are recording unusually warm or cold temperatures, the dashboard will mark those 2 sensors as Non-Conforming and the Conformance Score will drop to 98%.

Clicking on the Non-Conformance button navigates to a predefined filter of the sensors that are non-conforming. Using this data, a user can determine at a glance exactly which sensors are outside the required range.
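Here is a minimal sketch of the conformance idea described above. It is illustrative only, not the dashboard's actual logic; the sensor readings are invented for the example, while the envelope bounds are the A2 range quoted earlier.

```python
# Illustrative sketch: checking sensor readings against an ASHRAE-style envelope
# and computing a conformance score (percentage of sensors inside the envelope).

ENVELOPE_A2 = {"temp_c": (10, 35), "rh_pct": (20, 80)}

def conforms(reading, envelope=ENVELOPE_A2):
    t_lo, t_hi = envelope["temp_c"]
    rh_lo, rh_hi = envelope["rh_pct"]
    return t_lo <= reading["temp_c"] <= t_hi and rh_lo <= reading["rh_pct"] <= rh_hi

def conformance_score(readings, envelope=ENVELOPE_A2):
    """Percentage of sensors whose latest reading falls inside the envelope."""
    ok = sum(1 for r in readings if conforms(r, envelope))
    return 100.0 * ok / len(readings)

readings = [{"temp_c": 22, "rh_pct": 45}, {"temp_c": 41, "rh_pct": 30},  # second one too warm
            {"temp_c": 18, "rh_pct": 55}, {"temp_c": 25, "rh_pct": 50}]
print(conformance_score(readings))  # 75.0 -> 3 of 4 sensors conform
```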
One Tool Example

Buffer has a One Tool Example. Visit Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.

Use Buffer to take any polygon or polyline spatial object and expand or contract its extents by a user-specified value. This image shows a positive buffer of 1 mile and a negative buffer of -1 mile.

Configure the Tool

- Specify the Spatial Field that contains the spatial object to be buffered.
- Include in Output: the user can choose whether or not to include the original spatial object in the output stream. The default is unchecked, so the object is not included.
- Generalize to 1% of Buffer size: selected by default. This optimizes speed by cutting down the number of nodes in the buffered object.
- Set the Buffer Size parameters:
  - Specific Value: input a numerical value and all records are buffered to this size.
  - From Field: select a numerical field in the connection stream and each record is buffered by the value of that field.
  - Units: choose to buffer the object in units of Miles or Kilometers.
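The Buffer tool is configured through the Designer interface rather than code, but if it helps to see the underlying geometric operation, the sketch below does something equivalent with the open-source shapely Python library. The shapes, distances, and the simplify step standing in for generalization are all assumptions made for illustration; this is not Alteryx code.

```python
# Not Alteryx code: a sketch of positive and negative buffering with shapely.
from shapely.geometry import Polygon

square = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])  # units are arbitrary here

expanded = square.buffer(1)     # positive buffer: extents grow by 1 unit
contracted = square.buffer(-1)  # negative buffer: extents shrink by 1 unit

# Loosely analogous to "Generalize to 1% of Buffer size":
# simplifying the buffered shape to cut down the number of nodes.
simplified = expanded.simplify(0.01 * 1)

print(square.area, expanded.area, contracted.area)  # 100.0, ~143.1, 64.0
```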
Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor's ability to separate the machine's resources from the hardware and distribute them appropriately.

Virtualization helps you get the most value from previous investments. Virtualizing resources lets administrators pool their physical resources, so their hardware can truly be commoditized. So the legacy infrastructure that's expensive to maintain, but supports important apps, can be virtualized for optimal use. Administrators no longer have to wait for every app to be certified on new hardware; just set up the environment, migrate the VM, and everything works as before.

During regression tests, a testbed can be created or copied easily, eliminating the need for dedicated testing hardware or redundant development servers. With the right training and knowledge, these environments can be further optimized to gain greater capabilities and density.

Virtualization is an elegant solution to many common security problems. In environments where security policies require systems separated by a firewall, those 2 systems could safely reside on the same physical box. In a development environment, each developer can have their own sandbox, immune from another developer's rogue or runaway code.
Edge computing has already begun to make its mark on a variety of different industries across the planet. From manufacturing and distribution to healthcare and retail services, there seem to be few places left in which the presence of some form of edge computing cannot be felt. One of the areas in which edge computing is beginning to transform the landscape is farming and agriculture.

As we move into an age in which technology is becoming increasingly interconnected with our infrastructure and our work and personal lives, there are various reasons to adopt new, cutting-edge innovations such as edge computing, automation, and artificial intelligence technologies. When you consider some of the challenges we face today, and those we will face in the not so distant future, utilizing technologies such as these in an industry as essential to us all as farming and agriculture makes perfect sense.

In this article, we'll be looking at five edge computing use cases within farming and agriculture and detailing how they work and what potential benefits they could bring. All of the scenarios mentioned are taken from real-world examples that are actively being developed or deployed within an agricultural or smart farming environment.

1) Video Analytics

Smart farming is all about collecting the right data and using it to optimize your resource planning and operation. Artificial vision technologies using artificial intelligence are making waves in the agriculture sector just like they are in any other. Smart agriculture is a major use case for vision-based automation and data analytics applications, driven by drones and vision-based harvesting, weeding, and so on. Deploying compute capabilities using ventless industrial PCs at the edge allows for on-site analytics and quick access to graphic-heavy data and analytics. Besides edge computing devices, deployment of 4G/LTE/5G connectivity is another key factor in the evolution of high-resolution visual data collection in remote environments. Industrial communication gateways with integrated LTE connectivity offer a great on-site compute platform along with cellular communication with the cloud.

2) Environmental Monitoring

One of the biggest advantages that edge computing has brought to farming and agriculture over the past few years is the ability to remotely monitor different aspects of a farm's agricultural operations. Networks of sensors, ranging from several sporadically placed sensors to thousands of connected devices, monitor aspects such as soil, weather, humidity and temperature conditions, as well as acidity and pH levels. Edge computing allows the data necessary for delivering such solutions to be generated and collected much closer to the source, while also enabling some data processing operations to be performed in the edge devices themselves. Allowing farmers and agricultural workers detailed insights about their operational environments is a big selling point with any new technology, and edge computing is no different.

3) Robotics

While neither robotics technologies nor artificial intelligence systems have quite given us the android companions science fiction has predicted for so long, we are now getting to a point in time where such robotic iterations don't seem so much like science fiction anymore and appear to us more as feasible possibilities. While robotics has been an active part of many industries for decades now (car manufacturing, for example), it has also been slow to gain traction elsewhere.
With the help of edge computing and IoT device networks, this is no longer the case. Within agriculture, the recent emergence of smart farming frameworks has promoted the use of edge computing and Internet of Things (IoT) devices, and laid the groundwork to enable IoT-driven robotics to be developed for smart farms. These systems are capable of being programmed to perform automated tasks such as picking vegetables as well as spraying plants and crops, with more applications currently being developed.

4) Automation

The use of virtualization, automation, and Big Data has been a key driver of what has become known as the fourth industrial revolution, so it is no surprise that automation is playing a growing role in many smart farming and agricultural enterprises. As previously mentioned, automated robotics, as well as soil injections, heaters, and lighting, can all be utilised through the use of edge computing and automation systems. When coupled with the various remote monitoring applications and software now available to smart farms and agricultural enterprises, automation can become a powerful tool for farmers. The many benefits of automation continue to grow, as do its applications within smart farming, and with systems such as 5G wireless networks and powerful machine learning algorithms also currently being developed, it seems automation's role in smart farming and agriculture is just beginning.

5) Vertical Farming

We as a species live on a planet of finite resources, and one of those finite resources is farmable land. Soil degradation is a growing problem around the world, and the amount of farmable land we have is, unfortunately, slowly decreasing. In response to this growing crisis, scientists and farmers from around the world have developed what has become known as vertical farming. Vertical farming involves using the data collected from a network of IoT sensors and devices to optimize the growing of food and plants without the need for farmland. In vertical farms, for example, moisture levels are controlled with a network of sensors that constantly monitor a mist that surrounds the plants. Using edge computing, much of the data processing involved in such operations can be done on the edge devices themselves, without the need to send it to the cloud, further adding to the benefits of such systems within farming environments.

As we continue to move into the fourth industrial revolution, analytics technologies, automation, 5G, and AI and machine learning technologies will all begin to become more prominent and integrated into smart farms and agriculture. As we've seen over the course of this article, this will be in large part thanks to the foundations laid by edge computing.
DDR4 memory delivers up to 1.5x the performance of DDR3, while reducing power on the memory interface by 25%. The current generation of DDR4 memory deployed in datacenters runs at 2.4 Gbps, although 3.2 Gbps silicon is expected to start shipping later this year (2016).

As Bob O'Donnel of TECHnalysis Research notes, the considerable increase in performance presents real challenges for the industry. More specifically, the shift to higher speeds degrades electrical signal integrity, especially with multiple modules added to a system. In practical terms, this means it's becoming harder to achieve higher capacities at more advanced speeds. In order to overcome this limitation, says O'Donnel, memory designers use specialized clocks and dedicated memory buffer chips integrated onto the DIMMs.

"These server memory buffer chipsets play a critical role in high-speed DDR4 designs," he explained. "They allow server designers to maintain the high speeds that DDR4 offers, while also enabling the higher capacity designs that today's [Big Data] applications require."

According to O'Donnel, there are currently two categories of modern server DDR4 DIMMs. In a Registered DIMM (RDIMM), a Register Clock Driver (RCD) chip delivers a single load for the clock and command/address signals for the entire DIMM onto the bus that connects memory and the CPU. This technique reduces the impact on signal integrity compared with an unbuffered DIMM, where all of the individual DRAM chips place multiple loads on clock and command signals.

On a Load Reduced DIMM (LRDIMM), each individual DRAM chip is equipped with an associated Data Buffer (DB) chip, in addition to the RCD on the module, to reduce the effective load on the data bus, enabling the use of higher capacity DRAMs. The combination of the RCD and individual DBs constitutes a complete server DIMM chipset.

"With the server DIMM chipset enabled, data is not actually sent straight to the CPU from memory, just as gasoline isn't sent straight to a car's engine from its fuel tank," said O'Donnel. "A properly designed fuel injection system sends gas to the engine in exactly the right form, quantity and speed that it requires and, in an analogous way, memory buffers serve to regulate the delivery of raw data from memory into and out of the CPU."

As expected, the location of data buffers on DDR4 LRDIMMs also plays a vital role in helping to achieve faster performance than DDR3 LRDIMMs. "The [major] benefit is the reduced trace distance from each DRAM module to the memory bus and memory controller," he explained. "While DDR3 LRDIMMs have a single centralized memory buffer that forces data to cross the distance of the DIMM module and back, DDR4 LRDIMMs have dedicated memory buffer chips located a very short trace line away from the data bus."

Real-world benefits include time savings that can be measured in nanoseconds, as well as improved signal integrity facilitated by shorter trace lines. Both translate into better real-world performance for time-sensitive applications. Performance is particularly important for today's cloud-based services, advanced analytics tools and other Big Data applications; all are driving a higher set of expectations for servers. "Throw in the looming prospect and opportunity of the Internet of Things (IoT) and the stage is set for a very challenging environment in today's and tomorrow's data centers and enterprise servers," O'Donnel added.
"Many of these new applications leverage very large in-memory databases to meet the performance expectations of today's increasingly connected, mobile world."

To be sure, microseconds count when providing real-time analytics on millions of financial transactions or offering real-time language translation via cloud-based services. This is precisely why Rambus' DDR4 chipsets for RDIMM and LRDIMM server modules are designed to deliver the top-of-the-line performance and capacity needed to meet the growing demands placed on enterprise and data center systems.

More specifically, our JEDEC-compliant DDR4 chipsets feature industry-leading I/O performance and margin, while utilizing advanced power management techniques. Additional key DDR4 server DIMM chipset features include support for DDR4 up to 2666 Mbps, multi-setting frequency-based power optimization, an operating temperature range of -5°C to 125°C, full RoHS compliance, and improved ESD/EOS protection beyond JEDEC requirements.

Interested in learning more about Rambus' server DIMM chipsets? You can check out our official product page here.
What is phishing?

Phishing attacks are counterfeit communications that appear to come from a trustworthy source but which can compromise all types of data sources. Attacks can facilitate access to your online accounts and personal data, obtain permissions to modify and compromise connected systems (such as point of sale terminals and order processing systems), and in some cases hijack entire computer networks until a ransom fee is delivered.

Sometimes hackers are satisfied with getting your personal data and credit card information for financial gain. In other cases, phishing emails are sent to gather employee login information or other details for use in more malicious attacks against a few individuals or a specific company. Phishing is a type of cyber attack that everyone should learn about in order to protect themselves and ensure email security throughout an organization.

How does phishing work?

Phishing starts with a fraudulent email or other communication designed to lure a victim. The message is made to look as though it comes from a trusted sender. If it fools the victim, he or she is coaxed into providing confidential information, often on a scam website. Sometimes malware is also downloaded onto the target's computer.

Cybercriminals start by identifying a group of individuals they want to target. Then they create email and text messages that appear to be legitimate but actually contain dangerous links, attachments, or lures that trick their targets into taking an unknown, risky action. In brief:

- Phishers frequently use emotions like fear, curiosity, urgency, and greed to compel recipients to open attachments or click on links.
- Phishing attacks are designed to appear to come from legitimate companies and individuals.
- Cybercriminals are continuously innovating and becoming more and more sophisticated.
- It only takes one successful phishing attack to compromise your network and steal your data, which is why it is always important to Think Before You Click.
Washington College And Iron Mountain Preserve Stories From The WWII Home Front

On the 71st anniversary of the end of World War II, exclusive StoryQuest project gives access to oral histories of the home front

As a young child in 1945, the daughter of a Maryland farmer, Virginia Mulligan lived a life far removed from the battlefields of World War II. But then one day, four men arrived: German prisoners of war assigned to work on the farm. Suddenly, the war was literally in the Mulligans' back yard in Worton, MD, as the POWs helped the family bale hay, shuck corn, and chop wood, while Virginia's mother gave them meals. Virginia's older brother, who was serving in the military as a bombardier, returned from the war to discover his relatives on the home front living amicably alongside the "enemy."

This is just one of the dozens of stories captured through the StoryQuest Project at Washington College in Chestertown, MD. An oral history program led by undergraduate students at the college's C.V. Starr Center for the Study of the American Experience, the project's mission is to document and present stories from the home front through hundreds of recorded interviews. StoryQuest preserves the often overlooked memories of the millions of men, women and children who, although they may not have served in uniform, were profoundly impacted by World War II. The stories are being made available online through a partnership between Washington College and Iron Mountain Incorporated® (NYSE: IRM), the global leader in storage and information management services.

With 2016 marking the 75th anniversary of the United States' entry into World War II, the StoryQuest Project is bringing to life the experiences of wartime civilians whose stories may not be as well known as those of veterans. From an electrician working to transform a Navy gunboat into Harry Truman's presidential yacht, a college physics major joining the Manhattan Project, or a young woman recruiting migrant workers from the South for a local munitions plant, the StoryQuest Project will introduce future generations to a completely different side of the struggle against fascism, one that took place alongside the heroic deeds of soldiers half a world away.

"The StoryQuest interviewers are all Washington College undergraduates, which has been a key to its success," said historian Adam Goodheart, the Starr Center's director. "Some of the people we interview have grown accustomed to thinking that their homefront memories, unlike veterans' combat experiences, aren't really 'history.' But there's something special that happens when a 19-year-old sits down with a 90-year-old: the older person feels that they're passing their life stories to a new generation, and they're talking to someone who's around the same age that they were in the 1940s. We're very grateful for Iron Mountain's generous support, which is not only preserving essential history, but also training the next generation of oral historians and digital archivists."

In addition to capturing those first-hand accounts of experiences from the home front, the students have collected digital images of hundreds of wartime letters, photographs, and other artifacts that help tell the interviewees' stories. They hold "Scan-a-Thon" days where members of the public can share both documents and family stories.
For instance, a woman walked in with an iron box overflowing with wartime letters from her uncle, ending with a government telegram informing the family that he had been killed in combat in North Africa.

Through a combination of financial support and in-kind services, Iron Mountain is helping Washington College digitally preserve these stories of everyday Americans during the greatest global conflict in human history.

"Iron Mountain's philanthropic mission is focused on preserving all manner of cultural and historical experiences in order to document and celebrate our shared human history for future generations," says Ty Ondatje, senior vice president for Corporate Responsibility and chief diversity officer, Iron Mountain. "Our Living Legacy Initiative gives us the opportunity to extend that focus to organizations that share that mission, like Washington College's C.V. Starr Center for the Study of the American Experience and the StoryQuest Project. We're thrilled to partner with Washington College to lend our expertise and help preserve these narratives so that future generations can experience these stories."

These stories, in the form of audio clips, transcripts, images, and other materials, are available on the Washington College StoryQuest website at http://www.storyquestproject.com

About Washington College

Founded in 1782, Washington College in Chestertown, Md., was the first institution of higher learning established in the new republic. George Washington was a principal donor to the college and a member of its original board of trustees. He received an honorary degree from the college in June 1789. The College's C.V. Starr Center for the Study of the American Experience, which administers the George Washington Prize, is an innovative center for the study of history, culture, and politics, and fosters excellence in the art of written history through fellowships, prizes, and student programs. www.washcoll.edu.

The C.V. Starr Center for the Study of the American Experience is dedicated to fostering innovative approaches to the American past and present. Through educational programs, scholarship and public outreach, and a special focus on written history, the Starr Center seeks to bridge the divide between the academic world and the public at large.
Automated contact tracing, a tool that could potentially help blunt the impact of the next wave of the coronavirus pandemic as well as future outbreaks, has been largely sidelined due to privacy concerns and citizens' lack of trust in both government agencies and technology companies, according to a variety of experts.

Only 21% of people would willingly share data with healthcare businesses for contact-tracing purposes, and more than half continue to feel uncomfortable sharing personal data for any reason, according to the "2020 Consumer Trust and Data Privacy Report" published this week by enterprise privacy firm Privitar. Because automated contact tracing requires significant market penetration to be effective, the absence of privacy protections and the lack of trust mean the technology will likely not be adopted quickly enough to be a factor in the current pandemic.

To gain citizens' trust, the technologies and the policies surrounding those technologies must protect privacy and be totally transparent in how data is collected and used, says Guy Cohen, head of policy for Privitar. "If we want to take advantage of tools like contact-tracing apps, we need to make sure those tools work and are trustworthy — otherwise they won't be adopted," he says. "We need evidence of value and trustworthy data management needs to be both perception and reality."

A failure to trust the technology is not the only challenge for contact-tracing applications. False positives (identifying a person as a potential transmission risk) could be a significant issue, as the technologies used to determine proximity, Wi-Fi and Bluetooth, do not detect a variety of environmental factors, such as whether people are indoors or outside, whether they are talking with one another or facing away from each other, and whether they have donned masks. Using such technology without finding ways to resolve those issues could result in so many failures that people will lose even more confidence in the applications, says Casey Ellis, chief technology officer and founder of crowdsourced vulnerability assessment firm Bugcrowd.

"The reality is that COVID-19 contact-tracing apps are uncharted territory, and developers are requiring users' devices to use location-based and Bluetooth communication in ways they weren't designed to do," he says. "Additionally, developers are pressured to bring these apps to market faster than what is recommended since we are in the middle of the pandemic still, and this leaves room for error."

Contact tracing is a natural approach to attempting to track down people who have potentially been exposed to a virus or a disease. In the past, legions of workers have taken on the task after a report of an infected person. Automating contact tracing promises to increase population coverage, speed up the process, and reduce the cost by allowing (or requiring) people to install an application that tracks which mobile devices have been in close proximity.

While the technology seems like a smart use of an already ubiquitous technology, people's mobile devices, automated contact tracing raises a passel of thorny issues.
Those most at risk (older people) are least likely to download a contact-tracing app, for example, and even distributed contact tracing opens the door to malicious attacks, such as bad actors reporting a COVID-19 infection in an area to reduce voting participation or shut down businesses, according to three experts who wrote for the Brookings Institution about the challenges facing the technology.

"We have no doubts that the developers of contact-tracing apps and related technologies are well-intentioned, [b]ut we urge the developers of these systems to step up and acknowledge the limitations of those technologies before they are widely adopted," the three researchers said. "Health agencies and policymakers should not over-rely on these apps and, regardless, should make clear rules to head off the threat to privacy, equity, and liberty by imposing appropriate safeguards."

Because contact tracing relies on trust, the current polarization of US politics has made gaining the trust of a third of Americans that much more difficult, according to Privitar's research. Trust requires that two conditions be met, says Privitar's Cohen: one, any app has to do its job effectively, and, two, privacy must be protected. Without such transparency, adoption of contact tracing will not pass the threshold that will make it effective, he says.

Stronger federal laws protecting privacy could help make future efforts more likely to succeed. However, while Democrats and Republicans have both proposed legislation, they have failed to agree on key provisions, such as whether state laws like the California Consumer Privacy Act can be more stringent than a federal law, as well as the ability of citizens to bring legal action against offenders. Until those fundamental issues are resolved, privacy protections are unlikely to pass through Congress, Cohen says.

"Key disagreements ... [have] blocked progress so far and make it unlikely that the new proposals will pass," he says. "In the interim, America is left lacking any federal standard, and [that is] driving state-level action."
Email spoofing is a common tactic hackers use in phishing and social engineering attacks. Spoofing trends tend to increase around popular shopping holidays in the U.S., including Black Friday and Amazon Prime Day, and the recent LinkedIn data scrape has already led to an uptick in spoofing attempts. With these threats in mind, it's important to understand how spoofing works and what you can do to protect yourself, your employees, and your business from falling victim to a spoofing attack.

- What is email spoofing?
- How to identify a spoofed email
- How to prevent email spoofing in 2021
- Email spoofing is a constantly evolving threat

What is email spoofing?

Email spoofing is a technique involving seemingly innocuous emails that appear to be from a legitimate sender. Spoofers forge or manipulate email metadata like the display name and email address to make the intended recipient believe they are real. Sometimes spoofers can create a legitimate-looking email address by changing only one or two letters in a business name, like "Arnazon" instead of "Amazon", or other letter swaps that take a close eye to spot.

Hackers frequently use email spoofing in tandem with other social engineering techniques to impersonate an official source, whether it's a colleague, partner, or competitor. These strategies attempt to manipulate the emotions of the intended target; sometimes this is accomplished by creating a false sense of urgency around a fictional problem or preying on the victim's compassion. Social engineering tactics usually include spear phishing or whaling.

Spear phishing attacks target specific victims with malicious links or attachments in the body of a spoofed email. When a hacker's intended target clicks on the link or attachment, it launches a malware attack before the victim can do anything to stop it. Similarly, whaling tactics attempt to convince C-suite executives to take a specific action (like clicking a link or attachment) or divulge confidential information about the business. When combined with these tactics, a successful spoofing attempt can have dramatic consequences.

Also read: Top Secure Email Gateway Solutions

How to identify a spoofed email

How can we discern a spoofed email from a legitimate one? Consider the email in the screenshot below.

First, the subject line, sender email address, and footer are all indicators that the email is illegitimate. If it was a real email from Sam's Club, the subject line should be free of errors and strange formatting. The domain of the sender email address would be samsclub.com or some variation thereof instead of blackboardninja.com, and the mailing address in the footer would be the Sam's Club headquarters address instead of an address in Las Vegas.

Additionally, the body content of the email has a vague call to action: what is the "Loyalty Program" and what must one do to earn the so-called "prize"? Unless the recipient is expecting an email like this, there is very little context that indicates where the link leads. This is typical of phishing emails.

Finally, the fact that Gmail automatically categorized this email as Spam doesn't necessarily mean it's a spoof, but it's definitely a red flag. Spam filters can be overzealous at times, which is why important emails like order confirmations and shipping updates sometimes end up in the wrong folder. However, the purpose of a spam filter is to prevent gullible recipients from falling into a spoofed email's traps.
How to prevent email spoofing in 2021

Although email spoofing techniques are becoming more sophisticated with each passing day, there are a few tactics that can help prevent a successful email spoofing attack. These include the DMARC protocol, regular employee training, and consistent company branding.

The Domain-based Message Authentication, Reporting, and Conformance (DMARC) protocol is one of the most effective defenses against email spoofing. It's a customizable policy layer of email security that builds on authentication technologies including the Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). In effect, DMARC can protect your organization's domain from being used for malicious purposes. It also sets parameters for detecting forged sender information, so you can rest assured that an email is actually from the indicated sender. (A sample DMARC record is sketched at the end of this article.)

Read more: What is DMARC?

As with most cybersecurity efforts, employee training helps create a safeguard against attacks that are able to slip past technical defenses. Set aside time at least once per year (if not more frequently) to teach your employees what to look for in a legitimate email as opposed to a spoofed one. Then perform follow-up tests to see who may still fall victim to a spoofing attack. This will help ensure everyone on your team has the right information to act appropriately when a spoofed email inevitably lands in their inbox.

Email spoofing can impact your customers, too. That's why consistent company branding in your marketing emails is a key element in preventing a successful spoofing attempt. Your email branding should resemble that of your other marketing materials, including your website, social media accounts, and print materials. When your customers can easily recognize a legitimate email from your company, it will be just as easy to recognize one that is spoofed.

Email spoofing is a constantly evolving threat

As long as your business uses email to communicate internally and externally, email spoofing will be a threat. In fact, email spoofing accounted for more than $216 million in losses in 2020 alone, according to the FBI's IC3 2020 Internet Crime Report. Spoofed emails may look different from one day to the next, but you can't afford to let your guard down when it comes to suspicious emails.
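To make the DMARC mechanism mentioned above more concrete, here is a minimal sketch. The record syntax follows the standard DMARC tag format, but the domain and report address are placeholders rather than real values, and the small parser is purely illustrative, not any vendor's implementation.

```python
# A minimal sketch of a DMARC policy record and how its tags break down.
# "example.com" and the report address are placeholders, not real values.

DMARC_TXT = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
# A domain owner would publish this as a DNS TXT record at: _dmarc.example.com

def parse_dmarc(record):
    """Split a DMARC TXT record into its tag/value pairs."""
    return dict(tag.strip().split("=", 1) for tag in record.split(";") if tag.strip())

policy = parse_dmarc(DMARC_TXT)
print(policy["p"])    # 'quarantine' -> receivers should quarantine mail that fails checks
print(policy["rua"])  # where aggregate reports about authentication results are sent
```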
Email spoofing is the fabrication of an email header in the hopes of duping the recipient into thinking the email originated from someone or somewhere other than the actual source. Because core email protocols do not have a built-in method of authentication, it is commonplace for spam and phishing emails to use spoofing to trick the recipient into trusting the origin of the message.

The ultimate goal of email spoofing is to get recipients to open, and possibly even respond to, a solicitation. Although the spoofed messages are usually just a nuisance requiring little action besides removal, the more malicious varieties can cause significant problems, and sometimes pose a real security threat. As an example, a spoofed email may purport to be from a well-known retail business, asking the recipient to provide personal information like a password or credit card number. The fake email might even ask the recipient to click on a link offering a limited-time deal, which is actually just a link to download and install malware on the recipient's device. One type of phishing, used in business email compromise, involves spoofing emails from the CEO or CFO of a company who works with suppliers in foreign countries, requesting that wire transfers to the supplier be sent to a different payment location.

How Email Spoofing Works

Email spoofing is possible because the Simple Mail Transfer Protocol (SMTP) does not provide a mechanism for address authentication. Although email address authentication protocols and mechanisms have been developed to combat email spoofing, adoption of those mechanisms has been slow.

Reasons for Email Spoofing

Although it is best known for phishing purposes, there are actually several reasons for spoofing sender addresses. These reasons can include:

- Hiding the sender's true identity, though if this is the only goal, it can be achieved more easily by registering anonymous mail addresses.
- Avoiding spam block lists. If a sender is spamming, they are bound to be block-listed quickly. A simple solution to this problem is to switch email addresses.
- Pretending to be someone the recipient knows, in order to, for example, ask for sensitive information or access to personal assets.
- Pretending to be from a business the recipient has a relationship with, as a means of getting hold of bank login details or other personal data.
- Tarnishing the image of the assumed sender, a character attack that places the so-called sender in a bad light.
- Committing identity theft by sending messages in someone's name, for example by requesting information from the victim's financial or healthcare accounts.

Email Spoofing Protections

Since the email protocol SMTP (Simple Mail Transfer Protocol) lacks authentication, it has historically been easy to spoof a sender address. As a result, most email providers have become experts at detecting and alerting users to spam, rather than rejecting it altogether. But several frameworks have been developed to allow authentication of incoming messages:

- SPF (Sender Policy Framework): This checks whether a certain IP is authorized to send mail from a given domain. SPF may lead to false positives, and it still requires the receiving server to do the work of checking an SPF record and validating the email sender.
- DKIM (DomainKeys Identified Mail): This method uses a pair of cryptographic keys that are used to sign outgoing messages and validate incoming messages.
However, because DKIM is only used to sign specific pieces of a message, the message can be forwarded without breaking the validity of the signature. This technique is referred to as a "replay attack".
- DMARC (Domain-based Message Authentication, Reporting, and Conformance): This method gives a sender the option to let the receiver know whether its email is protected by SPF or DKIM, and what actions to take when dealing with mail that fails authentication. DMARC is not yet widely used.

How Emails Are Spoofed
The easiest way to spoof mail is for the attacker to find a mail server with an open SMTP (Simple Mail Transfer Protocol) port. SMTP lacks authentication, so poorly configured servers have no protection against prospective cyber criminals. It is also the case that there is nothing stopping a determined attacker from setting up their own email server. This is very common in cases of CEO/CFO fraud: attackers register domains that are easily confused with the company they are impersonating and from which the email appears to originate, e.g. "@exarnple.com" instead of "@example.com". Depending on the formatting of the email, it can be extremely difficult for a regular user to notice the difference.

Although email spoofing is effective at forging the sender address, the IP address of the computer sending the mail can generally be identified from the "Received:" lines in the email header (a small example of pulling these headers out programmatically appears at the end of this article). That machine frequently belongs to an innocent third party whose system has been infected by malware, which hijacks it and sends emails without the owner even realizing it.

To prevent becoming a victim of email spoofing, it is important to keep anti-malware software up to date and to be wary of the tactics used in social engineering. When unsure of the validity of an email, contacting the sender directly, especially before sharing private or financial information, can help to avoid an attack.

Related resources:
- Threat Spotlight: Email Malware Impersonates Secure Bank Messages
- Threat Spotlight: Real-World Spear Phishing, Initiating the Attack and Email Spoofing
- Barracuda Email Security Gateway: Sender Authentication
- Knowledgebase: Can I prevent spammers from spoofing legitimate users in my domain on the Email Security Gateway?

How Barracuda Can Help
Barracuda Email Protection uses email spoofing protection to help block spear phishing attempts before they can harm your business. In addition, Barracuda uses anti-fraud intelligence, behavioral and heuristic detection, along with domain name validation to mitigate damage caused by email fraud. Barracuda Domain Fraud Protection helps protect your brand by stopping domain spoofing and unauthorized activity. It offers an intuitive wizard to help you set up DMARC (Domain-based Message Authentication, Reporting & Conformance) for unmatched protection. Do you have more questions about email spoofing? Contact us today!
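To make the header discussion above concrete, the short Python sketch below uses only the standard library to print the headers that matter when judging whether a message may be spoofed: the visible From address, the Received chain, and any recorded SPF/DKIM/DMARC verdicts. The file name is a placeholder, and the exact headers present will vary by provider.

```python
import email
from email import policy

# Path is hypothetical; export a suspicious message from your mail client as .eml.
with open("suspect_message.eml", "rb") as fh:
    msg = email.message_from_binary_file(fh, policy=policy.default)

print("From header:", msg["From"])
print("Return-Path:", msg["Return-Path"])

# Each relaying server prepends a Received: header; the earliest entries
# (printed last here) hint at the host that actually sent the message.
for hop in msg.get_all("Received", []):
    print("Received hop:", " ".join(hop.split()))

# Many providers record SPF/DKIM/DMARC results in Authentication-Results.
for result in msg.get_all("Authentication-Results", []):
    print("Auth results:", result)
```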
I often get asked how Autopsy and Cyber Triage are different, and I have started to use the terms general-purpose vs. specialized digital forensics tool to answer that question. This post outlines how I am using those terms and the pros and cons of tools in each category, using Autopsy and Cyber Triage as examples (because I'm most familiar with them).

Digital Forensics Tool Basics
Let's start with the basics. Digital investigators are tasked with answering a set of investigative questions, and different types of data are needed depending on the question. For example:
- Many law enforcement investigations rely on communications
- Child exploitation cases focus heavily on pictures and movies
- Intrusion cases focus more on user logins and malware

The role of a digital forensics tool is to give the investigator access to digital data so that they can view and find the data. The tools often:
- Parse file systems, memory, and file formats to make the data useful
- Provide search and analytical features to find the subset of data that is relevant
- Display the data in a generic way (e.g. hex) or with a specialized viewer (e.g. movie playback)

To be effective, your forensics tools need to give you efficient access to the data you need to answer your investigative question.

General Purpose vs Specialized Digital Forensics Tools
The key difference between a general-purpose digital forensics tool and a specialized digital forensics tool is how focused it is on your investigation. Both types of tools can be used for a given type of investigation, but the specialized tool will do more of the work.
- A general-purpose digital forensics tool focuses on parsing and displaying data. It relies on you to figure out what kinds of data are relevant to the investigation (e.g. do we care about what programs were run in the past?) and what makes data suspicious. These tools are designed to help in nearly any kind of investigation, as long as you know where you should be looking.
- A specialized digital forensics tool is focused on a certain kind of investigation and knows what kinds of questions you are looking to answer. It has the parsing and display features of a general-purpose tool, but it also uses analytics to help you answer your investigative questions. It requires less work from the user because it is doing some of the work for them.

The easiest analogy is a Swiss Army knife vs. a power tool. Swiss Army knives have knives, scissors, screwdrivers, corkscrews, and much more. You could probably build a house with one that has a saw blade, but it would take a while. Your house will be completed faster with more specialized saws and drills that were built for the job. Forensics tools are similar: you can use a general-purpose tool and apply more of your time and energy, or use a specialized tool and let it do more of the work.

General Purpose Pros/Cons
General purpose is the most popular category of digital forensics tools, and many of the best-known tools fall into it. Let's look at the pros and cons.
Pros:
- Provide basic coverage for nearly all types of investigations.
- Could be the only option for some types of investigations.
Cons:
- Require the user to have extensive training to know where to look and what values are relevant.
- Critical data could be missed if the user forgets to look for it.

In our toolset, Autopsy is a general-purpose tool. It supports a range of file systems for computers and mobile devices.
- If your investigation involves geo coordinates, there are KML, GPS, and EXIF modules and map viewers for that.
- If your investigation involves web artifacts, there are modules for common browsers and UIs for that.
- If your investigation involves pictures, there are EXIF, object detection, and hash set modules for that.

Autopsy is used around the world for a variety of investigations by users who know what types of data they need to answer their investigative questions. For example, you will still need to review the communications and websites yourself to know if they are relevant.

Specialized Pros/Cons
There are fewer specialized end-to-end tools. The most obvious ones that come to mind are:
- Cyber Triage: Collects and analyzes artifacts needed for questions about intrusions.
- Volexity Surge and Volcano: Collect and analyze memory to answer questions about intrusions.

There are also integrations or partial solutions that focus on a specific investigative question, such as:
- Project Vic: Integrated into numerous tools to answer questions about known child exploitation material.
- IOC scanners: Several tools have an indicators of compromise (IOC) scanner that answers questions about known attack traces (a minimal sketch of the idea appears at the end of this section).
- Incident response collection tools: Specialized collection tools for incident response will save select artifacts instead of a full disk or memory image, but they do not analyze the data.

I think that most of these tools focus on intrusions because those cases have the most complexity and the biggest scale. Intrusion investigations can involve dozens or hundreds of computers, and attackers are constantly changing their techniques to evade detection. This makes automation and specialization critical.
Pros:
- Faster: Can more quickly focus on the relevant data because the software is doing the analytics without user intervention.
- Comprehensive: Can collect and process more data because the computer can analyze data in parallel and can be updated to know where to collect from.
Cons:
- May not be useful for all of your investigation types. Consultants and law enforcement often have to deal with a variety of investigation types, so many specialized tools would be required.

In our toolset, Cyber Triage is a specialized forensics tool. It collects only artifacts and files that are likely relevant to an intrusion and applies analytics to each to give it a score. Artifacts scored as Bad or Suspicious are likely to be involved with an intrusion, and the user can start with those tens of artifacts instead of the thousands of collected artifacts. Our users tell us that Cyber Triage shaves hours off of their intrusion investigations, which is what you'd expect from a specialized tool. But it is not going to be useful for all types of investigations, such as a child exploitation case.

What Should Be in Your Tool Kit
Unless you are on a team with a single mission, such as a Security Operations Center (SOC), you should probably have a combination of general-purpose and specialized tools:
- A general-purpose computer/laptop tool
- A general-purpose cell phone tool
- Specialized tools / plugins for your most common and time-consuming investigations

As data sets become bigger and investigations rely on access to more types of data, the need for specialized digital forensics tools becomes more obvious, especially when you consider the complicated data types involved with intrusions and attackers covering their tracks.
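As a rough illustration of what the hash-set and IOC-scanner modules mentioned above do under the hood, here is a minimal sketch in Python. The digest value and the mount point are placeholders, and real tools load far larger hash sets from curated databases and handle errors, alternate data streams, and reporting.

```python
import hashlib
from pathlib import Path

# Placeholder "known bad" digests; real scanners import thousands from IOC feeds.
KNOWN_BAD_SHA256 = {"0" * 64}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large evidence files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path) -> None:
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"MATCH: {path}")

scan(Path("/evidence/mounted_image"))  # hypothetical mount point
```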
The main difference between Autopsy and Cyber Triage is that Autopsy is a general-purpose forensics tool and Cyber Triage is a specialized intrusion forensics tool. Both serve critical roles in examiners' toolboxes. If you want to try Cyber Triage, you can fill out the evaluation form here.
Most ransomware attacks target desktop computers, servers, and high-value storage networks, but in recent years a growing number of attacks have targeted smartphones and other mobile devices. Below, we'll look at a few examples of how mobile ransomware works. We'll also provide tips for preventing malware from endangering your data.

Can ransomware infect an iPhone or Android phone?
To date, all mobile ransomware has targeted Android devices. iOS devices like iPhones are highly resistant to malware, but malicious users have exploited vulnerabilities to trick users into believing that their devices were infected with ransomware. While Android mobile operating systems include safeguards to protect against malicious software, those protections have limits, particularly when attackers use social engineering to trick users into downloading files.

Notable recent incidents:
- In 2014, the Koler "police" ransomware began to spread through pornographic networks. Android users were tricked into downloading a fake .apk file, which encrypts files and demands a ransom of $100-$300. By some estimates, Koler has infected more than 200,000 Android devices.
- DoubleLocker is a Trojan that infects Android devices, changing their PINs and encrypting data to prevent access. It is typically distributed as a fake Adobe Flash Player. Victims are tricked into granting the malware administrator rights and permissions.
- AndroidOS/MalLocker.B uses social engineering to lure victims into installing fake versions of games or popular apps. Instead of encrypting files, this ransomware family prevents the user from accessing their device by forcing a ransom note onto every screen.

To protect your phone from ransomware, follow these steps.
Cybercriminals use sophisticated methods to install malware and extort users. If you store important data on your phone (and realistically, that's true for nearly every smartphone user), you'll need to protect yourself. Some quick steps to keep your device secure:

1. Back up your data. The best practice is to maintain at least three copies of every important file. However, even a single backup will provide enormous protection, provided that your backup isn't prone to infection. Here's a guide from Google for backing up data manually on an Android device. For Apple iPhones and iPads, this guide explains the process for using iCloud. In addition to cloud backups, consider copying important data (such as pictures, videos, and contacts) to your PC or Mac on a regular basis.

2. Don't install apps unless you can verify the source. Most Android ransomware variants infect devices by posing as free games, utilities, or video players. By default, the Android operating system blocks users from installing .apk files from unknown sources, but users can disable this protection. The safest course of action: never download an .apk file through your phone's web browser, and never open email attachments if you're not completely confident in the contents. Only install apps using the Google Play store or another trusted app store. If you do need to install an .apk manually, make sure you trust the source and check that the website has valid security certificates. After installing the app, head to Android Settings -> Biometrics and Security and change the "Install unknown apps" settings back to the default.
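If a developer publishes a checksum for an APK you must install manually, you can verify the download before installing it. This is a minimal sketch, assuming a published SHA-256 value exists; the file name and checksum below are placeholders.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

downloaded_apk = "some-app.apk"                                  # placeholder path
published_sha256 = "paste-the-developer-published-checksum-here" # placeholder value

actual = sha256_of(downloaded_apk)
if actual == published_sha256.lower().strip():
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not install this APK.")
    sys.exit(1)
```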
3. Keep your device updated. Malware often spreads by taking advantage of security vulnerabilities within apps or the mobile operating system. Make sure your phone updates its operating system automatically. Here's a guide to enabling auto-updates on Android, and here's a guide for enabling automatic updates on iOS devices. Android users should also enable automatic updates for apps downloaded from the Google Play store.

4. If your smartphone is infected with ransomware, have a game plan. If you believe your phone is infected with malware, take notes about any symptoms, including actions that may have led to the infection (such as downloading an APK file or an email attachment). If your phone displays a ransom note, take a picture of it with another device or write down the note in its entirety. Decryption tools exist for many common Android ransomware variants. If you have a strong working knowledge of the Android operating system, NoMoreRansom.org is a trustworthy resource for finding free decryption tools. However, we recommend working with an experienced ransomware recovery firm to maximize your chances of a successful result. To learn more, contact our team at 1-800-237-4200 or click here to set up a case online.
In a recent survey by Chip and PIN, an initiative in the U.K. to get smart chips on credit and debit cards, most respondents said they are comfortable about remembering their personal identification number (PIN), and less than four per cent said they felt they have terrible memories. But that didn't stop one expert from offering suggestions on how to improve PIN recollection.

The research, conducted with 1,814 people across the country, showed that men think they have better memories than their female counterparts. Of the male respondents, 5.6 per cent said they thought they have "brilliant" memories, compared to 4.3 per cent of women, and surprisingly high numbers of both sexes (17.1 per cent of men and 14.7 per cent of women) said they never forget anything important. Just 3.2 per cent of respondents think they have "appalling" memories.

This all bodes well for Chip and PIN, which by 2005 aims to have the majority of plastic card transactions in the U.K. verified by card holders entering a secret four-digit PIN rather than signing a receipt. However, as the new technology rolls out, users might become concerned about remembering their PIN when they reach the till, especially if they've got more than one card, Chip and PIN said in a press release.

According to Donna Dawson, a psychologist specializing in personality and behaviour, card users can improve their memories by using a few simple tips and tricks. She says it all comes down to engaging both sides of the brain instead of just one when trying to recall those four digits. "On the surface, numbers appear to relate only to the logical parts of our brains," Dawson said in the statement. "To make them more memorable, numbers must appeal to the creative side of our brains as well."

Dawson recommended using methods of association such as visualizing numbers as objects relating to their shapes, linking them to important dates, or rhyming them with other words to help recall PINs. "Using these methods, the vast majority of people will be able to remember more than one four-digit PIN simply and quickly," she said.

She suggested one method called the "number shape" system, where you imagine every number as an object related to its shape. "For example, you could picture a zero as a football, the number one might be a pencil and the number two a swan or a snake. Once the associations are made, you can create a story to remember a sequence of numbers, like a four-digit PIN. If, for instance, your PIN was '2021', you would imagine a snake (2) playing football (0) with a swan (2) who was writing down the scores with a pencil (1)."

If it all sounds a bit strange, Dawson explained that's part of the goal. "Believe it or not, the crazier the story, the stronger the chance of remembering it," she said.
The term ‘DevOps’, coined in 2009, refers to a culture that promotes a holistic approach to the software development lifecycle (SDLC) by building stronger relations between the software development and IT operations divisions in an organization. The main objective of DevOps is to completely break down the barriers between development and operations, so as to establish a working environment that is transparent, rapid, responsible, and smart, for the overall success of a project.

How did DevOps originate?
DevOps has its roots in the Agile software development methodology, which focuses on building a collaborative environment between the client and vendor, promoting cross-functional working between the various divisions in an organization, and encouraging systems thinking. The Agile method has proved to be very efficient, with successful outcomes. DevOps implements new techniques and strategies to approach the Agile philosophy. It attempts to eliminate silos in order to build a more integrated work atmosphere, for the effective and smooth running of an organization. Though Agile and DevOps may seem similar, they are not the same. While the Agile methodology focuses on addressing the issues between the development team and customer needs, DevOps focuses on bridging the gap between the development and operations teams.

What are the benefits of practicing DevOps?
One of the biggest advantages of adopting DevOps is that it breaks down silos. By getting rid of silos, organizations can build stronger relationships without hierarchical distributions of power that can result in miscommunication and conflicts. Another major benefit of incorporating the DevOps culture is that organizations can get more work done in a short span of time. The efficiency of employees increases when they work in a collaborative environment, and this, in turn, helps them achieve their targets faster. Practicing DevOps also brings stability to the workings of an organization, building an atmosphere of reliability that reduces the chances of failure by 50 percent. The DevOps culture encourages team members to think of their roles as contributing to the bigger whole, the development of the organization itself. This improves their sense of responsibility for their work and makes them more accountable.

How can DevOps be implemented?
The best method to adopt DevOps is to implement it slowly, little by little. It takes time for people to break free from silos, adapt to an integrated environment, and accept and trust new people and ideas. It is advisable to give people the time to adopt DevOps and to improve their confidence in practicing it. One way to establish an integrated environment is to encourage systems thinking and avoid making one person or a particular team responsible for a failed action or result.

Numerous organizations have benefited from adopting the DevOps culture in recent years. Though the process of adopting and practicing DevOps can seem a little challenging initially, it becomes easier to overcome the challenges once one gets accustomed to its principles and functions. CloudNow is a software development company that strongly believes in adopting an Agile-DevOps approach throughout the software development lifecycle (SDLC). For DevOps services that deliver exceptional results, contact us today.
What are Slow Logons and Where Do They Come From?
Considering the importance of fast logons for a good user experience, there is surprisingly little information on the subject. Windows does not even record the total logon time, let alone where it is spent. Administrators wishing to analyze their users' logons are left in the dark. But that can be changed.

A logon is the act of preparing a session for a user. It consists of several phases, each of which can take between mere milliseconds and several minutes:
- User profile loading
- Group policy processing
- Logon script processing
- Shell startup

A session can only be initiated after the user's credentials have been verified. This typically happens by talking to an Active Directory domain controller over the network. Delays during this phase can typically be traced back to DNS issues, as can most AD problems.

Once the user is authenticated, the operating system can start to build the session. First, a user profile is needed. All of the following phases require the user profile to be set up and the user's registry hive to be loaded. If the user does not already have a profile, a new one is created. This slows down the initial logon quite a bit compared to subsequent logons, mainly because Active Setup runs the IE/Mail/Theme initialization routines. Problems with the loading of user profiles are legendary; if this phase takes very long, a huge roaming profile probably needs to be copied over the network.

Many corporations use Group Policy to manage their Windows PCs. Built on top of Active Directory, policies rely on the directory service's infrastructure for their operation. As a consequence, DNS and AD issues may affect Group Policy severely. One could say that if an AD issue does not interfere with authentication, at the very least it will hamper group policy processing.

Although considered a legacy technology by some, logon scripts are still used extensively to set up the user environment. Many logon scripts are ancient, dating back to the days of NT4. Administrators often are reluctant to change anything for fear of breaking the age-old construct, so they just add something new to the end. Always adding, never deleting: that cannot be good for performance, and consequently bloated logon scripts are a common cause of slow logons.

During the last logon phase, the user shell, typically Explorer.exe, is initialized. Loading Explorer itself is typically fast, but a large number of autoruns can make this phase seem to stretch forever.

As mentioned earlier, Windows does not have any counters for measuring logon performance. Administrators wishing to analyze slow logons need to bring in additional tools. Among the most helpful for troubleshooting individual logons are Sysinternals Process Monitor and the Windows Performance Toolkit (WPT, containing the xperf tools). Both work in a similar way: the administrator starts a tracing session, tries to reproduce the problem by logging on, stops the trace, and then goes to work sifting through the trace looking for cues, a very time-consuming process that requires skilled personnel and is only performed when the pressure is already high.

Would it not be great to have the total duration of every single logon, optionally broken down into phases? That way it would be a snap to spot trends and see the effects of GPO changes. Today's slow logons could be traced to yesterday's failed domain controller or last week's logon script change. uberAgent gives you all that.
It shows averages, it shows trends that give you the big picture, and it displays as much detail as you could possibly want.
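Once per-phase timings are being recorded, summarizing them is straightforward. The sketch below is a toy illustration with made-up numbers, not how uberAgent works internally; it simply shows the kind of per-user totals and per-phase averages that make trends visible.

```python
from statistics import mean

# Hypothetical measurements (seconds per logon phase) exported from whatever
# agent or tracing tool records the individual phases.
logons = [
    {"user": "alice", "profile": 4.1, "group_policy": 9.8,  "logon_script": 3.2,  "shell": 2.5},
    {"user": "bob",   "profile": 3.6, "group_policy": 22.4, "logon_script": 3.0,  "shell": 2.7},
    {"user": "carol", "profile": 5.0, "group_policy": 10.1, "logon_script": 12.9, "shell": 2.4},
]

phases = ["profile", "group_policy", "logon_script", "shell"]

for rec in logons:
    total = sum(rec[p] for p in phases)
    details = "  ".join(f"{p}={rec[p]:.1f}s" for p in phases)
    print(f"{rec['user']:<6} total {total:5.1f}s  {details}")

print("Average per phase:")
for p in phases:
    print(f"  {p:<13} {mean(r[p] for r in logons):5.1f}s")
```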
Anton Rager developed such a tool in 2005 and demonstrated that XSS proxies can be used to sustain an XSS attack indefinitely (as long as the hijacked browser window exists) and allow the attacker to view or submit content to the vulnerable server in the victim's place. Essentially, the attacker hijacks the victim's session to a vulnerable server in order to plug the victim's browser into the XSS Proxy tool, and then uses it to force the victim's browser to access content on the same domain as the vulnerable server, for as long as the victim keeps the session active.

Examples of scenarios where normal XSS is not viable
These examples assume there is an XSS vulnerability on a target server, which enables an attacker to perform an XSS attack against it, but because of the context and other variables such an attack would not be viable and would not render the desired results. However, the initial XSS condition is still important, as it allows the attacker to take control of the victim's browser and plug it into the XSS Proxy tool. Thus, XSS becomes the premise for other attacks that in turn achieve the desired results, rather than remaining the main means of achieving them.

a) HTTP Authentication
b) IP-based access control methods
These methods prevent the attacker from gaining access to a target server, even if they manage to steal the required credential information, because they cannot impersonate the victim's IP (which is granted access) and their own IP is not in the "allowed" list. However, if they can manipulate the victim's browser through an XSS Proxy tool, they can load documents and gain access to web pages by forcing the victim's browser to access them and forward them to the XSS Proxy.

c) Client-Side Certificates
When certificates are used for creating secure SSL/HTTPS connections, the attacker cannot impersonate the victim in order to get access to the target server because they cannot steal and use the victim's certificate. However, if they can control the victim's browser, they can gain access to restricted data in a similar way as in the example above.

Limitations of XSS Proxy tools

How to prevent XSS Proxy attacks
Users should avoid keeping sessions active for extensive periods of time, to reduce the risk of attackers maintaining control of their browser. Users should also be wary of what they click on, and avoid the common ways of getting XSS-ed, such as clicking on malicious Flash objects or clicking links in spam emails. This way, they reduce the likelihood of being victims of the initial XSS attack that establishes the connection to the XSS Proxy.
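The prevention tips above focus on user behaviour. On the application side, the underlying fix is to prevent the initial injection that the whole attack chain depends on, typically by encoding untrusted input before it is written into a page. A minimal illustration in Python (the search-page scenario is hypothetical):

```python
import html

def render_search_results(query: str) -> str:
    # Escaping turns characters such as < > " ' into harmless entities, so a
    # payload like <script src="http://attacker.example/xssproxy.js"></script>
    # is displayed as text instead of executing in the victim's browser.
    safe_query = html.escape(query, quote=True)
    return f"<p>Results for: {safe_query}</p>"

print(render_search_results('<script>alert(1)</script>'))
```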
The primary goal of any web application or service is to render high-quality performance and meet the expectations of its customers. A web application may receive millions of requests per second, and it is not practically possible for a single system to respond to each of these incoming requests. This is precisely where a cluster architecture plays a vital role. Put simply, it is a system of interconnected nodes that cooperate with each other in sharing their workload.

For example, consider a web application supported by a group of servers that handle the incoming requests. This group of servers not only helps the web application efficiently balance the overall load but also adds redundancy in case one of the servers in the group fails or breaks down. Especially for e-commerce web applications that require high performance with minimal downtime, server clustering ensures that the supporting infrastructure keeps working as needed even if one or more servers in the cluster are down. Some of the most common types of clusters include failover (high availability) clusters, load balancing clusters, and high-performance clusters.

A failover cluster uses replicated servers and redundant hardware and software configurations to increase service availability. Each constituent system continuously monitors the others, so that if one node fails, the request is dealt with by the other nodes. Such clusters are generally used where there is a high dependency on computer systems, for example by news websites and e-commerce businesses.

Used mainly in complex scientific applications, high-performance clusters run parallel programs for time-consuming calculations. By intelligently sharing the workload among the connected nodes, high-performance clusters are able to increase performance significantly. Other industries relying on high-performance clusters include graphics and video processing, augmented and virtual reality, and more.

As the name clearly suggests, load balancing clusters distribute load among the constituent servers to provide increased network capacity. Since the nodes are integrated, incoming requests are evenly distributed across the connected nodes (a toy round-robin sketch appears below). This does not mean that the entire system works on each request together; rather, the incoming requests are distributed evenly as they arrive. Load balancing clusters are preferred by e-commerce businesses and internet service providers for managing a large number of user requests while maintaining a satisfactory level of performance.

In order to provide our clients with state-of-the-art SIEM and SOAR services, Logsign is supported by a mixed cluster architecture at the backend to support easier scalability, extended availability, efficient load balancing, effective self-healing, and necessary redundancy. Depending upon the network size, Logsign can be installed either on a single cloud or physical server, or on multiple servers at once. Logsign's constituent services are operated as an active-active cluster. These services are delivered through various servers to meet performance expectations under optimized conditions. Logsign's cluster architecture allows horizontal as well as vertical scalability. With Logsign's cluster architecture, copies of the received data are available on multiple servers. If one or more servers are damaged, the rest of the servers keep operating efficiently as if nothing has happened.
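To illustrate the load-balancing idea in isolation, here is a toy round-robin dispatcher in Python. The backend addresses are hypothetical, and a real load balancer would also account for health checks, session affinity, and per-node capacity.

```python
from itertools import cycle

# Hypothetical backend nodes in a load-balancing cluster.
backends = cycle(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])

def dispatch(request_id: int) -> str:
    """Hand each incoming request to the next node in round-robin order."""
    node = next(backends)
    print(f"request {request_id} -> {node}")
    return node

for request_id in range(7):
    dispatch(request_id)
```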
Further, we block direct access from untrusted or unverified users in order to protect our clustered architecture from unexpected user behavior. With the clustered architecture in place, security measures such as a demilitarized zone (DMZ) and firewalls are also implemented. The feature set for our scalable cluster infrastructure consists of:

Modern cyber threats, such as malware, phishing, cryptojacking, and IoT threats, are more sophisticated and faster than ever.
For teams moving to test-first methodologies, helping developers and testers acquire new technical skills, workflows, and processes is job one. With the mainstreaming of Agile and DevOps, software testing has been turned upside down. No longer a manual, last-stage effort of development, much of testing is now being automated and moved in front of coding. Developers must write code that will pass each test, ensuring that quality is built into products from the beginning. Meanwhile, testing earlier in the process speeds up the release of code for production. Consequently, specified tests can now replace verbose requirements as the primary design document, and that's quite a mind shift for developers and testers alike.

Test-first methodologies are quickly gaining favor with software development teams. 51 percent of organizations have implemented test-first methodologies, including 37 percent in the past year, according to a recent global survey of 200 software testers conducted by our company. The largest share of respondents (48 percent) said that improving software quality was a top benefit of adopting test-first development.

Test-first development practices and processes are still maturing. The practice has been defined across three broad terms: Behavior-Driven Development (BDD), Test-Driven Development (TDD), and Acceptance Test-Driven Development (ATDD). TDD is an umbrella term for the practice, although it has traditionally focused more on technical, unit-level testing. BDD and ATDD incorporate user requirements and business benefits along with technical testing. BDD differentiates itself through its adoption of specific automation frameworks and language for writing and running tests.

When implemented effectively, test-first development creates significant efficiencies in the development cycle and also prevents costly and time-consuming redesign late in the cycle. Another benefit is that automated tests (or their components) can be reused when new code is written. That saves time and helps standardize the testing process. Test-first development also allows for easier separation of features so that teams can improve pieces of the application separately.

Helping teams transition smoothly to a new way of working is the first priority, ahead of purchasing tools. Here's what you need to consider from a people and skills development perspective:

Both developers and testers will need to gain new skills and learn how to work differently. No longer can these groups work in isolation; they need to work together and cross-train in order for test-first development to work. Testers will need more technical skills than ever before, specifically learning basic development techniques to work with new test automation frameworks. Manual testing should be used judiciously, as it can cause delays with the quick code revisions that occur in TDD and BDD environments. Testers will also benefit from high-level knowledge of modern programming languages such as Ruby or Python. The good news is, those languages are easier to learn than Java and .NET, and there are plenty of online and in-person courses to help someone get up to speed. Some test automation platforms are language-specific, but many of the open source ones are language-agnostic, so testers and developers can both contribute in the languages of their choice. Not all testers need to code per se, but understanding how the languages work is tremendously helpful for assisting in test design and debugging.
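To make the test-first workflow concrete, here is a small pytest-style sketch in Python. The discount function and its rules are hypothetical; the point is simply that the tests are written first and the implementation is only added to make them pass.

```python
# A compressed illustration: in a real project the tests live in their own
# file, are written first, and fail until the function below is implemented.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Minimal implementation written only after the tests below existed."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_ten_percent_discount():
    # Written first: it pins down the behaviour the code must satisfy.
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_discount_cannot_exceed_100_percent():
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=150)
```

Run with pytest, the cycle is to watch a new test fail before any implementation exists, then write just enough code to make it pass.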
Developers need to understand testing processes and gain an appreciation for testing overall. While the latter requires education and incentives from senior levels of the organization, the technical piece shouldn't be hard for most developers to pick up. It requires creating the test immediately as part of the design phase, and then writing and running code against that test until it passes. Again, understanding of automated testing platforms is critical. Developers should also integrate monitoring and logging tools into their work to mitigate the inevitable missed bugs that will happen as developers begin writing their first tests. The biggest hurdle for developers is that they are used to working independently and are motivated (especially today) by speed. Helping developers understand the need to balance quality, user experience, and speed is a critical responsibility of organizational leaders when moving to test-first development.

Progressive software development teams are already adopting more collaborative practices, given the move to Agile practices and DevOps processes. Because of the tighter integration between developing and testing in test-first development, team members will need to work together closely and exchange information frequently to keep all the wheels turning. As a result, co-locating teams is ideal. This facilitates real-time collaboration and eliminates the impact of time zone differences. Splitting testers and developers between onshore and offshore locations will make any TDD or BDD effort many times more difficult. Regardless, there are many tools to help teams stay in touch and exchange information no matter where they sit. Using code repository platforms for storing and archiving code is essential for giving teams one version of the truth. Test management platforms enable developers and testers to view, change, and approve tests in popular project lifecycle platforms.

Collaboration will not be easy at first. Developers, testers, and business analysts are not accustomed to working together, and some will invariably oppose this change. Those who can't adapt may not have a future with the organization. Yet with the right leaders in place who can set attainable goals and incentives for change, any team can shift with a little bit of time and patience.

Additional considerations for moving to test-first development include timing and organizational readiness. Given other business, leadership, and operational challenges, can a major change be effectively supported now? Can the company change wholesale, or will you need to operate in a hybrid manner for a period of time, with some groups still developing in traditional waterfall style and others moving to TDD or BDD? Planning ahead of time can help determine the resources you will need to make the transition without disrupting business.

Once a team moves to test-first methodologies and achieves initial success, it's hard to imagine any organization wanting to go back. Test-first development is the wave of the future; it can achieve the optimal balance of user-focused quality, speed, and efficiency that defines competitive advantage today in software-driven businesses.

Kevin Dunne is VP of Business Development & Strategy at QASymphony.
Findings from a new study on Alzheimer’s disease (AD), led by researchers at the University of Saskatchewan (USask), could eventually help clinicians identify people at highest risk for developing the irreversible, progressive brain disorder and pave the way for treatments that slow or prevent its onset. The research, published in the journal Scientific Reports in early January, has demonstrated that a shorter form of the protein peptide believed responsible for causing AD (beta-amyloid 42, or Aβ42) halts the damage-causing mechanism of its longer counterpart. “While Aβ42 disrupts the mechanism that is used by brain cells to learn and form memories, Aβ38 completely inhibits this effect, essentially rescuing the brain cells,” said molecular neurochemist Darrell Mousseau, professor in USask’s Department of Psychiatry and head of the Cell Signalling Laboratory. Previous studies have hinted that Aβ38 might not be as bad as the longer form, said Mousseau, but their research is the first to demonstrate it is actually protective. “If we can specifically take out the Aβ42 and only keep the Aβ38, maybe that will help people live longer or cause the disease to start later, which is what we all want.” Aβ42 is toxic to cells, disrupts communication between cells, and over time accumulates to form deposits called plaques. This combination of factors is believed responsible for causing AD. Experts have long thought that all forms of Aβ peptides cause AD, despite the fact that clinical trials have shown removing these peptides from the brains of patients does not prevent or treat the disease. Mousseau said the idea behind the study was simple enough: If two more amino acids is bad, what about two less? “We just thought: Let’s compare these three peptides, the 40 amino acid one that most people have, the 42 amino acid that we think is involved in Alzheimer’s, and this 38 one, the slightly shorter version,” said Mousseau, who is Saskatchewan Research Chair in Alzheimer disease and related dementias, a position co-funded by the Saskatchewan Health Research Foundation and the Alzheimer Society of Saskatchewan. The project confirmed the protective effects of the shorter protein across a variety of different analyses: in synthetic versions of the protein in test tubes; in human cells; in a worm model widely used for studying aging and neurodegeneration; in tissue preparations used to study membrane properties and memory; and in brain samples from autopsies. In the brain samples, they also found that men with AD who had more Aβ42 and less Aβ38 died at an earlier age. The fact that they didn’t see this same pattern in samples from women suggests the protein peptide behaves differently in men and women. The USask team also included Maa Quartey and Jennifer Nyarko from the Cell Signalling Lab (Department of Psychiatry), Jason Maley at the Saskatchewan Structural Sciences Centre, Carlos Carvalho in the Department of Biology, and Scot Leary in the Department of Biochemistry, Microbiology and Immunology. Joseph Buttigieg at the University of Regina and Matt Parsons at Memorial University of Newfoundland were also part of the research team. While Mousseau wasn’t surprised to see that the shorter form prevents the damage caused by the longer version, he said he was a little taken aback at how significant an effect it had. 
Alzheimer's disease (AD), the most common form of dementia, is a devastating neurodegenerative disorder that affects perhaps 30 million people worldwide, with demographic projections suggesting this will increase substantially in the coming decades. Cerebral neurodegeneration typically takes place first in the hippocampus, a region below the neocortex that is critical for consolidating long-term memories. Neuronal loss spreads to other cortical areas, leading to progressive cognitive decline. By the end stages of the disease, patients lose cognitive function to the point of requiring constant care, often in an institutional setting. Although a number of risk factors have been associated with AD, disease onset correlates best with age, and the large majority of cases occur in the elderly. Among people over age 85, over a third are afflicted.

Two types of protein deposits are found in the AD brain: amyloid plaques and neurofibrillary tangles. The former are extraneuronal and primarily composed of the 4 kDa amyloid β-peptide (Aβ), whereas the latter are intraneuronal filaments of the normally microtubule-associated protein tau. Neuroinflammation is a third pathological feature of AD, in which microglia (phagocytic brain immune cells that release cytokines) become overactivated. The role of each of these features in AD etiology and pathogenesis is not well understood. However, Aβ aggregation, in the form of oligomers, protofibrils, fibrils, and plaques, is generally observed as the earliest pathology, followed by tau tangle formation and neurodegeneration. For this reason and those mentioned in the next section, pathological Aβ is widely considered the initiator of AD, triggering downstream tau pathology and neuroinflammation, and Aβ has been the primary target for the development of AD therapeutics for over 25 years.

Familial AD and Genetics
As mentioned above, the "amyloid hypothesis" of AD pathogenesis has reigned for decades, and AD drug development has largely focused on inhibiting Aβ production, blocking Aβ aggregation, or facilitating Aβ clearance from the brain. The primary basis for this dogma is the discovery in the 1990s of dominant genetic mutations associated with early-onset AD [7,8,9,10]. This familial AD (FAD) has a disease onset before age 60 and can occur even before age 30. Other than the monogenetic cause and mid-life onset, FAD is closely similar to the sporadic AD of old age with respect to pathology, presentation, and progression. The most parsimonious explanation is that similar molecular and cellular events are involved in the pathogenesis and progression of both forms of the disease.

The first genetic mutations associated with FAD were in the amyloid precursor protein (APP). This gene encodes a single-pass membrane protein that is initially cleaved by β-secretase, a membrane-tethered aspartyl protease in the pepsin family, to release the large APP ectodomain (Figure 1). The remnant C-terminal fragment (APP CTF-β) is then proteolyzed within its transmembrane domain (TMD) by γ-secretase to produce Aβ, which is then secreted from the cell. Most Aβ is 40 residues in length (Aβ40), but a small portion is the much more aggregation-prone 42-residue form (Aβ42). Although Aβ42 is a minor Aβ variant produced through APP CTF-β processing by γ-secretase, it is the major form deposited in the characteristic cerebral plaques of AD.
The 27 known APP mutations associated with FAD, each devastating different families, are all missense mutations in and around the small Aβ region of the large APP. These mutations either alter Aβ production or increase the aggregation tendency of the peptide. A double mutation just outside the N-terminus of the Aβ region in APP increases proteolysis by β-secretase, leading to elevated APP CTF-β and therefore elevated Aβ overall. Mutations in the TMD near γ-secretase cleavage sites elevate Aβ42/Aβ40, and mutations within the Aβ region itself make the peptide more prone to aggregation.

FAD mutations were then discovered in presenilin-1 (PSEN1) and presenilin-2 (PSEN2), genes encoding multi-pass membrane proteins that at the time had no known function. These missense mutations (now with over 200 known, all but a dozen or so in PSEN1) are located throughout the sequence of the protein but mostly within its nine TMDs [16,17]. Presenilin FAD mutations were soon found to increase Aβ42/Aβ40 [18,19,20], further strengthening the idea that this ratio is critical to pathogenesis. Moreover, these findings indicated that presenilins can modulate γ-secretase cleavage of APP substrate, as the FAD mutations altered the preference for cleavage sites by the protease. Soon after came the observation that knockout of PSEN1 dramatically reduced Aβ production at the level of γ-secretase, with the remaining Aβ production attributed to PSEN2 (later verified [22,23]). Thus, presenilins are required for γ-secretase processing of APP CTF-β to Aβ peptides.

Presenilin and the γ-Secretase Complex
Meanwhile, the design of substrate-based peptidomimetic inhibitors suggested that γ-secretase is an aspartyl protease [24,25,26]. Peptide analogs, based on the γ-secretase cleavage site in the APP TMD leading to Aβ production and containing difluoroketone or difluoroalcohol moieties (mimetics of the transition state of aspartyl protease catalysis), were effective inhibitors of γ-secretase activity in cell-based assays. Given the requirement of presenilin for γ-secretase activity, the site of proteolysis of APP within its TMD, the multi-TMD nature of presenilin, and the evidence that γ-secretase is an aspartyl protease, the possibility was raised that presenilin could be a novel membrane-embedded protease. Indeed, two conserved TMD aspartates were found in presenilins, and both aspartates were required for γ-secretase activity. Subsequent reports that affinity-labeling reagents based on the transition-state analog inhibitors of γ-secretase bound directly to presenilin cemented the idea that presenilin is an unprecedented aspartyl protease with its active site located within the lipid bilayer [28,29].

Although presenilins appeared to be unusual aspartyl proteases, it was clear that they did not have this activity on their own. Presenilins themselves undergo proteolysis within the large loop between TMD6 and TMD7 to form an N-terminal fragment (NTF) and C-terminal fragment (CTF) (Figure 2). The formation of PSEN NTF and CTF is gated by limiting cellular factors, and these two presenilin subunits assemble into a larger complex [32,33]. Biochemical analysis and genetic screening ultimately identified three other components of what became known as the γ-secretase complex [34,35,36]. These three components, membrane proteins nicastrin, Aph-1, and Pen-2, assemble with presenilin, activating an autoproteolytic function of presenilin to form PSEN NTF and CTF.
[The two essential TMD aspartates of presenilins are also required for PSEN NTF/CTF formation.] This assembly with cleaved presenilin is the active form of γ-secretase. Indeed, the transition-state analog affinity-labeling reagents that tag presenilins specifically bound to PSEN NTF and CTF [28,29], suggesting that the active site of the protease resides at the interface between these two presenilin subunits. This idea is consistent with the observation that one of the essential aspartates is in TMD6 in the PSEN NTF, and the other is in TMD7 in the PSEN CTF.

Soon after the discovery of presenilin as the catalytic component of γ-secretase, analysis of the other proteolytic product of γ-secretase cleavage of APP (AICD) revealed that the APP TMD was proteolyzed at two different sites [37,38,39,40,41]. Cleavage at the second (ε) site releases AICD products composed of residues 49–99 or 50–99 of the 99-residue APP CTF-β substrate for γ-secretase (Figure 3). With secreted Aβ peptides ranging from 38–43 residues (Aβ38–Aβ43), this left 5 to 11 APP TMD residues unaccounted for. Subsequent discovery of Aβ45, Aβ46, Aβ48, and Aβ49, but no N-terminally extended AICD peptides, led to the hypothesis that ε proteolysis occurs first [42,43,44]. The generated Aβ48 and Aβ49 (counterparts of AICD49-99 and AICD50-99, respectively) were postulated to undergo tripeptide trimming along two pathways: Aβ49→Aβ46→Aβ43→Aβ40 and Aβ48→Aβ45→Aβ42→Aβ38 (this last cleavage step generating a tetrapeptide coproduct). Mass spectrometric analysis of the small peptide products supported this notion, as did the finding that synthetic Aβ49 is primarily processed to Aβ40 and Aβ48 is primarily trimmed to Aβ42 by purified γ-secretase. Kinetic analysis of trimming of synthetic Aβ48 and Aβ49 by five different FAD-mutant γ-secretase complexes revealed that all five were dramatically deficient in this carboxypeptidase trimming activity.

Presenilin and the γ-secretase complex have many more substrates besides APP. Indeed, so many substrates have been identified that the γ-secretase complex has been called the proteasome of the membrane, implying that one of its major functions is to clear out membrane protein stubs that remain after ectodomain release by sheddases. While membrane protein clearance may be an important function of γ-secretase, the protease also plays essential roles in certain cell signaling pathways. The most important of these is signaling from the Notch family of receptors. Notch receptors are single-pass membrane proteins like APP, and proteolytic processing of Notch, triggered by interaction with cognate ligands on neighboring cells, leads to release of its Notch intracellular domain (NICD) (Figure 4). The NICD translocates to the nucleus and interacts with specific transcription factors that control the expression of genes involved in cell differentiation. These signaling pathways, particularly from Notch1 receptors, are essential to proper development in all multi-cellular animals. Knockout of presenilin genes in mice is lethal and leads to phenotypes that are virtually identical to those observed upon knockout of the Notch1 gene [50,51], findings that, as explained later, have major implications for the potential of γ-secretase inhibitors as AD therapeutics.

reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7828430/

More information: Maa O. Quartey et al, The Aβ(1–38) peptide is a negative regulator of the Aβ(1–42) peptide implicated in Alzheimer disease progression, Scientific Reports (2021).
DOI: 10.1038/s41598-020-80164-w
Blockchain-Based Data Stores for Healthcare
The healthcare industry provides an excellent opportunity for adopting blockchain technology. Blockchains are designed to maintain the integrity of data. They are immutable: committed data history cannot be altered or deleted; it can only be added to through new updates. For example, using blockchain for patient data, the following information would serve as a digital ledger (the building blocks that help make blockchains immutable) that could not be altered:
- Previous & current medications
- Past evaluations
- Past diagnoses
- Previous treatment records

With a blockchain approach, clinicians are empowered to deliver appropriate care based on a trusted and complete history of a patient's health records, while also increasing patients' engagement with and ownership over their healthcare data. Dovel has developed a blockchain solution that creates this environment of immutable data, in which every change is recorded as a new block rather than overwriting what came before.

Ledgers have been around for a long time and were typically used to record a history of economic and financial activity between two or more parties. Ledgers found in a blockchain usually consist of the following:
- Current and historical state: A data structure that keeps the current and historical state values, allowing applications to easily access the data without needing to traverse the entire transactional log.
- A journal: A transactional log that keeps a complete record of the entire history of data changes. The transactional log is append-only, meaning that each new record is chained to the previous one, allowing you to see the whole lineage of the data's change history.

Additionally, with the help of cryptographic hashing, a process that assigns a unique identifier (like a fingerprint) to each record, blocks are chained to one another. This gives ledgers a timekeeping property, enabling anyone to look back in time and get proof that a data transaction occurred, making auditing simple. Compare this to relational databases, where customers have to engineer an auditing mechanism because the database is not inherently immutable. Such auditing mechanisms built with relational databases can be hard to scale, and they put the onus on the application developer to ensure that all the correct data is being recorded. Auditing is easier with blockchain, since you can easily audit the history of changes to a patient health record, unlike a traditional SQL database, which requires custom coding of some sort to track database changes. A doctor can easily communicate lab results, visit histories, etc., with a patient, and that patient can easily access those records from a centralized place. Everyone is confident that the records are accurate and haven't been tampered with or seen by anyone else.

Centralized Ledgers Enable Compliance
Many organizations are interested in blockchain because they need a transparent, immutable, and cryptographically verifiable ledger. Regulatory compliance is a fact of life for healthcare companies. A blockchain approach enables organizations to easily track the controls in place and understand how they have changed over time. In addition, the ledger component of a blockchain solves the problems around data integrity and audit functionality.

*This image contains features of a centralized ledger.

Blockchain-Based Centralized Ledger
At Dovel, we've utilized Amazon's Quantum Ledger Database (QLDB) to build such a solution.
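Before the QLDB specifics below, the following toy sketch in plain Python illustrates the append-only, hash-chained journal idea described above. The patient fields are hypothetical and this is not the QLDB API; each block stores a record plus the hash of its predecessor, so any later tampering breaks the chain.

```python
import hashlib
import json
import time

def make_block(record: dict, previous_hash: str) -> dict:
    """Build an append-only block: the record plus a hash chaining it to its predecessor."""
    body = {"record": record, "previous_hash": previous_hash, "timestamp": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

journal = []
prev = "0" * 64  # genesis marker

# Hypothetical patient updates; a real system would hold far richer records.
for update in [
    {"patient_id": "p-001", "medication": "drug A", "note": "initial exam"},
    {"patient_id": "p-001", "medication": "drug B", "note": "follow-up"},
]:
    block = make_block(update, prev)
    journal.append(block)
    prev = block["hash"]

# Verification: recompute each hash and compare it with what was stored.
for i, block in enumerate(journal):
    body = {k: block[k] for k in ("record", "previous_hash", "timestamp")}
    recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    print(f"block {i}: valid={recomputed == block['hash']}")
```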
Our blockchain-based centralized ledger makes it easy to understand how application data has changed over time, eliminating the need to construct complicated audit functionality within the application. An Amazon QLDB journal is an immutable log where transactions are appended as blocks of data. These blocks are connected with a hash of data (or an immutable cryptographic signature) for verifiability using cryptography. After a transaction gets written as a block into the journal, it cannot be changed or deleted—it becomes a permanent record. *This image is an example of a QLDB Journal To ensure that data is well-maintained in the journal, it retains two tables known as the “current state” and “history” tables. The current state table contains the latest state of the data. The history table includes the historical changes made to the data. In our demo solution, you can see how to enter patient data collected in an exam and see what happens as you continue to update the record, with new blocks of data added. You can check it out for yourself on our Discover platform.
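To make the journal-and-history model described above concrete, the following is a minimal sketch of how a patient record might be written and its revision history read back using Amazon QLDB's Python driver (pyqldb) and PartiQL. The ledger name, table name, and record fields are hypothetical, and the exact driver API may differ between versions; treat this as an illustration of the idea rather than a definitive implementation.

```python
from pyqldb.driver.qldb_driver import QldbDriver

# Hypothetical ledger and table names, for illustration only.
driver = QldbDriver(ledger_name="patient-records")

def add_visit(txn, record):
    # Each committed statement is appended to the journal as part of a block.
    txn.execute_statement("INSERT INTO PatientRecords ?", record)

def read_history(txn, patient_id):
    # history() exposes every revision of the document, oldest to newest.
    cursor = txn.execute_statement(
        "SELECT * FROM history(PatientRecords) AS h "
        "WHERE h.data.patient_id = ?", patient_id)
    return list(cursor)

driver.execute_lambda(lambda txn: add_visit(txn, {
    "patient_id": "P-1001",
    "diagnosis": "hypertension",
    "medications": ["lisinopril"],
}))
revisions = driver.execute_lambda(lambda txn: read_history(txn, "P-1001"))
print(f"{len(revisions)} revision(s) on record")
```

Because the journal itself is append-only, updating or deleting a row only adds a new revision; the full change history queried above remains available and cryptographically verifiable.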
<urn:uuid:b0b63172-2b59-4799-bdfb-bc8ba950367c>
CC-MAIN-2022-40
https://doveltech.com/innovation/blockchain-based-data-stores-for-healthcare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00089.warc.gz
en
0.930313
877
2.53125
3
I would like to talk to you about DNS Blacklists. Most IT Administrators that deal with mail have heard of this term but some components are still unclear. This post aims to clarify these doubts and is split into 3 sections: 1) Definition of DNS Blacklists 2) How they work 3) Use of DNS Blacklists. I have also added a bonus section at the end, which explains how to test DNS Blacklists. Domain Name System Blacklists (DNSBLs) are spam blocking lists which contain the IP addresses of mail servers that have been reported as sending out spam. These lists are published through the Internet's DNS, so a listed IP address can be queried as if it were a domain name (for example, a reversed IP under a zone such as dnsbl.example.org), which makes the lists easy to read, use, and search. DNS Blacklists may also include a zombie check. These particular DNS Blacklists therefore check the addresses of zombie computers or other computers being used to send spam, listing the addresses of ISPs who willingly host spammers, or addresses which have sent spam to a honeypot system. Many anti-spam software programs such as GFI MailEssentials and Ninja Blade use these lists to control spam by blocking any email that originates from one of these listed addresses. These lists are developed and maintained by organisations such as SORBS and SpamHaus. The three basic components that make up a DNS Blacklist are the following: - A domain to host it under. - A name server to host that domain. - A list of addresses to publish the list. The following four steps explain what is done when a mail server checks an email sender against a DNS Blacklist: - The receiving mail server takes the sender's mail server IP address, say, 192.0.2.55, and reverses the order of the octets, yielding 55.2.0.192 - It appends the DNSBL's domain name: 55.2.0.192.dnsbl.example.org - It looks up this name in the DNS as a domain name ("A" record). This will return either an IP address, indicating that the sender is listed; or an "NXDOMAIN" ("No such domain") code, indicating that the sender is not. - If the sender is blacklisted, what is done with the email then depends on what actions you configure on your mail server or anti-spam software. DNSBLs are used by spam blocking software like GFI MailEssentials where different blacklists are given point scores by SpamRazer, GFI MailEssentials' main anti-spam engine, which can be mitigated by white rules to reduce false positives. They can also be used by mail servers like Exchange and Postfix to outright block email if the sender's IP address or host name is listed in a DNSBL. Some DNSBLs in anti-spam software also hold a cache of the requests that have been done in memory. All requests are retained in the cache for X days. This will result in faster responses for the items which are found in the cache, since DNS requests may be time consuming. The side effect of this is that the DNSBL feature may report that an IP address is on the DNSBL site, when in reality it has been removed. Finally, I would like to give you a few tips on how to test DNS Blacklists using Nslookup. - Open Command Prompt - Type 'nslookup' without the "'" and press Enter - By default the query type is for A records. You can specify other query types, for example to request TXT records, use 'set type=txt' or 'set q=txt' - Type the domain that you would like to query (e.g. sorbs.net or bl.spamcop.net ) - When the domain does not exist, you will get a "Non-existent domain" in the response. - When the domain exists, the way the result is displayed will depend on the type of DNS record requested.
- For A records, one or more IP addresses may be returned.
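To make the four lookup steps above concrete, here is a minimal sketch in Python using the dnspython library (assuming dnspython 2.x). The DNSBL zone name and the example IP address are placeholders, not an endorsement of any particular list.

```python
import dns.resolver

def check_dnsbl(ip: str, zone: str = "dnsbl.example.org") -> list[str]:
    """Return the DNSBL answer codes for ip, or an empty list if not listed."""
    # Reverse the octets (192.0.2.55 -> 55.2.0.192) and append the DNSBL zone.
    query_name = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        answers = dns.resolver.resolve(query_name, "A")
        return [rr.to_text() for rr in answers]   # listed: e.g. ["127.0.0.2"]
    except dns.resolver.NXDOMAIN:
        return []                                  # NXDOMAIN: not listed
    except dns.resolver.NoAnswer:
        return []                                  # name exists but has no A record

if __name__ == "__main__":
    codes = check_dnsbl("192.0.2.55")
    print("listed" if codes else "not listed", codes)
```

A real mail filter would query several lists, honour each list's documented return codes, and cache results only for a limited time to avoid the stale-listing problem described above.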
<urn:uuid:28cd7620-1348-40a0-a8e5-4cc0336c2b69>
CC-MAIN-2022-40
https://techtalk.gfi.com/dns-blacklists-work/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00089.warc.gz
en
0.930749
899
3.15625
3
What Is a Wireless Network or Wi-Fi? A wireless network refers to a computer network that makes use of Radio Frequency (RF) connections between nodes in the network. Wireless networks are a popular solution for homes, businesses, and telecommunications networks. It is common for people to wonder "what is a wireless network" because while they exist nearly everywhere people live and work, how they work is often a mystery. Similarly, people often assume that all wireless is Wi-Fi, and many would be surprised to discover that the two are not synonymous. Both use RF, but there are many different types of wireless networks across a range of technologies (Bluetooth, ZigBee, LTE, 5G), while Wi-Fi is specific to the wireless protocol defined by the Institute of Electrical and Electronics Engineers (IEEE) in the 802.11 specification and its amendments. Wired vs. Wireless Network: What Is the Difference? Most obviously, a wireless network keeps devices connected to a network while still allowing them the freedom to move about, unencumbered by wires. A wired network, on the other hand, makes use of cables that connect devices to the network. These devices are often desktop or laptop computers but can also include scanners and point-of-sale machines. There are more subtle technology differences that come into play between wired and wireless. Most modern wired networks are now "full duplex", meaning that they can transmit and receive packets in both directions simultaneously. In addition, most wired networks have a dedicated cable that runs to each end user device. In a Wi-Fi network, the medium (the radio frequency being used for the network) is a shared resource, not just for the users of the network, but often for other technologies as well (Wi-Fi operates in what are called 'shared' bands, where many different electronic devices are approved to operate). This has several implications: 1) unlike a wired network, a wireless device cannot talk and listen at the same time, so the link is "half duplex"; 2) all users sharing the same space must take turns to talk; 3) everyone can 'hear' all of the traffic going on. This has forced Wi-Fi networks to implement various security measures over the years to protect the confidentiality of information passed wirelessly. Types of Wireless Connections In addition to a LAN, there are a few other types of common wireless networks: personal-area network (PAN), metropolitan-area network (MAN), and wide-area network (WAN). A local-area network is a computer network that exists at a single site, such as an office building. It can be used to connect a variety of components, such as computers, printers, and data storage devices. LANs consist of components like switches, access points, routers, firewalls, and Ethernet cables to tie it all together. Wi-Fi is the most commonly known wireless LAN. A personal-area network is a network centralized around the devices of a single person in a single location. A PAN could have computers, phones, video game consoles, or other peripheral devices. They are common inside homes and small office buildings. Bluetooth is the most commonly known wireless PAN. A metropolitan-area network is a computer network that spans a city, a small geographical area, or a business or college campus. One feature that differentiates a MAN from a LAN is its size. A LAN usually consists of a solitary building or area. A MAN can cover several square miles, depending on the needs of the organization.
Large companies, for example, may use a MAN if they have a spacious campus and need to manage key components, such as HVAC and electrical systems. A wide-area network covers a very large area, like an entire city, state, or country. In fact, the internet is a WAN. Like the internet, a WAN can contain smaller networks, including LANs or MANs. Cellular services are the most commonly known wireless WANs. The Components of a Wireless Network Several components make up a wireless network's topology: - Clients: What we tend to think of as the end user devices are typically called 'clients'. As the reach of Wi-Fi has expanded, a variety of devices may be using Wi-Fi to connect to the network, including phones, tablets, laptops, desktops, and more. This gives users the ability to move about the area without sacrificing their bridge to the network. In some instances, mobility within an office, warehouse, or other work area is necessary. For example, if employees have to use scanners to register packages due to be shipped, a wireless network provides the flexibility they need to freely move about the warehouse. - Access Point (AP): An access point (AP) is a device with a Wi-Fi radio that advertises a network name (known as a Service Set Identifier, or SSID). Users who connect to this network will typically find their traffic bridged to a wired local-area network (LAN), such as Ethernet, for communication with the larger network or even the internet. How Does Wi-Fi Work? A Wi-Fi based wireless network sends signals using radio waves (cellular phones and radios also transmit over radio waves, but at different frequencies and with different modulation). In a typical Wi-Fi network, the AP (Access Point) will advertise the specific network that it offers connectivity to. This is called a Service Set Identifier (SSID), and it is what users see when they look at the list of available networks on their phones or laptops. The AP advertises this by way of transmissions called beacons. The beacon can be thought of as an announcement saying "Hello, I have a network here; if it's the network you're looking for, you can join". A client device receives the beacon transmitted by the AP and converts the RF signal into digital data, then that data is passed along to the device for interpretation. If the user wants to connect to the network, the client sends messages to the AP asking to join and (when security is enabled) provides the proper credentials to prove it has the right to join. These processes are known as Association & Authentication. If either of these fails, the device will not successfully join the network and will be unable to further communicate with the AP. Assuming all goes well, we come to the part that is the end user's ultimate goal: passing data. Data from the client (or from the AP to the client) is converted from digital data into an RF modulated signal and transmitted over the air. When received, this is de-modulated, converted back to digital data, and then forwarded along to its destination (often the internet or a resource on the larger internal network). Wi-Fi communication is only approved to transmit on specific frequencies; in most parts of the world these are the 2.4 GHz and 5 GHz frequency bands, although many countries are now adding 6 GHz frequencies as well. These frequency bands are not the same ones that cellular networks use, so cell phones and Wi-Fi are not in competition for use of the same frequencies. However, that does not mean that there are not other technologies that can operate in these bands.
In the 2.4GHz band in particular there are many products, including Bluetooth, ZigBee, cordless keyboards, and A/V equipment just to name a small subset that does use the same frequencies and can cause interference. Multiple Devices on One AP If several Wi-Fi devices all want to connect to a network, they can all use the same AP. This offers a convenient solution, making Wi-Fi extensible into environments where coverage for many users is needed. Problems arise, however, if too many people need access at the same time, all needing high levels of bandwidth. For example, if several users are watching high-definition video at the same time, they may experience drops in performance because congestion at RF layer makes it difficult to impossible for the AP to pass all the necessary packets in a timely manner. Wi-Fi Network Standards The networking standard used by wireless architecture is IEEE 802.11. However, this standard is in continual development and new amendments come out regularly. Amendments to the standard are assigned letters, and while many amendments have been released, the most commonly known are: This original amendment added support for the 5 GHz band, allowing transmission up to 54 megabits of data per second. The 802.11a standard makes use of orthogonal frequency-division multiplexing (OFDM). It splits the radio signal into sub-signals before they get to a receiver. 802.11a is an older standard and has been largely replaced by newer technology. 802.11b added faster rates in the 2.4GHz band to the original standard. It can pass up to 11 megabits of data in a second. It uses complementary code keying (CCK) modulation to achieve better speeds. 802.11b is an older standard and has been largely replaced by newer technology. 802.11g standardized the use of OFDM technology used in 802.11a in the 2.4GHz band. It was backwards compatible with both 802.11 and 802.11b. 802.11g is an older standard and has been largely replaced by newer technology. Once the most popular standard 802.11n was the first time a unified specification covered both the 2.4GHz and 5GHz bands. This protocol offers better speed when compared to those that came before it by leveraging the idea of transmitting using multiple antennas simultaneously (usually called Multiple In Multiple Out or MIMO technology). 802.11n is an older standard, but some older devices may still be found in use. 802.11ac was only specified for the 5GHz band. It built upon the mechanisms introduced in 802.11n. While not as revolutionary as 802.11n was, it still extended speeds and capabilities in the 5GHz band. Most devices currently out in the wild are likely 802.11ac devices. 802.11ac technology was released in two main groups, usually called ‘waves’. The primary difference is that Wave 2 devices have a few more technical capabilities when compared to Wave 1, but it is all interoperable. 802.11ax (Wi-Fi 6) 802.11ax (much like 802.11n) unified the specification across all applicable frequency bands. In the name of simplicity, the industry has started to refer to it as Wi-Fi 6. Wi-Fi 6 has expanded the technologies used for modulation to include OFDMA, which allows a certain amount of parallelism to the transmission of packets within the system, making more efficient use of the available spectrum and improving the overall network throughput. Wi-Fi 6 is the latest technology and is what most new devices are shipping with. 
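As a rough reference tying together the bands and amendments discussed above, the Python sketch below converts channel numbers to center frequencies and lists the amendments with their bands. The channel-to-frequency formulas are standard, but the peak rates shown for 802.11n/ac/ax are commonly cited theoretical maxima added here as assumptions; only the 54 Mbps and 11 Mbps figures come from the text above.

```python
# Rough reference sketch. Peak rates for 802.11n/ac/ax are assumed,
# commonly cited theoretical maxima, not figures from this article.
AMENDMENTS = {
    "802.11a":  {"bands_ghz": (5,),        "max_rate_mbps": 54},
    "802.11b":  {"bands_ghz": (2.4,),      "max_rate_mbps": 11},
    "802.11g":  {"bands_ghz": (2.4,),      "max_rate_mbps": 54},
    "802.11n":  {"bands_ghz": (2.4, 5),    "max_rate_mbps": 600},    # assumed peak
    "802.11ac": {"bands_ghz": (5,),        "max_rate_mbps": 6933},   # assumed peak (Wave 2)
    "802.11ax": {"bands_ghz": (2.4, 5, 6), "max_rate_mbps": 9608},   # assumed peak
}

def channel_center_mhz(channel: int, band_ghz: float) -> int:
    """Return the center frequency in MHz for a 2.4 GHz or 5 GHz channel."""
    if band_ghz == 2.4:
        # Channels 1-13 are spaced 5 MHz apart starting at 2412 MHz;
        # channel 14 (Japan only) sits at 2484 MHz.
        return 2484 if channel == 14 else 2407 + 5 * channel
    if band_ghz == 5:
        # 5 GHz channel centers follow 5000 MHz + 5 MHz * channel number.
        return 5000 + 5 * channel
    raise ValueError("band must be 2.4 or 5")

print(channel_center_mhz(6, 2.4))   # 2437
print(channel_center_mhz(36, 5))    # 5180
```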
Other 802.11 Standards There are many more amendments that have been made to the standards over the years (most letters of the alphabet have been used over time). Additional 802.11 standards have focused on things like better security, increased Quality of Service, as well as many other enhancements. Wi-Fi Connection Modes There are multiple Wi-Fi network connection styles, the most prevalent are: infrastructure, ad hoc, and Wi-Fi Direct. Infrastructure mode is the most common style of Wi-Fi, and it is the one people think of when they connect at home or the office. With infrastructure mode, you need an access point that serves as the primary connection device for clients. All other clients in the network (computer, printer, mobile phone, tablet, or other device) connect to an access point to gain access to a wider network. Ad hoc mode is also referred to as peer-to-peer mode because it does not involve an access point, but is instead made up of multiple client devices. The devices, acting as “peers” within the network, connect to each other directly. Wi-Fi Direct is a form of Ad Hoc, but with some additional features and capabilities. Wireless connectivity is provided to compatible devices that need to connect without the use of an access point. Televisions are frequently Wi-Fi Direct compatible, allowing users to send music or images straight from a mobile device to their TV. The term “Wi-Fi hotspot” usually refers to wireless networks placed in public areas, like coffee shops, to allow people to connect to the internet without having to have special credentials. While some are free, others require a fee, particularly those administered by companies that specialize in the provision of hotspots in places like airports or bus terminals. Many cell phones are hotspot-enabled, and users can turn on the feature by contacting their cell service provider. With a hotspot turned on, the user can share their internet connection with someone else, providing them with a password for more secure access. How Fortinet Can Help? Wi-Fi management and security helps prevent unwanted users and data from harming devices connected to your network. With the Fortinet Wireless Access Points, you get a full view of the network and devices that are accessing it. It integrates with the Fortinet Security Fabric, allows for cloud access point management, and comes with a dedicated controller. The FortiGate Integrated Wireless Management system gives you an enhanced security solution that incorporates fewer components, making it a simpler solution. As a next-generation firewall (NGFW), FortiGate provides full network visibility while automating protective measures and detecting and stopping more threats.
<urn:uuid:1fd02b42-7cf2-4376-86e7-1f2a8935f652>
CC-MAIN-2022-40
https://www.fortinet.com/de/resources/cyberglossary/wireless-network
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00089.warc.gz
en
0.950372
2,860
3.3125
3
Global natural disasters have increased more than ten times from 1960 to 2019, quantifying a trend many people have observed. During this period, the number of natural hazards exploded from 39 to 396 events per year. In addition to the tragic loss of human life, these events carry billion-dollar economic losses that local and federal governments must cover. As a result, regions feel the effects of even one catastrophic climate event long after it ends. Extreme weather events are the primary driver of the surge in natural disasters. Two other global macro trends that raise the frequency of extreme weather impacting humans are urbanization and living near the coast. As humans continue to live in larger groups, increasing energy usage and human-caused temperature increase aggressively with population growth. These shifts also lead to new natural disaster trends, to which a rapid response is essential. Before defining how to react to a natural disaster effectively, reviewing the various types of high-impact weather events is beneficial. It is also valuable to describe regions where every kind of event occurs, and how each affects the people and society who reside there. What Are Natural Disasters? The term “natural disaster” refers simply to severe weather and natural events above historical norms. These events include single and multi-hazards across the temperature and precipitation spectrum. Natural disasters include wildfires and drought, severe thunderstorms, hail and tornadoes, tropical cyclones, floods and landslides, winter storms, and hurricanes. Wildfires and Drought The increased frequency of wildfires indicates climate change. In the United States, the southwest and western areas are most susceptible to these incidents. This is due to high temperatures and lower relative precipitation amounts compared to other regions. The lowest average rainfall occurs in Nevada, with states like Arizona, New Mexico, Colorado, and California present among the driest 15 states, which all reside west of the Mississippi River. These states also have some of the highest average temperatures, paced by Death Valley, CA, creating the ideal scenario for fires to rage. Severe Thunderstorms, Hail, Hurricanes, and Tornadoes While dry, hot climates promote wildfire development, collisions between dry, cold air masses and warm, moist ones elevate atmospheric energy and create severe thunderstorms. These phenomena are also most common in North America, with the US and Canada experiencing the most and next-most of these hazards. Cool, dry Canadian air comes in contact with warm, moist air from the Gulf of Mexico over the Central Plains. These do not have topological features (like mountains) to counteract rapid air movements. When the warm air updraft and cold air downdraft air masses rotate when coming together, it is called a supercell thunderstorm. These events lead to lightning, hail, strong winds, and flash flooding. Wind shear occurs if these air currents move at different speeds or directions at various distances from the ground. If the wind shear is strong enough to reach the ground, the spinning air near the surface causes a tornado. Hurricanes are collections of these thunderstorms stemming from warm ocean water with low-altitude wind shear. One hurricane can contain multiple tornadoes and has wind energy on par with the total capacity of global electricity generation. Floods and Landslides Floods can result from prolonged periods of extreme rainfall or rapid melting of ice and snow. 
In addition, climate change is worsening the frequency and intensity of floods through rising global temperatures. Higher temperatures increase the rate of evaporation and cloud formation and enable air to hold more moisture. These rising temperatures also accelerate the polar ice cap melting, adding to ocean water levels and further fueling extreme weather. Because humidity and proximity to moisture are critical to flooding, areas along the coast have a higher risk of flooding. The Asia-Pacific region is closest to the ocean with a high population density. It is logical that the region has the most population exposed to flooding risks. In addition, mountains could curb tornado formation. They also promote landslides in areas of steep terrain like the mountains of Asia, Central, and South America. While a warming world may seem to decrease winter natural disasters, warming air still increases the atmospheric energy able to produce heavy snow. In addition, warming Arctic air can destabilize the polar vortex due to sudden stratospheric warming, driving frigid polar air south. This trend is especially pervasive in the US. Natural Disaster Impact Extreme weather like those described above has wide-reaching impacts. For example, global climate-related deaths averaged over 500,0000 in the 1920s. These were between 40,000-80,000 deaths per year in the decade ending in 2019. Still, tens of thousands lose their lives yearly, and cleanup costs billions. For example, in the US, natural disaster cleanup cost the government $152.6 billion in 2021, representing 0.66% of its entire 2021 GDP. Costs include damage to public, residential, and commercial buildings, business interruptions, infrastructure damage, agricultural impacts, and mental health support. Loss of life and physical assets are apparent. Two additional follow-on effects of natural disasters are population displacement and economic activity disruption. For example, many people affected by a weather event leave their homes for a period, some permanently. And infrastructure damage, power loss, and business building destruction can disrupt economic activity, leading to a local recession. All of these events challenge utility access and public safety and security. The Role of Rugged Devices in Natural Disaster Trends By design, rugged devices can withstand harsh climates and extreme temperatures. They are durable, long-living, and portable. This equipment is compatible with major operating systems, putting the power of 5G at the center of a disaster zone – instantly. In addition, rugged devices are cost-effective, offering a low entry barrier to developing regions worldwide. A prime example of this type of device is the ATT FirstNet certified rugged computer. This is one of the most comprehensive technologies in the rugged industry. Implementing the FirstNet certification network offers first responders guaranteed priority and preemption when they’re needed. Rugged tablets like the Getac A140 benefit first responders by providing real-time data, saving precious seconds. They can track resources, coordinate response efforts, and provide connection during a climate event’s critical first moments. Additional rugged tools vital during natural disaster response include the B360 rugged computer and the X500 & X600 rugged command center servers. In addition, these devices deliver performance and mounting flexibility to put the internet where responders need it during a rescue. 
For example, first responders who know the location of supplies or reinforcements can influence their work in the event of a disaster. Or, handing the response team a satellite picture of a potential burn zone may impact how they protect the land during a wildfire. First responders who can alert backup response personnel and send preliminary images are some of the ways rugged devices can be of help. Recent natural disaster trends extremely affect developing countries and are often erratic. While the world works on ways to prevent and mitigate disaster losses, the number of deaths is sharply decreasing despite the events’ growing frequency and severity. Rugged devices are critical tools that first responders use to provide instant information and communication when addressing natural disaster trends. Furthermore, data collection and analysis advancements enable public safety officials to begin to predict natural disasters before they occur. Technologies such as seismograph readings and warning systems alert them to the potential of an earthquake. Visit Getac Select to learn more about rugged devices for natural disaster trends response. There is also additional information for convenient pre-configured devices, simplified procurement, and peace of mind. Partnering with the user team makes it easy for first responders to add the transformational features of rugged devices to their emergency response effort.
<urn:uuid:10fa26f6-4670-4b31-828d-56cff6846333>
CC-MAIN-2022-40
https://www.getac.com/intl/blog/natural-disaster-trends-response/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00089.warc.gz
en
0.925974
1,583
4.15625
4
Tuesday, September 27, 2022 Published 7 Months Ago on Saturday, Feb 26 2022 By Karim Husami Many countries are setting their own targets to create a carbon pollution-free power sector, as in decarbonization, an essential element in countries’ goals to reduce emissions from 50 to 52 percent by 2030 to achieve net-zero emissions by 2050. In accordance, McKinsey’s research found that “early decarbonization of the power sector is a critical factor across many of the pathways to a decarbonized economy. Renewable technologies, such as solar and wind energies, are already deemed cost-competitive with coal and gas across most US markets. In addition, decarbonizing electricity is essential to enabling decarbonization in other sectors, such as transportation (electric vehicles) and buildings (electric heating).” One of the reports published by the World Bank specifies steps to help put countries on a pathway to decarbonizing their development smoothly and systematically. The reports acknowledge that solutions are affordable, but only if governments take action today. However, the cost will rise as long as the action is delayed. In this case, the successive generation will be the party forced to deal with the detrimental effect of previous governmental disregard to the issue. Meanwhile, the data provided from the latest Intergovernmental Panel on Climate Change report highlights that delaying actions 15 more years until 2030 will increase costs by an average of 50 percent through 2050 to keep temperatures from rising less than 2°C. Decarbonization meaning lies in the reduction of carbon, which signifies a decarbonized economy. In other words, to transfer the economy into a sustainable one, countries from all around the world will have to direct their energy strategies to focus on sustainably reducing and neutralizing the emissions of carbon dioxide (CO₂). World Bank Group Vice President and Special Envoy for Climate Change Rachel Kyte said that “choices made today can lock in emissions trajectories for years to come and leave communities vulnerable to climate impacts.” “To reach zero net emissions before the end of this century, the global economy needs to be overhauled. We at the World Bank Group are increasing our focus on the policy options,” she noted. Governments can start altering investments and mindsets toward low-carbon growth by regulating prices through a policy package that provides reasons to guarantee low-carbon growth plans are implemented and projects financed. A price can be put on carbon through a carbon tax, or a cap-and-trade system addresses a market failure to combine the cost of the damage caused by greenhouse gas emissions. It is perceived as an efficient way to raise revenue while encouraging lower emissions, as well as intensifying difficulties in evading other taxes. Although those steps are essential and carbon pricing is necessary, it does not make any difference on its own without the proper implementation of paired policies. On another note, the automotive industry will comprise 66 percent of emissions that come out of their material production at no extra cost by 2030—if industry participants work together and start now. In parallel, the automotive sector is crucial to reaching net-zero global emissions by 2050. Several original equipment manufacturers (OEMs) are setting extreme decarbonization targets to meet this challenge. Car decarbonization and Industrial decarbonization can be achieved, and the target is not far-fetched. 
Therefore, since tailpipe emissions and the indirect emissions from the fuel supply account for roughly 65 to 80 percent of an automobile's emissions, the industry has focused on electrifying powertrains. However, players must turn their attention to material emissions as well to reach the full potential of automotive decarbonization and achieve a zero-carbon car industry.
<urn:uuid:12000ecd-9e93-456b-9f00-8ba02268f84a>
CC-MAIN-2022-40
https://insidetelecom.com/how-to-achieve-decarbonization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00089.warc.gz
en
0.937381
871
3.4375
3
The world is becoming bandwidth-hungry at an exponential pace – be it the bandwidth consumption at homes, driven by new entertainment forms, smart home configurations, COVID-19-induced home office setups or in the enterprise space, driven by the proliferation of IOT and surge of Industry 4.0. This is forcing countries across the globe to accelerate the shift towards fiber networks and 5G. Upgrading existing networks and planning the “Networks of the Future” that are flexible, scalable, reliable, and sustainable have become key priorities for Telecom companies across the world. But what happens to the older equipment, once newer networks start taking over? As networks have become more complex, planning newer network deployments, decommissioning, and migration have become the need of the hour. So, what is Network Decommissioning? Network Decommissioning is the process of shutting down and removal of old and technologically obsolete networks, including all the network equipment, cables, switches, POTS lines, etc. his is done across both wireline (telephone networks, cable television or Internet access, fiber optic communication, etc.) as well as wireless forms of networks (cell phone networks, wireless local area networks (WLANs), wireless sensor networks, satellite communication networks, etc.). a. Decommissioning Wireline Networks Wireline networks transmit information from source to the destination via filaments. As an alternative, fiber networks are gaining popularity as mediums of transmission. The data delivery is more seamless in the case of fiber networks – both in terms of quantity and the distance of transmission, as compared to older copper networks. Additionally, these copper networks are difficult to maintain, and the continuous support to sustain these aging networks is hard and expensive over time. Thus, the uptake of fiber is leading to copper-switch offs and provides a significant opportunity for decommissioning. This is being adopted by Telecom operators on a global scale. b. Decommissioning Wireless Networks The evolution of wireless networks from 2G, 3G, 4G to 5G makes it all the more essential to replace the voice networks to keep up with the rising bandwidth demand. There are networks that have been in existence for over a decade and are now obsolete as their electronic modes fail to support the scale. Older wireless networks like 2G and 3G are now having to adopt newer nodes belonging to 3G and 4G networks to keep up with the demand for faster networks with lower latency at affordable costs. Moreover, with sustainability dominating conversations across boardrooms, parliaments, and conferences, it becomes crucial to dismiss legacy equipment and make the switch to newer, greener technology. How To Plan Network Decommissioning The process of network decommissioning begins with an in-depth analysis of the existing network. Subsequently, a comprehensive execution plan is developed. But before performing cancellations, pre-decommissioning checks and configurations are carried out. 
This process includes steps such as: - Identifying the servers for decommissioning - Performing an audit led by the Service Provider to identify the existing set of regulations - Listing all the acquired software licenses - Cancelling any existing maintenance contracts, prior to indulging in decommissioning activities - Backing up all available data - Disconnecting the existing server from the networks - Ensuring software-based data deletions - Creating new data files post the process - Retiring the old servers completely Once the network has been successfully decommissioned, Telecom operators need to upscale the network design to enhance performance and meet infrastructure needs. This is taken care of in the next step, which is Network Migration. Network Migration is the process of moving/transferring the data and applications from a legacy network to an upgraded network system. Network Migration becomes imperative because: - New cabling options need to be looked into for an optimized cable architecture - High speed bandwidth requirements need to be supported through various cable options like fiber optic and twisted copper - Chances of possible connection options for an existing node need to be increased Why Decommission and Migrate? The rate of growth of the network traffic has been ranging between 50%-150% across the globe. Thus, to support the massive current and future demand, technical upgrade of the existing networks is not only vital but has become imperative. Clients today, are enthusiastic and eager to complete their Decommissioning and Migration journeys and optimize their current networks. Also, it is safe to say that the cost incurred by Decommissioning of legacy equipment is not too high when weighed against the profits that will be achieved because of the transition, making it both profitable and sustainable in the long run. In addition, market research shows that the fiber optic networks occupy 15% of the space captured by their copper counterparts. This results in minimizing the usage of technology access equipment. Based on conversations with clients on Network Decommissioning, and our analysis, we know that the copper switch-off can save anywhere between 45-65% of the overall energy cost. Some added benefits would be optimal utilization of energy and space. Telecom operators have played an indispensable role in decommissioning the older networks, both from wired and wireless perspectives, and helping build a green, sustainable, future-proof network. Global support is pouring in for this green network initiative, which ensures optimized balance of hardware, software, and reduced energy usage. The green network standards make it a mandate to reduce carbon footprints and balance the future networks on an even ratio of energy efficiency vs. energy consumption. It is crucial that organizations constantly remain at the top of their tech game and leverage the latest changes in the industry. To achieve this in the case of Network Decommissioning, telecom operators, equipment manufacturers, and System Integrators (SI) need to collaborate and work towards building and scaling up the network of the future. And this will define the next era of growth for the Telecom industry. Download the full whitepaper here: This blog was originally posted on Zinnov’s website. Authors: Abhinaba Chatterjee, Senior Manager – Offerings, Cyient; Vijaykumar Hegde, Principal, Zinnov; Manali De Bhaumik, Engagement Manager, Zinnov
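To put the cited savings range in perspective, here is a small back-of-the-envelope calculation in Python. The annual energy cost is an invented figure used purely for illustration; only the 45-65% savings range comes from the analysis above, and the output is simple arithmetic on those assumptions.

```python
# Illustrative only: the annual_energy_cost figure is invented for this example.
annual_energy_cost = 10_000_000          # assumed yearly energy spend on the legacy copper estate
savings_low, savings_high = 0.45, 0.65   # copper switch-off savings range cited above

low = annual_energy_cost * savings_low
high = annual_energy_cost * savings_high
print(f"Estimated annual savings: {low:,.0f} to {high:,.0f}")
# Estimated annual savings: 4,500,000 to 6,500,000
```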
<urn:uuid:d8d91b07-7559-4679-9da2-d1753ca72678>
CC-MAIN-2022-40
https://www.dailyhostnews.com/all-you-need-to-know-about-network-decommissioning
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00089.warc.gz
en
0.932984
1,300
3.125
3
With the onslaught of COVID-19—a pandemic unlike any other the world has seen since the Spanish Flu of 1918—everyone’s lives have changed drastically. This change also happened very quickly so, without much time to mull over educational details, all learning was forced online as everyone had to self-isolate in their homes, effective immediately. The traditional classroom setting, sadly, has become a thing of the past out of sheer necessity. So, what does this mean for education? Well, everything is now online and electronic learning (e-learning) is the new normal (at least for now and the foreseeable future). The Rise of E-Learning Many faculty members are unfamiliar with online teaching practices (even though it has become popular in certain circles) and feel as if their jobs will be threatened by this new way of teaching. Of course, there’s no choice in the matter since the spread of the coronavirus must be combatted but that doesn’t mean that a new way of learning and living isn’t taking its toll on teachers as well as students. Of course, cybersecurity teachers and students are more computer savvy than most and will likely adapt more readily to this shift. Still, the allure of the traditional classroom is sorely missed by many. Sitting amongst one’s peers in front of a blackboard with a teacher lecturing front and center is what we all imagine when we think of an educational setting. Now, sitting in front of a screen is the new standard. According to Times Higher Education, Universities have quickly adapted and gone to great lengths to embrace this new kind of e-learning. For instance: “those who had experience with online teaching and tools were proactive in helping colleagues adapt their courses. While some are no doubt concerned about being able to achieve their intended learning outcomes, they are also excited about the technical barriers they have overcome and all that they have learned.” By learning to creatively use technology in the online classroom, certain teachers will be forever changed for the better. With positive reinforcement, a well-scaffolded lesson plan, and a little patience, an online learning experience can be an incredibly positive one. E-learning can actually promote more interactivity among students and provide opportunities that may not have been possible before. Education and Cybersecurity This is an incredibly exciting time for students learning cybersecurity and for that adept at information technology with advanced computer skills. Because e-learning has quickly taken over in institutes of higher education and their focus will need to “shift from basic training on tools to more advanced training incorporating course design and assessment of learning,” students with advanced knowledge of computers will find this incredibly stimulating and rewarding. With the improved use of online tools, students, as well as faculty, will be forced to learn how to solve technology-related teaching challenges. This should be exciting for anyone in the sphere of IT and cybersecurity. As Cybint is a cyber education company that offers programs such as Cybersquad for K12 along with bootcamps, they are burgeoning with information on how to make the field of cybersecurity accessible to everyone, especially now with online training.
<urn:uuid:1e699508-b315-4559-a56e-41a27f523e67>
CC-MAIN-2022-40
https://www.cybintsolutions.com/e-learning-how-the-coronavirus-has-changed-education-forever/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00089.warc.gz
en
0.974859
646
3
3
Estimated Read Time: Six Minutes Prevent Phishing By Enhancing Your Email Security For many years, email has been used as a popular communication tool, with little understanding of its IT security pitfalls. When we use email, we trust that the person we are corresponding with is who they say they are, with no mechanisms in place to ensure that is actually the case. Along with its widespread use, email has also been used for widespread abuse—including using it for phishing. What is Phishing? Here are Some Examples: Phishing (pronounced fishing) is a term that describes an attempt to trick unsuspecting email recipients into taking an action such as opening an attachment, clicking on a link, or redirecting them to an infected website that results in a malicious outcome. The sender of the emails entice the recipients to click through by ‘baiting’ them. For instance, the email could contain a fictitious statement telling the recipient ‘they have received a parking ticket, and should click here to see a photo of your vehicle parked illegally’. These types of emails are sent to a wide audience of random people, in hopes that some of them will take the bait and be caught—trusting the fraudulent sender with sensitive information. Spear-phishing is a more targeted form of phishing, where fraudulent emails are sent to very specific people in specific roles to try to get them to take the bait. For instance, sending a bogus link for banking information to an Accounts Payable Officer at a specific company in the hopes that they will click on it would be considered spear-phishing. These phishing attempts take a little more effort to set up, but can be more dangerous since the email seems relevant and personalized, making the recipient more likely to trust it and take the bait. Whaling is another targeted approach of phishing, but their mark is the biggest phish of all. Whaling emails are sent to senior executives such as CEOs and Directors. Generally, the emails contain engaging content for those recipients, such as large financial transactions, customer complaints, or fake articles from news media. Is Email Security Achievable? It is generally accepted by reasonable people that if an email is received from firstname.lastname@example.org, the email is bogus. Those same reasonable people, however, are much more likely to accept and open an email that appears to be coming from an email address from a government, bank, e-commerce company, or utility company, just because the email looks authentic. Email was never designed as a secure system, and up until recently, any form of IT security in email only protected us from malicious software or viruses. Also, an email could be blocked if its network address or specific domain was identified as suspicious by using blocklists or reputation-based lists to examine mail. These list-based methods rely on the lists being updated in order to be effective, and only block mail that is sent to a large number of recipients. Individually, users typically validate email they receive by examining the to and from addresses, and if it looks right, they will open an email and any attachments—trusting that somehow, they are protected from viruses or malicious emails. Phishing, spear-phishing, and whaling have now become mainstream. The common denominator in all of these types of fraudulent emails is that they target people and organizations with carefully crafted emails often so cleverly disguised that it may be next to impossible for the average user to be able to spot them. 
That is why so many people fall victim to phishing, exposing confidential information, making the company network vulnerable, and sometimes going as far as transferring money from a business bank account to a third-party account. Using SPF, DKIM and DMARC to Prevent Phishing There is hope when it comes to improving email security and reducing the impact of phishing and other malicious email-borne threats. Three specific defense mechanisms available to help organizations are: - Sender Policy Framework (SPF) - DomainKeys Identified Mail (DKIM) - Domain-based Message Authentication, Reporting and Conformance (DMARC) Sender Policy Framework (SPF) is a basic protection which is very straightforward for an organization to implement. It tells the rest of the world "if you get an email from us, it is only legitimate if it came from these servers", and lists the servers that the mail should have come from. In this way, recipient organizations can direct their mail servers to reject any mail that fails an SPF check. In other words, if the mail did not come from a listed server, treat it as bogus. DomainKeys Identified Mail (DKIM) provides more advanced protection by digitally signing outgoing mail with a cryptographic signature from your organization. Recipient mail servers are then configured to reject mail that fails a DKIM test, meaning if it does not have the appropriate signature on it, it is fake. Domain-based Message Authentication, Reporting and Conformance (DMARC) takes both of the other techniques a step further by allowing your organization to inform other organizations how to treat mail that has failed SPF and DKIM checks. It also asks recipient organizations to report back to you to let you know that someone is spoofing your email addresses. Using this technology can help you decide whether to pursue an additional investigation or even legal action against offenders and to stop the fraudulent use of your email domain. The use of SPF, DKIM, and DMARC is gaining traction and growing exponentially throughout the world. Cloud-based services such as Office 365, Google Mail, and others are building in support for them. Add-ons are also available to deploy these tools in on-premises environments. Enhance Your Email Security with Bulletproof It's time to put effective measures in place to protect your organization from threats such as phishing emails. We can help you understand and deploy these protections in your environment to enhance your email security. Download our Security Aware Data Sheet to learn more about how to decrease phishing rates among employees.
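As a concrete illustration of how these mechanisms are published, the sketch below uses Python with the dnspython library to look up a domain's SPF and DMARC policies and a DKIM key. The domain and DKIM selector are placeholders (selectors vary per sender), and the record values shown in the comments are generic example shapes, not any particular organization's policy.

```python
import dns.resolver

def get_txt(name: str) -> list[str]:
    """Return all TXT strings published at a DNS name (empty list if none)."""
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"     # placeholder domain
selector = "selector1"     # DKIM selectors vary per sender; this one is assumed

spf   = [t for t in get_txt(domain) if t.startswith("v=spf1")]
dmarc = [t for t in get_txt(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]
dkim  = get_txt(f"{selector}._domainkey.{domain}")

# Typical record shapes (illustrative values only):
#   SPF:   "v=spf1 mx ip4:203.0.113.0/24 -all"
#   DKIM:  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
#   DMARC: "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
print("DKIM:", dkim or "none published")
```

Checking that all three records exist, and that the DMARC policy is stricter than p=none, is a quick first audit an organization can run against its own domains.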
<urn:uuid:15541ca2-07a9-48ca-ad6f-f113ea7d43f8>
CC-MAIN-2022-40
https://bulletproofsi.com/blog/prevent-phishing-by-enhancing-your-email-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00089.warc.gz
en
0.960614
1,240
3.140625
3
German researchers and pediatric specialists from Caritas-Krankenhaus Bad Mergentheim Hospital-Germany and the Kinderkardiologie und Erwachsene Mit Angeborenen Herzfehlern Instutute-Germany have in a new study involving a case study reported that autoantibodies induced as result of COVID mRNA vaccines in children are a risk factor for Multisystem-Inflammatory Syndrome in Children (MIS-C). The study was published in the peer reviewed journal: Vaccines. https://www.mdpi.com/2076-393X/9/11/1353 Multisystem inflammatory syndrome (MIS) is a new systemic inflammatory acute onset disease that mainly affects children (MIS-C) and, at a lesser frequency, adults. The disease usually occurs in children 3–6 weeks after acute SARS-CoV infection. In Februrary of 2021, case definition & guidelines for data collection, analysis, and presentation of immunization safety data from a Brighton Collaboration was published in the Journal Vaccine . However, in July 2021 the first three cases with multisystemic inflammatory syndrome in adults (MIS-A) after SARS-CoV-2 vaccination were published . The vaccine was not specified in this publication, but two of these patients had COVID-19 disease, shortly before vaccination (34 days and 43 days), and the disease started with a short delay 4 days after the second vaccine or 19 days after the first vaccine. All patients survived and were treated with Methylprednisolone and antibiotics. On 2 August 2021, the Danish Medicines Agency investigated a case of inflammatory conditions, reported after COVID-19 vaccination, in a 17-year-old boy, after receiving COVID-19 vaccine from Pfizer/BionTech (BNT162b2) (Mainz, Germany) (https://www.ema.europa.eu/en/news/meeting-highlights-pharmacovigilance-risk-assessment-committee-prac-30-august-2-september-2021, accessed on 3 September 2021, published by European Medicines Agency, Domenico Scarlattilaan 6, 1083 HS Amsterdam, The Netherlands). Our first case report demonstrates a severe inflammatory disease in an 18-year-old boy with high fever, as well as pericardial and pleural effusions, ten weeks after the second COVID-19 vaccine from Pfizer/BionTech (BNT162b2), who fulfills the published MIS-C Level 1 Criteria of Diagnostic Certainty . Myocarditis became a hallmark for complications not only in COVID-19 but also as an unwanted side effect of the coronavirus mRNA vaccination . A recent review summarizes and evaluates the available evidence on the pathogenesis, diagnosis, and treatment of myocarditis and inflammatory cardiomyopathy, with a special focus on virus induced myocarditis . Beta adrenoreceptor autoantibodies seem to be important for pathophysiology with therapeutic implications . We measured elevated autoantibodies against G-protein-coupled receptors in children with multisystem inflammatory syndrome (MIS-C) after a natural SARS-CoV-2 infection . The data are in accordance with multiple elevated autoantibodies after SARS-CoV-2 infections in adults [7,8]. We now publish these autoantibodies in an 18-year-old boy with severe inflammatory disease after coronavirus mRNA vaccination and prove the release of these autoantibodies against G-protein-coupled receptors in a girl with Hashimoto thyroiditis after coronavirus mRNA vaccination (Pfizer-BioNTech BNT162b2). The anti-adrenergic receptors (α1, α2, β1, β2), anti-muscarinic receptors (M1- M5), anti-endothelin receptor type A, and anti-angiotensin II type 1 receptor autoantibodies were measured in serum samples using a sandwich ELISA kit (CellTrend GmbH Luckenwalde, Germany). 
The microtiter 96-well polystyrene plates were coated with G-protein-coupled receptors. To maintain the conformational epitopes of the receptor, 1 mM calcium chloride was added to every buffer. Duplicate samples of a 1:100 serum dilution were incubated at 4 °C for 2 h. After washing steps, plates were incubated for 60 min with a 1:20,000 dilution of horseradish peroxidase-labeled goat anti-human IgG, used for detection. In order to obtain a standard curve, plates were incubated with test serum from an anti-G-protein-coupled receptors autoantibody positive index patient. The ELISAs were validated, according to the national standards (DIN EN ISO 138485:2016) and FDA’s “Guidance for industry: Bioanalytical method validation”. After the Danish case (https://www.ema.europa.eu/en/news/meeting-highlights-pharmacovigilance-risk-assessment-committee-prac-30-august-2-september-2021; accessed on 3 September 2021), our first case is the second published case with a multisystem inflammatory syndrome (MIS-C) in an adolescent after SARS-CoV-2 vaccination, who fulfills the published level 1 criteria for a definitive disease: age < 21 years, fever > 3 consecutive days, pericardial and pleural effusions, weakness, elevated CRP/NT-BNP/Troponin T/D-dimere, cardiac involvement, SARS-CoV-2 antibodies, time frame < 12 weeks. As recently shown in children with immunological complications after SARS-CoV-2 infections , he had a comparable pattern of increased autoantibodies against G-protein-coupled receptors (Table 1, Figure 1). Most of all, anti-β adrenergic receptor autoantibodies have been shown to be elevated in adults with heart failure and children with cardiomyopathies . However, anti-α1 and anti-muscarinergic receptor 3 + 4 autoantobodies were more elevated in four children after SARS-CoV-2 infection, compared to this case. The number of cases was too low to be able to draw general conclusions from this. Moreover, we do not know the baseline values (before vaccination) in the boy who had pre-existing illness. To prove our hypothesis of BNT162b2 vaccination-induced autoantibody release in children, we measured the autoantibody against G-protein-coupled receptors in a girl with Hashimoto thyroiditis after vaccination (Case 2, Figure 2). We found a uniform increase of all these autoantibodies after the first vaccination, which returned to baseline six weeks after the second vaccination, but thyroid peroxidase autoantibodies further increased. She had no clinical side effects, but pacemaker monitoring showed an impairment of her arrhythmia, known of since early childhood and currently successfully treated with the dual chamber pacemaker. Moreover, while TPO antibodies significantly increased after vaccination, we had to increase her thyroxin treatment to normalize the elevated TSH values. With these cases, we try to connect knowledge about the potential of SARS-CoV-2 to trigger autoimmunity with known cardiovascular complications of the disease and vaccination. Autoinflammation may explain the impact of SARS-CoV-2 infections on Hashimoto thyroiditis , as well as arrhythmogenesis and myocarditis . At least, it seems not to be the whole virus but the spike protein that induces autoimmunity, the most imminent danger for children in this pandemic. 
Together with our published data about elevated functional autoantibodies against G-protein-coupled receptors in children with MIS-C, the current case reports after vaccination show a comparable effect on the network of autoantibodies, and different autoantibodies uniformly shift to enhanced blood levels after the immunological response to the vaccine, comparable to our data after the natural infection . In both cases, the spike protein is the target of the immune response. We tried to explain the effect of the spike protein on multiple autoantibody pathways, with a blockade of the so called cholinergic anti-inflammatory pathway . As recently published, COVID-19-related myocarditis may be related to the cholinergic anti-inflammatory pathway . The nicotinergic acetyl-choline receptor alpha7 subunit (α7nAChR) is cardinal in this pathway and is mainly expressed on the membrane of immune cells . There is some evidence of a molecular mimicry of nicotinergic acetyl-choline receptor alpha7 subunit and the spike protein , which could explain the proinflammatory effect by a blockade of the anti-inflammatory pathway that is neuronally controlled by the vagus nerve. We investigated the impact of the autonomic nervous system on SARS-CoV-2 infection by using an analysis of heart rate variability and found very low heart rate variabilities during the acute disease and the multisystemic inflammatory syndrome (MIS-C) [16,17], which indicates low vagus activity. A low heart rate variability in the elderly and patients with chronic diseases may explain the higher risk of death from SARS-CoV-2 infections. We have developed an algorithm, based on a 24 h analysis of heart rate variability, to estimate the risk of death from COVID-19 . The publication of the current cases is very important, in order to make doctors aware vaccination complications, such as MIS-C, if therapy with intravenous immunoglobulins can be initiated at an early stage. This awareness is essential when vaccinating children and adolescents who are not at increased risk of death from COVID-19. When informing the parents, we should refrain from claiming that the vaccination protects against MIS-C, as shown in the first case. Laboratory tests of autoantibodies, against G-protein-coupled receptors, may help to understand pathophysiology, but these tests are not conclusive for diagnosis of MIS-C, if the girl with Hashimoto disease had a comparable increase of the antibodies but no clinical signs of MIS-C. We would like to emphasize, again, that the sample size is too small, as well as lacking in proper methodology (levels of autoantibodies before vaccination in case 1) to arrive at a final conclusion. However, our data has encouraged us to prospectively investigate the formation of autoantibodies against G-protein-coupled receptors after various SARS-CoV-2 vaccinations. Many countries are in the process of approving the vaccination for all children because the health policy goal of achieving herd immunity seems to only be achieved by immunizing the population as completely as possible. Although the complications of the vaccination are apparently rare, we may focus on the importance of management and therapy of these rare cases if we want to maintain public acceptance of the SARS-CoV-2 vaccination. We are aware that a misattribution of MIS-C as a severe complication of coronavirus vaccination can lead to increased vaccine hesitancy and blunt the global COVID-19 vaccination drive. 
However, the pediatric population is at a higher risk for MIS-C and a very low risk for COVID-19 mortality. At the currently high infection rate, the vaccination decision in childhood should be made dependent on a risk assessment that weighs the autoinflammatory complications of COVID-19 against those of the vaccination. Recent observational data from Clalit Health Services, the largest health care organization in Israel, show a lower risk of myocarditis after BNT162b2 mRNA vaccination (risk ratio, 3.24; 95% CI, 1.55 to 12.44; risk difference, 2.7 events per 100,000 persons) compared to COVID-19 (risk ratio, 18.28; 95% CI, 3.95 to 25.12; risk difference, 11.0 events per 100,000 persons). It has been postulated and shown in adults that MIS may occur after SARS-CoV-2 vaccination (MIS-V).
https://debuglies.com/2021/11/22/autoantibody-release-in-children-after-corona-virus-mrna-vaccination-is-a-risk-factor-of-multisystem-inflammatory-syndrome/
In the digital age we currently live in, topics such as Personal Data Collection are becoming increasingly important. It’s no longer the case that large corporations use your personal information just to sell you a product. Today, Personal Information has become a security issue. Cybercriminals have found that selling Personally Identifiable Information (PII) is a very profitable market, and as a direct consequence, governments around the world have created increasingly stringent regulations for the handling of personal information. In this article, I’m going to explore everything related to Personal Identifiable Information, from its definition to some of the regulations that currently seek to protect the privacy of individuals. What is PII? Everything related to privacy has always been controversial, so it should come as no surprise that the very definition of the term "PII" is not universally accepted in all jurisdictions. For example, according to the United States Department of Defense (DoD), PII is defined as the "Information used to distinguish or trace an individual's identity ..." Moreover, the DoD goes further by stating that "PII includes any information that is linked or linkable to a specified individual, alone, or when combined with other personal or identifying information. " On the other hand, laws such as the General Data Protection Regulation (GDPR) of the European Union use the term "Personal Data" instead of PII to describe "any piece of information that relates to an identifiable person" In simple terms, it can be said that Personally Identifiable Information (PII) is any type of data that can be used alone or combined with other relevant information to identify a specific individual. What is Considered PII? Since there is no unanimous opinion about what PII is, there are different opinions about what can be considered as Personal Identifiable Information. A classic example has to do with the IP address of a user. In the European Union, the General Data Protection Regulation (GDPR) adopted in 2018 clearly establishes that the IP address of a subscriber can be classified as "personal data". However, in many countries and even in some US states, the IP address may not necessarily be considered part of the PII. Simply put, what is considered PII depends on who you ask, or rather, where you do business. 
Despite the discrepancies between the laws of different countries and regulatory entities, the following is generally considered sensitive PII:
- Social Security Number (SSN)
- Driver's license / National Identity Card
- Physical mailing address
- Criminal or employment history
- Credit card information

As mentioned above, the EU GDPR is much more inclusive with respect to what is considered sensitive Personal Data and includes a huge number of additional elements such as:
- Any online identifier (including but not limited to IP address, login IDs, social media posts, customer loyalty histories, cookie identifiers, etc.)
- Biometric data (including but not limited to fingerprints, voiceprints, photographs, video footage, etc.)
- Any factor specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of the individual

Another recent piece of PII legislation, the California Consumer Privacy Act (CCPA), goes even further than the GDPR by including additional data such as:
- Online account names
- Records of personal property
- Purchased products and services
- Purchases or consuming tendencies
- Information regarding a user's interaction with websites
- Audio, electronic, visual, thermal, olfactory, or similar information
- Education information that is not publicly available
- Inferred consumer profiles, including the consumer's preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes

Other types of information that could be used to indirectly identify an individual are also mentioned in most current legislations. Examples of this type of information are:
- Date of birth
- Place of birth

Why is PII Important?

Keeping users' personal information safe is a matter of utmost importance. Not only can this information put the person involved at risk, but also the entire organization where that person works. Just stop for a moment to think what cybercriminals can do with this type of information if it is made available to them:
- Identity theft to carry out criminal acts
- Bribes or other types of extortion, both of the individual and of the company in which she/he works
- Creation of false identity documents
- Theft of funds deposited in banks or other financial institutions
- Access to classified information through the use of biometric data

What is described above only reflects some of the disastrous consequences for those who are the victim of information theft. It is for this reason that PII regulations take the security of private information so seriously. Protecting Personally Identifiable Information (PII) is an obligation of every company that collects this type of data. As a result, huge fines are stipulated for those who fail to comply with these guidelines. Taking the EU's GDPR as an example, especially severe violations of the regulation can lead to fines of up to 20 million euros or up to 4% of the company's total global turnover of the preceding fiscal year.

Regulatory Compliance and PII

The increasing number of regulations surrounding the use and protection of personal information is a trend that's here to stay. I've already mentioned two of the most relevant regulations, the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
However, there are many others, such as the Singapore Personal Data Protection Act (PDPA), Brazil's General Personal Data Protection Law (LGPD), the Health Insurance Portability and Accountability Act (HIPAA), as well as other data protection acts enacted by Australia, Canada, the United Kingdom, New Zealand, and Switzerland, just to mention a few. What is clear from seeing how these regulations grow both in number and scope is that achieving regulatory compliance should be a huge priority for any online business. The question is, how can this goal be achieved? For your reference, below I've included a PII compliance checklist to give you an idea of the actions necessary to avoid fines for failure to meet these obligations.
- Identify any data within your organization that could be considered PII, and ensure it is stored in a safe manner
- If your company offers products or services globally, consider limiting web access to users from jurisdictions where your company does not comply with relevant regulations
- Minimize the collection and retention of Personal Data, since this reduces the risk of violating any of the current or future laws
- Anonymise PII data wherever possible
- Define clear policies and procedures concerning how to handle PII. Once these policies are in use, keep them updated
- Encrypt databases and/or any other environment where PII is stored
- Keep your staff aware of the importance of keeping sensitive information secure
- Use access control policies to limit who can access this type of information
- Improve and keep data transmission mechanisms updated
- Perform PII audits on a regular basis
- Make it easy for users to review, modify, or request the deletion of the data collected about them

For further reference, as well as PII compliance requirements, a valid option is visiting GDPR's checklist for data controllers or NIST's Guide to Protecting the Confidentiality of Personally Identifiable Information (PII).

How to Identify PII

Arguably, once the general concept of PII is clear, identifying this type of information is relatively easy, as it only takes common sense to recognize an individual's personal information. However, "common sense" can be somewhat misleading, especially when different regulations have different views of what should be considered private information. In this sense, it is highly recommended to use specialized PII scanning & discovery tools when auditing your company's data, as they can greatly facilitate this delicate task. These tools not only automate a good part of the process but also help to comply with the different regulations, both by identifying possible vulnerabilities and by offering suggestions regarding the classification of said information.

Tools for Identifying PII

As mentioned in the previous section, PII scanning & discovery tools are very valuable pieces of software that help detect problems in handling sensitive information. Among their main benefits are:
- Comprehensive PII data discovery. Automatically scans cloud storage, workstations, local file shares, mobile devices, and more, in order to locate PII data such as personal, finance, health, and other sensitive information.
- Risk assessment. Identify potential risks in all digital assets that are not properly protected or reside outside a secure environment.
- Facilitates data management. Filter different types of PII data, organize it according to organization rules/needs, and export it for further analysis.
- Remediate PII issues.
Most of these tools offer the possibility of analyzing the flow of data both passively and in real time in order to put into "quarantine" all the sensitive information that does not comply with the appropriate security standards.

Personal information has morphed into a glaring security concern that requires vigilance from companies that rely on that data to service their clients. Having awareness of the PII laws and regulations that apply to your company is crucial. There are many tools available to identify sensitive information, but what's usually lacking is the knowledge and desire from companies to protect this valuable information. With personal information becoming more valuable and cyber attacks increasing in the wake of the pandemic, now more than ever organizations need to have solid strategies in place to deal with this growing threat.
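To make the idea of automated PII discovery more concrete, here is a minimal, hypothetical sketch of the pattern-matching approach such tools build on. It is not code from any of the products or regulations mentioned above; the regular expressions and the sample text are illustrative assumptions only, and real scanners add validation, context analysis, and machine learning on top of this.

```python
# Toy PII scanner: report regex matches for a few common identifier shapes.
# Patterns are deliberately loose and only meant to illustrate the idea.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # e.g. 123-45-6789
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # rough card-number shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def find_pii(text):
    """Return a dict mapping each PII label to the matches found in the text."""
    return {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}

sample = "Contact jane.doe@example.com, SSN 123-45-6789, card 4111 1111 1111 1111."
for label, matches in find_pii(sample).items():
    if matches:
        print(f"{label}: {matches}")
```

Even a toy scanner like this shows why minimizing collection matters: every match it reports is data that would then need encryption, access controls, and audit coverage.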
https://www.datacenters.com/news/everything-you-need-to-know-about-pii-personal-identifiable-information
There are many good reasons to learn about cybersecurity. In today's increasingly connected world, it's more important than ever to know how data can be compromised and how to protect it. A cybersecurity course can teach you about the latest threats and how to defend against them. It can also give you the skills to identify system vulnerabilities and develop secure solutions. With data breaches becoming ever more common, and with the costs of such violations rising, it's clear that cybersecurity is a critically important issue. By learning about cybersecurity, you can help to keep yourself and your loved ones safe from identity theft, financial loss, and other consequences of cybercrime.

What Is Cyber Security?

Cyber security is the use of technology to protect computers and networks from malicious attacks. A computer system is considered secure if its software and hardware are designed and implemented properly to prevent unauthorized access to information and if its configurations and network architecture are not compromised.

Cyber Security Goal

The goal of any cyber security strategy should be to reduce risks to acceptable levels while maintaining operational effectiveness. Reducing risk means identifying vulnerabilities (weaknesses) in systems and processes and implementing countermeasures to mitigate those weaknesses, while ensuring that critical business operations are still able to function despite being subject to threats.

Types Of Cyber Attacks

A cybersecurity attack attempts to compromise the confidentiality, integrity, or availability of data residing in a computer system, either directly via manipulation of the actual data or indirectly via manipulation of the physical structures involved in processing the data. In addition, cyber attackers may attempt to exploit vulnerabilities in computer programs, firmware, operating systems, databases, or other infrastructure components. If you want to learn cyber security and stop cyber attacks, you should look into an ethical hacking certification.

Types Of Vulnerabilities

Two types of vulnerabilities need to be addressed, depending on the type of breach. Physical vulnerabilities occur at the device level and consist of attacks that target specific devices. The second type is conceptual vulnerabilities, which exist in different layers of the computing stack. The top layer of the computing stack consists of code running in user space and kernel space. Below that, there is a system call interface, libraries, and lower-level drivers.

Importance of Cyber Security

Cyberattacks have become big business. Attackers are motivated by profit, and many businesses fail to understand how vulnerable they are to cyberattacks. In 2015 alone, attacks cost companies 445 billion dollars worldwide, according to RSA Security's 2016 Global Economic Cost of Data Breach Study. More than half of these losses were due to financial fraud, and nearly half of all organizations experienced some kind of economic damage. As of 2017, cybercrime costs companies $400 billion annually, and 1 out of every 10 online businesses experiences a significant data breach each year. The majority of those breaches are caused by weak password policies, outdated antivirus software, and poorly configured firewalls. The sheer volume of attacks targeting organizations requires a well-coordinated, multi-faceted approach to combat them effectively.
Need to join a Cyber Security course - Cyber Security is a field that is rapidly growing at a rate higher than any others in the world today. This is especially true in the United States, where cyber-crime is now considered to be the number one threat to our nation’s critical infrastructure. - Cyber security professionals need a solid understanding of cybersecurity concepts across the entire information technology lifecycle to effectively protect themselves and their organizations from cyber threats. - Cybersecurity courses provide comprehensive instruction that teaches students how to navigate both traditional and digital networks and systems. These skills prepare them to identify potential vulnerabilities, analyze system parameters, evaluate threats, develop solutions, and implement those solutions. - Students who complete a course in Cyber security not only gain access to valuable certifications, they may even qualify for greater job opportunities. 5 reasons to learn a cybersecurity course There are many reasons to learn a cybersecurity course. Here are five of the most important ones: - Cybersecurity is one of the most important issues facing businesses and organizations today. - A comprehensive understanding of cybersecurity can help protect your business or organization from cyber attacks. - Cybersecurity is a rapidly growing field, and businesses and organizations are increasingly looking for qualified personnel. - Learning about cybersecurity can help you develop important skills that are valuable in many other fields. - Cybersecurity is an exciting and challenging field that offers many opportunities for growth and development. There are many reasons to learn a cybersecurity course. As our world becomes increasingly digital, the need for cybersecurity experts grows. By learning a cybersecurity course, you can gain the skills and knowledge needed to protect businesses and individuals from online threats. A cybersecurity course can also help you advance your career. With the right skills, you can qualify for higher-paying jobs and positions of responsibility. Cybersecurity is a growing field, and there is a great demand for qualified professionals. If you’re looking to make a difference in the world of cybersecurity, learning a cybersecurity course is a great place to start. You’ll gain valuable skills and knowledge that you can use to make a difference in the fight against online threats. Cybersecurity is one of the most sought-after skills in today’s job market. But with the constant stream of news about cyberattacks, many people are left wondering if they need to learn how to defend themselves online. We hope this blog post has helped you understand why learning a cybersecurity course is necessary to stay safe online. Please contact us anytime in the comment if you have any other questions or concerns about learning a cybersecurity course. Thank you for reading; we are always excited when one of our posts can provide useful information on a topic like this!
https://cybersguards.com/why-you-need-to-learn-a-cybersecurity-course/
This article may reference legacy company names: Continental Mapping, GISinc, or TSG Solutions. These three companies merged in January 2021 to form a new geospatial leader [Axim Geospatial]. The fifth-generation wireless telecommunications technology known as 5G is coming, and it’s going to impact more than just your cell phone. 5G will serve as the data transmission backbone that will enable autonomous vehicles, smart cities, and the internet of things (IoT). From a technical and infrastructure standpoint, 5G is considerably different from the 4G LTE network in use now. The rollout of 5G is a massive undertaking with significant challenges. Geospatial technology can facilitate and solve some of the most vexing issues. In this blog series, we will examine these challenges and the solutions the geospatial industry can provide. First, let’s explore what 5G is and how it works so we can better understand the rollout challenges ahead. 5G is a wireless technology standard that promises to deliver significant improvements in the capacity and speed of data transmissions often referred to as low latency. The primary driver behind its implementation is IoT interconnectivity where millions of devices, sensors, infrastructure – and even autonomous vehicles – can communicate. This split-second flow of densely packed data will enable intelligent systems to make better-informed decisions in daily operations of smart communities. The 4G wireless network can’t handle this low-latency data throughput, but 5G can because it will operate in a vastly broader bandwidth of the radio-frequency (RF) spectrum with a much higher frequency. A technical hurdle for 5G is that RF signals in this spectrum are very short, and do not travel as far, which limits them to nearly line-of-sight usage. This means that cellular networks must be reconfigured into much smaller, tighter cell sites, placing many more access points called small cells. The engineering of these small cell networks is the first 5G challenge where geospatial technology can have an immediate positive impact. The design and construction will involve placing numerous access points and laying tens of thousands of miles of new fiber, both requiring detailed and up-to-date land cover and land-use maps for installation areas. Network designers need accurate maps that display terrain and infrastructure for wave propagation modeling to determine where these towers can be built to effectively relay high-frequency signals. In rural regions, antennas mounted to towers will relay signals, but in more developed areas, such as urban and suburban zones, they will be attached to buildings and other structures. The increased number of small cell sites will mean more care will be taken to disguise them into the urban landscape. Similar mapping is needed for placement of fiber. Fiber cable is often buried along existing transportation corridors, but much is hung on existing utility poles. Up-to-date maps derived from aerial and ground-based sensors inform designers of pole locations and determine where fiber cables can be buried, taking into account the most cost-effective routes, right of way, and land ownership details. Having accurate geospatial data makes 5G network design easier and faster. In the next blog, we will examine more specifically how 3D geospatial data is utilized to fulfill the needs of small cell engineering. 
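To give a rough, quantitative sense of why the higher 5G frequencies force denser small cell networks, the sketch below compares free-space path loss at a typical 4G frequency and a millimeter-wave 5G frequency. The frequencies and distance are illustrative assumptions rather than figures from this article, and real propagation models add terrain, building, and foliage losses on top of this.

```python
# Free-space path loss (FSPL) in dB grows with both distance and frequency:
# FSPL = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
import math

def fspl_db(distance_m, frequency_hz):
    """Free-space path loss in dB for a given distance (m) and frequency (Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(frequency_hz) - 147.55

distance = 500.0  # metres between antenna and receiver (illustrative)
for label, freq in [("4G LTE @ 1.9 GHz", 1.9e9), ("5G mmWave @ 28 GHz", 28e9)]:
    print(f"{label}: {fspl_db(distance, freq):.1f} dB over {distance:.0f} m")

# The roughly 23 dB gap between the two rows, before counting blockage by walls
# and foliage, is one reason mmWave small cells must sit much closer to the
# devices they serve than today's macro cell towers.
```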
Read more about the specifics of 5G here: https://www.sdxcentral.com/5g/definitions/what-is-5g/ We hope this article has provided some value to you! If you ever need additional help, don't hesitate to reach out to our team. Contact us today!
https://www.aximgeo.com/blog/mapping-for-5g-rollout
While studying viruses best known for infecting the brain, researchers at Washington University School of Medicine in St. Louis stumbled upon clues to a conundrum involving a completely different part of the anatomy: the bowel, and why some people possibly develop digestive problems seemingly out of the blue. The researchers found that viruses such as West Nile and Zika that target the nervous system in the brain and spinal cord also can kill neurons in the guts of mice, disrupting bowel movement and causing intestinal blockages. Other viruses that infect neurons also may cause the same symptoms, the researchers said. The findings, published Oct. 4 in the journal Cell, potentially could explain why some people experience recurrent, unpredictable bouts of abdominal pain and constipation—and perhaps point to a new strategy for preventing such conditions. “There are a number of people who are otherwise healthy who suddenly develop bowel motility problems, and we don’t understand why,” said Thaddeus S. Stappenbeck, MD, Ph.D., the Conan Professor of Laboratory and Genomic Medicine and the study’s co-senior author. “But now we believe that one explanation could be that you can get a viral infection that results in your immune cells killing infected neurons in your gut. That might be why all of a sudden you can’t move things along any more.” Postdoctoral researcher and first author James White, Ph.D., was studying mice infected with West Nile virus, a mosquito-borne virus that causes inflammation in the brain, when he noticed something peculiar. The intestines of some of the infected mice were packed with waste higher up and empty farther down, as if they had a blockage. “We actually noticed this long ago, but we ignored it because it wasn’t the focus of our research at the time,” said West Nile expert Michael S. Diamond, MD, Ph.D., the Herbert S. Gasser Professor of Medicine and the paper’s co-senior author. “But Jim White dug in. He wanted to figure out why this was happening.” White, Diamond, Stappenbeck and colleagues including Robert Hueckeroth, MD, Ph.D., of the University of Pennsylvania, found that not only West Nile virus but its cousins Zika, Powassan and Kunjin viruses—all of which target the nervous system like West Nile—caused the intestines to expand and slowed down transit through the gut. In contrast, chikungunya virus, an unrelated virus that does not target neurons, failed to cause bowel dysfunction. Further investigation showed that West Nile virus, when injected into a mouse’s foot, travels through the bloodstream and infects neurons in the intestinal wall. These neurons coordinate muscle contractions to move waste smoothly through the gut. Once infected, the neurons attract the attention of immune cells, which attack the viruses—and kill the neurons in the process. “Any virus that has a propensity to target neurons could cause this kind of damage,” said Diamond, who is also a professor of molecular microbiology and of pathology and immunology. “West Nile and related viruses are not very common in the U.S. But there are many other viruses that are more widespread, such as enteroviruses and herpesviruses, that also may be able to target specific neurons in the wall of the intestine and injure them.” If that’s the case, such widespread viruses may provide a new target in the prevention or treatment of painful digestive issues. Having chronic gut motility problems is a miserable experience, and while the condition can be managed, it can’t be cured or prevented. 
“Many of the viruses that might target the gut nervous system cause mild, self-limiting infections, and there’s never been reason to develop a vaccine for them,” Diamond said. “But if you knew that some particular viruses were causing this serious and common problem, you might be more apt to try to develop a vaccine.” The infected mice’s digestive tracts gradually recovered over an eight-week time span. But when the researchers challenged the mice with an unrelated virus or an immune stimulant, the bowel problems promptly returned. This pattern echoed the one seen in people, who cycle through bouts of gastrointestinal distress and recovery. The flare-ups often are triggered by stress or illness, but they also can occur for no apparent reason. “It’s amazing that the nervous system of the gut is able to recover and re-establish near normal motility, even after taking a pretty big hit and losing a lot of cells,” said Stappenbeck, who is also a professor of developmental biology. “But then, it’s really just barely functioning normally, and when you add any stress, it malfunctions again.” Previous studies have linked bowel motility to changes in the microbiome—the community of bacteria, viruses and fungi that live in the gut. “What we need to explore now is how this story connects to everything else we know about gut motility,” Stappenbeck said. “What effect does damage to the gut nervous system have on the microbiome? We would love to connect those dots.” Journal reference: Cell Provided by: Washington University School of Medicine
https://debuglies.com/2018/10/05/viruses-in-blood-can-lead-to-digestive-problems/
Strategies for prevention, preparedness and response to disasters and the recovery of essential post-disaster services. Disaster Recovery (DR) The process, policies and procedures related to preparing for recovery or continuation of technology infrastructure, systems and applications which are vital to an organization after a disaster or outage. The strategies and plans for recovering and restoring the organizations technological infra-structure and capabilities after a serious interruption. Disaster Recovery Plan (DRP) The management approved document that defines the resources, actions, tasks and data required to manage the technology recovery effort. Disaster Recovery Planning The process of developing and maintaining recovery strategies for information technology (IT) systems, applications and data. This includes networks, servers, desktops, laptops, wireless devices, data and connectivity. Data replication and recovery technique where data is duplicated on a separate disk subsystem preferably separate location, in real time or near real time, to ensure continuous availability of critical information. Data is protected in transit through encryption. An event that interrupts normal business, functions, operations, or processes, whether anticipated (e.g., hurricane, political unrest) or unanticipated (e.g., a blackout, terror attack, technology failure, or earthquake). The routing of information through split or duplicated cable facilities. A continuity and recovery strategy requiring the live undertaking of activities at two or more geographically dispersed locations. A period in time when something is not in operation. A strategy for: a) Delivering equipment, supplies, and materials at the time of a business continuity event or exercise. b) Providing replacement hardware within a specified time period via prearranged contractual arrangements with an equipment supplier at the time of a business continuity event.
https://drj.com/bc_glossary_index/d/page/3/
Investigating an incident often requires viewing footage from several different cameras and angles. Dates and timestamps may not always match up, making it difficult to track an object of interest across multiple cameras. Dragonfruit’s Timelines are a powerful tool to see how an incident transpires from different viewpoints and to assist in investigations with video from many sources. This simplifies the tracking of an incident across multiple cameras, helping build a more holistic picture of the incident. Timelines make it easy to change the scope (granularity) and span (length) of viewable video, and also make it easy to align videos by fixing incorrect timecodes in incoming videos. Scope is granularity of a video, achieved by zooming in on the video timeline. Span is length of video, viewed by zooming out of the timeline.
https://www.dragonfruit.ai/module/timeline-view
An intelligence revolution is underway, and at its very forefront is the IoT. You might be familiar with the IoT, or the Internet of Things, in terms of connected watches, home appliances, smart TVs, and more. The IoT has made a debut in the transportation sector and is predicted to completely change the way we drive in the next few years. Let’s discuss some of the changes we’ve already seen, and some we can expect to see in the realm of connected transportation. Popular culture is full of futuristic self-driving cars – think Batman’s Batmobile and Knight Rider’s KITT. Turns out, this future is slowly becoming the present. Today, an increasing number of manufacturers are unveiling their self-driving car models in the public sphere. For instance Tesla’s “Autopilot” software allows Tesla cars to stay within a particular lane, change lanes at the tap of a button and avoid hazards around the car. Later this year, Nissan is set to launch its Qasahi model—a car with semi-autonomous drive technologies. Much like Tesla, this car will be able to take over from the driver in certain conditions. For example, in heavy traffic conditions the car will manage braking, steering, and acceleration. Deeper Learning: Teaching Cars How to Drive The next big thing for IoT in transportation is teaching cars how to drive in the same way humans do. Unlike Tesla’s Autopilot software mentioned above, this involves a deeper learning that makes cars fully autonomous and driverless. According to BI Intelligence, steps one, two, and three regarding driverless vehicles are already well underway. These include the ability for cars to gauge and drive in traffic jams, the ability to summon a car in a small space (such as a garage) through a smart device, and the ability to change lanes on the highway with blind spot technology included. BI Intelligence explains step four as cars that need drivers behind the wheel, but with the option to press a button that allows the car to completely drive itself. Finally, step five, and the most exciting of them all will be a totally driverless vehicle that does not require a driver or even steering wheel! These are expected to hit the market after 2020. High price and safety concerns associated with driverless cars are the biggest obstacles preventing their widespread use. Tyler Duvall, a partner at analyst firm McKinsey & Company, argued that it is unlikely that this new technology will hit the roads until government regulations to handle unfortunate events are in place to deal with the same. For example, questions such as who would be responsible in the case of a collision—passenger or driverless car—still need to be figured out. Questions like these and others, especially following Tesla’s infamous fatal failure of its autopilot feature are causes for concern. Traffic and Road Condition Monitoring in Cities Sensors are integral to any kind of smart technology that involves transportation. Self-driving cars wouldn’t run without sensors, as these are what allows the car to monitor and track a vehicle’s position in relation to other cars. Similarly, sensors that are embedded into roads, streetlights, and traffic signs can collect and transmit a huge amount of data that would otherwise be inaccessible in real time. For instance, sensors can manage traffic flow by adjusting the length of traffic signals. Certain cities have started using sensors to track public transportation like buses as they move from one stop to another. 
In cities like Las Vegas and Michigan, sensors within street lights monitor the air quality and changes in road conditions. Traffic signals with built-in sensors can reduce congestion by as much as 20 percent, according a report by McKinsey. In this way, sensors are extremely useful in tracking and analyzing road and traffic conditions. In congested cities, these would be a boon to install, but unfortunately sensors are quite an expense and can cost up to $100,000 per signal to implement. Chicago has undertaken Array of Things (AoT), which involves an undertaking of 500 citywide nodes. Brenna Berman, commissioner and CIO of the Chicago Department of Innovation and Technology, states that, “Publishing the data to the city’s open-data portal will occur in almost real time. Any department in the city will tell you that more precise data will lead to more precise planning.” Simply put, a connected car is that which has direct access to the internet and is able to link with other connected items such as smartphones, tablets, and even traffic lights and tracking devices. According to research firm Gartner, more than a quarter billion cars will have some sort of wireless connectivity by 2020. Connected cars will make for a fully immersive and easy driving experience, with access to a plethora of apps that focus on safety, entertainment, and well-being. Today, apps like GasBuddy enable drivers to find the cheapest gas station near their current location, and app integration such as Google Maps, and Spotify linkups within a car are becoming commonplace. Being able to connect to traffic lights and tracking devices will enable cars to avoid traffic, pick the best route, and avoid hazards and accident-prone areas. Mercedes Benz is one of the manufacturers connecting their cars to consumers via smartphones. Features include getting geofencing notifications if the cars leave a defined area (parking space, home garage) as well as a remote parking feature for cars to park themselves via smartphone control. Connected cars aim to provide an increasing number of such services to make driving a safer and immensely convenient experience. In the next few years, consumers can expect to see a major change in the way we drive. IoT has set the stage to revolutionize our daily commute, and the future promises to show some exciting technology advancements in the transportation industry. Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise. From Big Data to IoT to Cloud Computing, Newman makes the connections between business, people and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.Com, CIO Review and hundreds of other sites across the world. A 5x Best Selling Author including his most recent “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur and Huffington Post Contributor. MBA and Graduate Adjunct Professor, Daniel Newman is a Chicago Native and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
https://convergetechmedia.com/connected-transportation-how-technology-will-change-how-we-drive/
For many in the cybersecurity community, ‘Ghost in the Shell’, both in its source material and recent film adaptation, is an inventive representation of where the sector is heading. We still have a way to go, but the foundations are in place for the melding of human and machine. Today, at least, this is not meant physically but rather in the operational sense. Artificial intelligence (AI) has become so important for the industry that this relationship could already be described as symbiotic. As the number and complexity of threats and attacks increases, organisations are increasingly looking to AI and machine learning to transform their security posture. Evolution and revolution – AI in cybersecurity AI can be thought of as applying data science and machine learning to a specific series of problems. When it looks advanced enough, we tend to call machine learning AI but should not mistake it for the level of general intelligence shown by Skynet or HAL 9000 in the movies. Simply put, machine learning allows non-human systems, such as software, to interpret and adapt to change, usually based on the information made available to the system. It is not necessarily a new technology, but one that has evolved over time and enjoyed significant advances in recent years. Indeed, advances in machine learning have pushed forward the capabilities of cyber security software. This is how we have gone from using the tech to uncover credit card fraud through spotting anomalies in datasets, to using it to actively hunt networks for the most subtle signs of ransomware, malware, and advanced targeted attacks. By absorbing, analysing and contextualising data relating to activity like network traffic, machines can better distinguish between benign and malicious activity. However, the future is even more exciting. Increasingly, cyberattacker behaviours are being targeted with a variety of machine learning techniques. These include ensemble learning, when multiple machine learning algorithms are stacked together for better predictive performance, and deep learning, where multi-layered artificial neural networks are set up to tackle large or complex data sets. These are expanding the ability to detect subtler attack patterns without leaning on human analysts for verification. While AI that displays flawless Turing Test and cognitive intelligence is still years away, the ability to detect ‘low and slow’ attacks through moderate-scope AI is just around the corner. Augmenting the human Companies have limited time, human and technical resources to protect their ever-expanding attack surfaces. Global cyber skills demand is set to exceed supply by a third before 2019, and there is already a glut of unfilled entry-level security jobs in major economies. With widespread skills shortages and high turnover, human cybersecurity teams are simply unable to process the sheer weight of the workload by themselves. In this context, AI is reducing the number of personnel required to operate detection systems. An AI does not tire – machine learning can process large data sets at speed and do what humans would never have the time or patience to do. Automation is the rational response to a rapidly diversifying and expanding threat landscape, especially as the availability of human cyber skills and resources is so constrained. However, the intention is not to replace the human component but to augment it. Unlike machines, humans have critical knowledge of the messy context surrounding modern business. 
They know, for example, when an unusually large data transfer is part of a legitimate deal and when it is something to be worried about. What automating data collection, threat detection and operational assignments does is free staff to spend more time on higher priority tasks. Machines do the legwork while humans make the system useable and worthwhile. The combination of these skillsets – tireless data processing and human-supplied context – yields the most effective defence. An eye to the future Today, the role of AI in cybersecurity is to ingest sensor data and provide automated intelligence. This in turn helps humans to react faster and smarter than they could otherwise. Yet, with the recent unveiling of Elon Musk’s ambitions for ‘neural lace’ brain implants, I can see this process expanding. In the near future, sensors that wirelessly communicate with an implant in a human analyst’s brain are certainly on the cards. A human should, of course, always make the final decision. Yet, by combining human judgement with machine analytics and response automation, decisions and responses can more informed and made almost instantaneously. These aspirations may still be in their infancy, but make no mistake: the baby has been born and it has started to grow and learn. Ghost in the Shell’s ‘Major’ character is a depiction of this relationship running smoothly. She manages to use the best of both humans and machines, operating faster with the benefit of a global field of intelligence and the ability to learn, whilst retaining the ability to apply human context to a situation. For the foreseeable future, humans and AI must work together to thwart most cyber attackers. In movie terms, you need a ‘Major’ or a Robocop rather than the Terminator – or you need a Terminator with human handlers. However, while Ghost in the Shell takes place in 2029, we will likely arrive there sooner. AI cyber threat hunters are already patrolling advanced organisations. While not physically embodied like Major, the melding of AI capabilities with human intelligence is happening right now. Given another 12 years, a body-and-mind collaboration will no longer be science fiction.
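As a concrete illustration of the anomaly detection role described above, the sketch below trains an unsupervised model on a handful of "normal" network flows and asks it to judge a new one. The feature values and the library choice (scikit-learn's IsolationForest) are illustrative assumptions; production systems learn from far richer telemetry and hand their verdicts to human analysts rather than acting alone.

```python
# Unsupervised outlier detection on toy network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one flow: [bytes transferred, duration in seconds, destination port]
normal_traffic = np.array([
    [1_200, 0.4, 443],
    [900, 0.3, 443],
    [15_000, 2.1, 80],
    [1_100, 0.5, 443],
    [800, 0.2, 53],
])

model = IsolationForest(contamination=0.2, random_state=0).fit(normal_traffic)

# A sudden multi-gigabyte transfer to an unusual port should score as an
# outlier: predict() returns -1 for anomalies and 1 for inliers.
suspicious = np.array([[5_000_000_000, 600.0, 4444]])
print(model.predict(suspicious))  # expected: [-1], i.e. worth an analyst's attention
```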
https://www.helpnetsecurity.com/2017/04/13/ai-future-cybersecurity/
What is a firewall?

A firewall is software or hardware that protects a private network from the public network. If an attacker scans the network, the probe packets are discarded by the firewall, so it is important for hackers and penetration testers to be able to scan the network without being caught. If you can bypass the firewall, you are safe. In this tutorial you will learn how to bypass and test a firewall.

Best nmap options to bypass firewall

During penetration testing, you may encounter a system that is using a firewall and IDS to protect the system. If you just use the default settings, your actions may get detected or you may not get the correct result from Nmap. The following options may be used to help you evade the firewall/IDS:

• -f (fragment packets): The purpose of this option is to make it harder to detect the packets. By specifying this option once, Nmap will split the packet into 8 bytes or less after the IP header.

• --mtu <size>: With this option, you can specify your own fragmentation packet size. The Maximum Transmission Unit (MTU) must be a multiple of eight or Nmap will give an error and exit.

• -D (decoy): By using this option, Nmap will send some of the probes from the spoofed IP addresses specified by the user. The idea is to mask the true IP address of the user in the logfiles; the user's IP address still appears in the logs, mixed in with the decoys. You can use RND to generate a random IP address or RND:<number> to generate <number> random IP addresses. The hosts you use for decoys should be up, or you will flood the target. Also remember that by using many decoys you can cause network congestion, so you may want to avoid that, especially if you are scanning your client's network.

• --source-port <portnumber> or -g (spoof source port): This option will be useful if the firewall is set up to allow all incoming traffic that comes from a specific port.

• --data-length <number>: This option is used to change the default data length sent by Nmap in order to avoid being detected as an Nmap scan.

• --max-parallelism <number>: This option is usually set to one in order to instruct Nmap to send no more than one probe at a time to the target host.

• --scan-delay <time>: This option can be used to evade IDS/IPS that uses a threshold to detect port scanning activity.

You may also experiment with other Nmap options for evasion as explained in the Nmap manual (http://nmap.org/book/man-bypass-firewalls-ids.html). Example invocations that combine several of these options are sketched below.

MODULE 5: Scanning Network and Vulnerability
- Introduction of port Scanning – Penetration testing
- TCP IP header flags list
- Examples of Network Scanning for Live Host by Kali Linux
- important nmap commands in Kali Linux with Example
- Techniques of Nmap port scanner – Scanning
- Nmap Timing Templates – You should know
- Nmap options for Firewall IDS evasion in Kali Linux
- commands to save Nmap output to file
- Nmap Scripts in Kali Linux
- 10 best open port checker Or Scanner
- 10 hping3 examples for scanning network in Kali Linux
- How to Install Nessus on Kali Linux 2.0 step by step
- Nessus scan policies and report Tutorial for beginner
- Nessus Vulnerability Scanner Tutorial For beginner
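The following is a minimal sketch, not part of the original tutorial, showing how the evasion options above might be combined into a single scan. The target address, ports, and option values are hypothetical placeholders; nmap must be installed, a SYN scan needs root privileges, and you should only run this against hosts you are authorized to test.

```python
# Assemble and run an Nmap scan that combines the evasion options from this page.
import shlex
import subprocess

target = "192.168.56.101"          # hypothetical lab host you are authorized to test
command = (
    "nmap -sS -p 80,443 "          # TCP SYN scan of two common ports
    "-f "                          # fragment probe packets
    "-D RND:5 "                    # mix in five random decoy source addresses
    "--source-port 53 "            # spoof source port 53 (DNS)
    "--data-length 24 "            # pad probes so they look less like Nmap defaults
    "--max-parallelism 1 "         # send no more than one probe at a time
    "--scan-delay 2s "             # wait between probes to stay under IDS thresholds
    + target
)

# Run the scan and print Nmap's normal output (run the script with sudo for -sS).
result = subprocess.run(shlex.split(command), capture_output=True, text=True)
print(result.stdout)
```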
https://www.cyberpratibha.com/nmap-options-for-firewall-ids-evasion-in-kali-linux/?amp=1
Developer Xudong Zheng created a web page to educate people about a new security threat online. The danger was rooted in the weaknesses of several browsers, including Chrome, Firefox, and Opera. He found that web addresses could have one appearance while being registered as something completely different. To show the potential danger, he set up a dummy site. https://www.xn--80ak6aa92e.com/ is a URL appearing in several browsers as www.apple.com. How did he do it? Keep reading to find out. Back in the early '90s, only ASCII characters were used in domain names. ASCII, short for American Standard Code for Information Interchange, assigns a number code to each standard English character. If you were working in certain industries or were in college during the '80s and '90s, you may remember using it to share document texts between computers and countries.
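As a hedged illustration of what is going on behind that address, the sketch below decodes the registered ASCII label back into the Unicode characters a browser may display. This is not code from Zheng's demonstration; it simply uses Python's built-in Punycode codec, and how the Cyrillic characters render will depend on your terminal.

```python
# Decode an internationalized domain label from its registered ASCII form.
ascii_label = "xn--80ak6aa92e"                     # what is actually registered
punycode_part = ascii_label[len("xn--"):]           # strip the IDNA "xn--" prefix

# Punycode turns the ASCII-only label back into its Unicode characters.
unicode_label = punycode_part.encode("ascii").decode("punycode")
print(unicode_label)                                # looks like "apple", but it is not

# None of the decoded characters are the ASCII letters they resemble.
for char in unicode_label:
    print(char, hex(ord(char)))                     # Cyrillic code points, not a-z
print([hex(ord(c)) for c in "apple"])               # compare with real ASCII "apple"
```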
https://davidpapp.com/tag/data/
Google has announced that its Google Earth software has surpassed one billion downloads. The tool, which achieved the download milestone six years after its launch, is a virtual atlas that allows users to fly to any location on Earth and use it for mapping, educational or recreational purposes. Emphasising how huge one billion is, Google blogged that one billion hours ago man was living in the stone age and one billion minutes ago the Roman empire was at its peak. "Today, we've reached our own one billion mark: Google Earth has been downloaded more than one billion times since it was first introduced in 2005. That's more than one billion downloads of the Google Earth desktop client, mobile apps and the Google Earth plug-in—all enabling you to explore the world in seconds, from Earth to Mars to the ocean floor," Google stated. The one billion download mark for Google Earth includes downloads for the Google Earth desktop client, mobile apps and the Google Earth plug-in. Google Earth is based on software developed by Keyhole, which was later acquired by Google in 2004. The company has also launched a new website called One World Many Stories, to allow people to share their experiences with Google Earth.
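For readers who want to check the blog's comparison, here is the quick arithmetic behind it; the only assumption is an average year length of 365.25 days.

```python
# How long are one billion hours and one billion minutes, in years?
billion = 1_000_000_000
hours_in_year = 24 * 365.25
minutes_in_year = 60 * hours_in_year

print(f"1 billion hours   ~ {billion / hours_in_year:,.0f} years")    # ~114,000 years (stone age)
print(f"1 billion minutes ~ {billion / minutes_in_year:,.0f} years")  # ~1,900 years (Roman empire)
```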
https://www.itproportal.com/2011/10/06/google-earth-mapping-tool-surpasses-one-billion-downloads/
The healthcare sector continues to work on improving treatment and care for patients while introducing more technology to keep up. All of those changes don’t come without certain risks and threats. The healthcare industry maintains being the main target for cyber attacks. The COVID-19 pandemic placed an enormous strain on the healthcare sector’s entire infrastructure, including the online one. Why are healthcare practices targeted by cybercriminals? The volume and type of confidential data makes healthcare practices a prime target for cyber criminals, with healthcare being the most targeted industry, making up a third of all US data breaches. Patient records with medical details, financial information, patents and clinical trials, all of this data can bring a lot of money to cyber criminals. Medical information and research data has a high price on the black market — up to $1000 per record. Compare that to credit card details that are only around $100. Great financial gain for the attackers, but equally as great is the financial loss to affected healthcare practices. A stolen record costs healthcare practices an average of $429 per record, making it the industry with costliest data breaches. All healthcare organizations are at risk from cyber attacks — whether you are a large hospital or a small practice. In the US, cyber criminals find the most success attacking mid-sized and smaller healthcare organizations. While large organizations have a larger volume of enticing data, smaller healthcare practices often have weaker cyber defenses, making them an easy backdoor to a larger target. However, due to the heavy penalties incurred by HIPAA violations, most healthcare practices are expanding their security efforts in order to remain compliant. Safeguarding patient and client sensitive data is at the core of HIPAA and while it’s important to be compliant, further cybersecurity measures and practices should be followed. Biggest Cybersecurity Threats for Healthcare Practices Being in charge of such sensitive patient data, effective cybersecurity best practices become an imperative for healthcare businesses of all sizes. Cyber threats can cause disruption of operations, leading to potentially lethal situations or lead to theft and corruption of confidential data. Being knowledgeable on the tactics and methods cyber criminals use is the crucial first step for keeping the integrity of your systems and privacy of your patients. It’s impossible to ignore the devastating ransomware attacks that have targeted the healthcare sector during 2020. In that year alone, 560 healthcare providers fell victim to a ransomware attack with the one on Universal Health Services being the most significant. More than 400 hospitals and care facilities across the US were impacted by the attack, with ambulances being rerouted, radiation treatments for cancer patients delayed and many medical records permanently lost. A ransomware attack typically starts with a phishing email appearing to be from a legitimate source. It will contain a malicious attachment or a link that the recipient is enticed to click on. After the user clicks on the link or downloads the attachment, it can plant malware into their system or lead to a website that appears legitimate and where they would input their credentials. Attackers can then gain access to that user’s account, and from there, spread across the entire network. Once they have access, they will encrypt and block access to data or the entire system until a ransom is paid. 
Decryption or paying the ransom often remain the only options. Healthcare practices impacted by ransomware can struggle to operate properly, as we've seen in the example above. Additionally, permanent loss of data can cause HIPAA violations and other financial losses.

Business email compromise

Email has always been a prevalent way for cyber criminals to launch attacks, with healthcare practices being one of the main targets. Business email compromise (BEC) involves scammers tricking recipients into sharing confidential and financial information. They achieve this by pretending to be someone from the inside or close to the organization. Attackers can gain access to an employee's email account via, for example, phishing and then mimic the employee to communicate with others in the organization. They can also impersonate an external partner and set up lookalike emails to solicit information. In 2019, VillageCare Rehabilitation and Nursing Center in New York received an email that appeared to come from a senior-level employee requesting patient information. The recipient trusted this was a legitimate email and the attacker was able to obtain information on 674 patients. BEC attacks are so successful because they are highly targeted — attackers conduct plenty of research prior to their attacks. They will discover how to bypass any email protection measures and impersonate the targeted individual quite effectively.

Internal Threats to Healthcare

While many attacks are caused by outside threats, healthcare practices shouldn't neglect to watch what happens on the inside. Former employees wanting to sell confidential data for profit, insiders working with cyber criminals to exfiltrate data, or even just a careless employee clicking on a wrong link — insider threats are highly probable and equally as dangerous. What makes insider threats so dangerous is the fact that the threat actor is someone who already has access to the target's systems, circumventing any cybersecurity defenses. The healthcare sector is particularly vulnerable. Insider threats were topping the lists as healthcare's biggest cybersecurity risk in previous years. Even if it's a simple case of a negligent employee, all it takes is one careless email incident to lead to a serious HIPAA breach.

While most other threats are concerned with stealing confidential patient and financial data, distributed denial-of-service (DDoS) attacks can lead to inoperability of the entire network. For healthcare practices this can mean the inability to conduct online communication, provide patient care, issue prescriptions and carry out other critical actions, to the point of disrupting the entire practice. In a nutshell, a DDoS attack involves threat actors coordinating an attack to flood a healthcare network with so much traffic that it is rendered inoperable. DDoS attacks are difficult to catch as they can resemble other technical problems, and they are generally hard to protect against, no matter how strong your cybersecurity defenses are. In March of 2020, the U.S. Health and Human Services (HHS) suffered a DDoS attack that was orchestrated to interrupt the COVID-19 pandemic response. But this doesn't mean that smaller healthcare organizations and practices are not at risk of DDoS attacks — smaller networks are much easier to hit and overwhelm. Your practice doesn't even have to be strictly targeted either — opportunistic DDoS attacks aren't that rare.
Cybersecurity Best Practices for Healthcare
While healthcare practices face a myriad of cybersecurity threats, there are best practices to follow that ensure they are not only HIPAA compliant but also equipped to handle any incidents that come their way.
Enforce A Strong Password Policy In Your Practice
We've seen that many cybersecurity threats to healthcare practices arrive via email and similar routes through which attackers obtain access to employees' accounts. To mitigate this threat and stop attackers from spreading further through the network, a proper password policy is a must. A strong password policy entails high password complexity, a unique password for each account and a periodic change of all passwords. Add multi-factor authentication to each account, in the form of an SMS code or similar, and all accounts on your network will be efficiently protected.
Always Use An Anti-virus Solution
In case an employee does click on a malicious attachment and download malware onto their computer, it's important to have an anti-virus solution installed on the network. This way, the virus or malware will be detected, quarantined and stopped from spreading further. While anti-virus and anti-malware tools are commonly used, healthcare providers should focus on deploying advanced software to keep up with the malware strains out there. Additional protection is offered at the network level: you can place a firewall with anti-virus software on your network to further protect your sensitive data.
Maintain Regular Data Backups
In ransomware attacks, your data is locked up and held for ransom, and in that scenario data is usually corrupted or stolen. Regular backups need to be a routine practice for healthcare practices and should be enforced from the get-go. This is where it's important to think proactively and adopt the mindset of "it's not a question of if an attack will occur, but when". Special attention should be paid to sensitive and confidential data such as medical records, financial information and patents so they can be easily restored if an attack does happen. Consider offsite backups and ensure strong data encryption.
Frequent updates and patch management
All of these software solutions are great cybersecurity defenses, but in order for them to protect your system effectively, they need to be updated regularly. Vulnerabilities and security holes that can be exploited by cyber criminals pop up even in popular software. While vendors are diligent in providing patches for them, it's up to you to install them. Having a patch management strategy in place to make sure all software, operating systems and applications are patched will make it easy to stay up to date and keep your healthcare practice from being vulnerable to cybersecurity threats.
Maintain security awareness
Last, but not least, is the human element of cybersecurity. Many of the cybersecurity threats we talked about today involved an employee as the catalyst behind an attack. Your employees are the first line of defense but can also be your biggest weakness when it comes to cybersecurity. Working in healthcare is a highly rewarding but strenuous career, and your employees might not have the time or resources to dedicate to cybersecurity awareness video presentations and courses. A culture of cybersecurity awareness is created and nurtured through ongoing, engaging and clear training. Enhanced physical security measures can also help secure your healthcare practice.
Installing proper access controls in your practice and putting up security cameras offer an extra level of deterrence and protection. Every employee should understand the proper security procedures and practices to follow, which will ensure your first line of defense is a substantial one.
Proactive Security Is The Best Security
When it comes to healthcare practices, much is on the line if you fall victim to a cyber attack. Making smart cybersecurity decisions and following best practices that go beyond what is dictated by HIPAA and similar regulations can go far toward making sure your patients' data and the integrity of your practice are preserved. Year after year, predictions show no signs of cyber attacks slowing down, and the best we can do is be prepared.
<urn:uuid:d1e37a59-f0e1-43ae-9854-4af3c218d623>
CC-MAIN-2022-40
https://myfastech.com/top-cybersecurity-threats-for-healthcare-practices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00489.warc.gz
en
0.945969
2,139
2.609375
3
In this post, we're going to explain Kubernetes, also known as K8s. In this introduction, we'll cover:
- What Kubernetes is
- What it can't do
- The problems it solves
- K8s architectural components
- Alternative options
This article assumes you are new to Kubernetes and want to get a solid understanding of its concepts and building blocks. (This article is part of our Kubernetes Guide. Use the right-hand menu to navigate.)
What is Kubernetes?
To understand the usefulness of Kubernetes, we first have to understand two concepts: immutable infrastructure and containers.
- Immutable infrastructure is a practice where servers, once deployed, are never modified. If something needs to be changed, you never do so directly on the server. Instead, you build a new server from a base image that has all your needed changes baked in. This way we can simply replace the old server with the new one without any additional modification.
- Containers offer a way to package code, runtime, system tools, system libraries, and configs altogether. This shipment is a lightweight, standalone executable. This way, your application will behave the same every time, no matter where it runs (e.g., Ubuntu, Windows, etc.). Containerization is not a new concept, but it has gained immense popularity with the rise of microservices and Docker.
Armed with those concepts, we can now define Kubernetes as a container or microservice platform that orchestrates computing, networking, and storage infrastructure workloads. Because it doesn't limit the types of apps you can deploy (any language works), Kubernetes extends how we scale containerized applications so that we can enjoy all the benefits of a truly immutable infrastructure. The general rule of thumb for K8s: if your app fits in a container, Kubernetes will deploy it.
By the way, if you're wondering where the name "Kubernetes" came from, it is a Greek word meaning helmsman or pilot. The abbreviation K8s is derived by replacing the eight letters of "ubernete" with the digit 8. The Kubernetes project was open-sourced by Google in 2014 after using it to run production workloads at scale for more than a decade.
Kubernetes provides the ability to run dynamically scaling, containerised applications, utilising an API for management. Kubernetes is a vendor-agnostic container management tool, minimising cloud computing costs whilst simplifying the running of resilient and scalable applications. Kubernetes has become the standard for running containerised applications in the cloud, with the main cloud providers (AWS, Azure, GCE, IBM and Oracle) now offering managed Kubernetes services.
Kubernetes basic terms and definitions
To begin understanding how to use K8s, we must understand the objects in the API. Basic K8s objects, and several higher-level abstractions known as controllers, are the building blocks of your application lifecycle.
Basic objects include:
- Pod. A group of one or more containers.
- Service. An abstraction that defines a logical set of pods as well as the policy for accessing them.
- Volume. An abstraction that lets us persist data. (This is necessary because containers are ephemeral—meaning data is deleted when the container is deleted.)
- Namespace. A segment of the cluster dedicated to a certain purpose, for example a certain project or team of devs.
Controllers, or higher-level abstractions, include:
- ReplicaSet (RS). Ensures the desired number of pods is running.
- Deployment. Offers declarative updates for pods and ReplicaSets.
- StatefulSet. A workload API object that manages stateful applications, such as databases.
- DaemonSet. Ensures that all or some worker nodes run a copy of a pod. This is useful for daemon applications like Fluentd.
- Job. Creates one or more pods, runs a certain task(s) to completion, then deletes the pod(s).
Microservice: a specific part of a previously monolithic application. A traditional microservice-based architecture would have multiple services making up one, or more, end products. Microservices are typically shared between applications and make the task of Continuous Integration and Continuous Delivery easier to manage. Explore the difference between monolithic and microservices architecture.
Container: typically a Docker container image – an executable image containing everything you need to run your application: application code, libraries, a runtime, environment variables and configuration files. At runtime, a container image becomes a container which runs everything that is packaged into that image.
Pod: a single container or group of containers that share storage and network, with a Kubernetes configuration telling those containers how to behave. Pods share IP and port address space and can communicate with each other over localhost networking. Each pod is assigned an IP address on which it can be accessed by other pods within a cluster. Applications within a pod have access to shared volumes – helpful for when you need data to persist beyond the lifetime of a pod. Learn more about Kubernetes Pods.
Namespace: namespaces are a way to create multiple virtual Kubernetes clusters within a single cluster. Namespaces are normally used for wide-scale deployments where there are many users, teams and projects.
ReplicaSet: a Kubernetes replica set ensures that the specified number of pods in a replica set are running at all times. If one pod dies or crashes, the replica set configuration will ensure a new one is created in its place. You would normally use a Deployment to manage this in place of a ReplicaSet. Learn more about Kubernetes ReplicaSets.
Deployment: a way to define the desired state of pods or a replica set. Deployments are used to apply HA policies to your containers by defining how many of each container must be running at any one time.
Service: a coupling of a set of pods to a policy by which to access them. Services are used to expose containerised applications to origins from outside the cluster. Learn more about Kubernetes Services.
Node: a (normally virtual) host on which containers/pods are run.
Kubernetes architecture and components
A K8s cluster is made up of a master node, which exposes the API, schedules deployments, and generally manages the cluster, plus multiple worker nodes, each responsible for a container runtime, like Docker or rkt, along with an agent that communicates with the master.
These components make up the master node:
- Kube-apiserver. Exposes the API.
- Etcd. Key-value store for all cluster data. (Can be run on the same server as a master node or on a dedicated cluster.)
- Kube-scheduler. Schedules new pods on worker nodes.
- Kube-controller-manager. Runs the controllers.
- Cloud-controller-manager. Talks to cloud providers.
Each worker node runs:
- Kubelet. Agent that ensures containers in a pod are running.
- Kube-proxy. Maintains network rules and performs forwarding.
- Container runtime. Runs containers.
What benefits does Kubernetes offer?
Out of the box, K8s provides several key features that allow us to run immutable infrastructure.
Containers can be killed, replaced, and self-heal automatically, and the new container gets access to the supporting volumes, secrets, configurations, etc., that make it function. These key K8s features make your containerized application scale efficiently:
- Horizontal scaling. Scale your application as needed from the command line or UI.
- Automated rollouts and rollbacks. Roll out changes while monitoring the health of your application—ensuring all instances don't fail or go down simultaneously. If something goes wrong, K8s automatically rolls back the change.
- Service discovery and load balancing. Containers get their own IP so you can put a set of containers behind a single DNS name for load balancing.
- Storage orchestration. Automatically mount local, public cloud, or network storage.
- Secret and configuration management. Create and update secrets and configs without rebuilding your image.
- Self-healing. The platform heals many problems: restarting failed containers, replacing and rescheduling containers as nodes die, killing containers that don't respond to your user-defined health check, and waiting to advertise containers to clients until they're ready.
- Batch execution. Manage your batch and Continuous Integration workloads and replace failed containers.
- Automatic binpacking. Automatically schedules containers based on resource requirements and other constraints.
What won't Kubernetes do?
Kubernetes can do a lot of cool, useful things. But it's just as important to consider what Kubernetes isn't capable of:
- It does not replace tools like Jenkins—so it will not build your application for you.
- It is not middleware—so it will not perform tasks that middleware performs, such as message bus or caching, to name a few.
- It does not care which logging solution is used. Have your app log to stdout, then you can collect the logs with whatever you want.
- It does not care about your config language (e.g., JSON).
K8s is not opinionated about these things simply to allow us to build our app the way we want, expose any type of information and collect that information however we want.
Of course, Kubernetes isn't the only tool on the market. There are a variety, including:
- Docker Compose—good for staging but not production-ready.
- Nomad—allows for cluster management and scheduling, but it does not solve secret and config management, service discovery, and monitoring needs.
- Titus—Netflix's open-source orchestration platform doesn't have enough people using it in production.
Overall, Kubernetes offers the best out-of-the-box features along with countless third-party add-ons to easily extend its functionality.
Getting Started with Kubernetes
Typically, you would install Kubernetes on either on-premise hardware or one of the major cloud providers. Many cloud providers and third parties now offer managed Kubernetes services; however, for a testing/learning experience this is both costly and not required. The easiest and quickest way to get started with Kubernetes in an isolated development/test environment is minikube.
How to install Kubernetes
- Kubectl is a CLI tool that makes it possible to interact with the cluster.
- Minikube is a binary that deploys a cluster locally on your development machine.
With these, you can start deploying your containerized apps to a cluster locally within just a few minutes.
For a production-grade cluster that is highly available, you can use tools such as:
Minikube allows you to run a single-node cluster inside a virtual machine (typically running inside VirtualBox). Follow the official Kubernetes documentation to install minikube on your machine: https://kubernetes.io/docs/setup/minikube/. With minikube installed you are now ready to run a virtualised single-node cluster on your local machine. You can start your minikube cluster with:
$ minikube start
Interacting with Kubernetes clusters is mostly done via the kubectl CLI or the Kubernetes Dashboard. The kubectl CLI also supports bash autocompletion, which saves a lot of typing (and memory). Install the kubectl CLI on your machine by using the official installation instructions: https://kubernetes.io/docs/tasks/tools/install-kubectl/.
To interact with your Kubernetes clusters you will need to set your kubectl CLI context. A Kubernetes context is a group of access parameters that defines which cluster, user and namespace kubectl commands run against. When starting minikube, the context is automatically switched to minikube by default. There are a number of kubectl CLI commands used to define which Kubernetes cluster the commands execute against:
$ kubectl config get-contexts
$ kubectl config set-context <context-name>
$ kubectl config delete-context <context-name>
Deploying your first containerised application to Minikube
So far you should have a local single-node Kubernetes cluster running on your local machine. The rest of this tutorial outlines the steps required to deploy a simple Hello World containerised application, inside a pod, with an exposed endpoint on the minikube node IP address. Create the Kubernetes deployment with:
$ kubectl run hello-minikube --image=k8s.gcr.io/
We can see that our deployment was successful, so we can view the deployment with:
$ kubectl get deployments
Our deployment should have created a Kubernetes pod. We can view the pods running in our cluster with:
$ kubectl get pods
Before we can hit our Hello World application with an HTTP request from an origin outside our cluster (i.e. our development machine), we need to expose the pod as a Kubernetes service. By default, pods are only accessible on their internal IP address, which cannot be reached from outside the cluster.
$ kubectl expose deployment hello-minikube --
Exposing a deployment creates a Kubernetes service. We can view the service with:
$ kubectl get services
When using a cloud provider you would normally set --type=LoadBalancer to allocate the service either a private or public IP address outside of the ClusterIP range. Minikube doesn't support load balancers, being a local development/testing environment, and therefore --type=NodePort uses the minikube host IP for the service endpoint. To find out the URL used to access your containerised application, type:
$ minikube service hello-minikube --url
Curl the response from your terminal to test that our exposed service is reaching our pod:
$ curl http://<minikube-ip>:<port>
Now that we have made an HTTP request to our pod via the Kubernetes service, we can confirm that everything is working as expected. Checking the pod logs, we should see our HTTP request:
$ kubectl logs hello-minikube-c8b6b4fdc-sz67z
To conclude, we are now running a simple containerised application inside a single-node Kubernetes cluster, with an exposed endpoint via a Kubernetes service.
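The same result can also be achieved declaratively. As a rough sketch — not part of the original tutorial — the snippet below uses the official Kubernetes Python client (pip install kubernetes) to create an equivalent Deployment and NodePort Service; the image name and port are illustrative assumptions, since the tutorial's exact image is not shown above.
from kubernetes import client, config

# Load the current kubectl context (e.g. "minikube"); assumes a running cluster.
config.load_kube_config()

# Names, image and port below are illustrative assumptions, not values from the tutorial.
labels = {"app": "hello-minikube"}

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-minikube"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="hello-minikube",
                    image="k8s.gcr.io/echoserver:1.10",  # assumed example image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="hello-minikube"),
    spec=client.V1ServiceSpec(
        type="NodePort",  # minikube has no load balancer, so expose via the node IP
        selector=labels,
        ports=[client.V1ServicePort(port=8080, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
Afterwards, kubectl get deployments and kubectl get services should show the same objects as the imperative commands above.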
Minikube for learning Minikube is great for getting to grips with Kubernetes and learning the concepts of container orchestration at scale, but you wouldn’t want to run your production workloads from your local machine. Following the above you should now have a functioning Kubernetes pod, service and deployment running a simple Hello World application. From here, if you are looking to start using Kubernetes for your containerized applications, you would be best positioned looking into building a Kubernetes Cluster or comparing the many Managed Kubernetes offerings from the popular cloud providers. For more on Kubernetes, explore these resources:
<urn:uuid:80df8b48-524c-40b6-9b4b-23cc837eab50>
CC-MAIN-2022-40
https://www.bmc.com/blogs/what-is-kubernetes
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00489.warc.gz
en
0.872817
3,354
3.03125
3
In cryptography, a Secure Channel Protocol (SCP) is a way of transferring data that is resistant to overhearing and tampering. A confidential channel is a way of transferring data that is resistant to overhearing (i.e., reading the content), but not necessarily resistant to tampering. An authentic channel is a way of transferring data that is resistant to tampering but not necessarily resistant to overhearing.
An SCP is used in smart cards to protect bidirectional communication between the Java Card and the host. It is used for mutual authentication and provides cryptographic protection for subsequent communication between card and host. SCP provides the following three levels of security:
- Mutual authentication. Achieved through the process of initiating a Secure Channel, it provides assurance to both card and host that they are communicating with an authenticated entity. This process includes the creation of new challenges and Secure Channel session keys. If any step in the mutual authentication process fails, the process shall be restarted, i.e. new challenges and Secure Channel session keys shall be generated again.
- Message integrity. Data or message integrity is checked by comparing the C-MAC received from the off-card entity (host) with the C-MAC generated internally by the card. Note that this comparison is done using the same Secure Channel session key generated in the mutual authentication step.
- Confidentiality. The data sent from host to card or from card to host is not viewable by an unauthorized entity; rather, it is encrypted with the Secure Channel session key generated during the mutual authentication process.
Secure Channel Protocol '02'
SCP02 uses the Triple DES (3DES) encryption algorithm in CBC mode with an initialization vector (IV) of binary zeros. Because SCP02 uses 3DES in CBC mode with a fixed IV of binary zeros, its encryption scheme is deterministic and not highly secure, and thus vulnerable to classical plaintext-recovery attacks. SCP02 relies on the «Encrypt-and-MAC» method, which means that it computes the MAC on the plaintext, encrypts the plaintext, and then appends the MAC to the end of the ciphertext.
SCP02 has been deprecated by GlobalPlatform. GlobalPlatform recommends that Card Content Management operations and applications relying on SCP02 confidentiality protection of static data adopt one of the possible mitigations: encrypt all sensitive data transmitted in SCP02 using the Data Encryption Key (DEK) or any applet key. In early April 2018, GlobalPlatform announced in a Security Informative Note that the latest version of the Card Specification (v2.3.1) will set SCP02 as a deprecated feature.
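To make the determinism problem concrete, here is a small Python sketch (using the pycryptodome library) of the Encrypt-and-MAC ordering described above with 3DES-CBC and an all-zero IV. This is an illustration only — it does not reproduce SCP02's actual key derivation or retail-MAC algorithm — and the keys and message are made up for the example.
from Crypto.Cipher import DES3
from Crypto.Hash import CMAC
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

# Illustrative session keys; real SCP02 derives session keys from the static card keys.
enc_key = DES3.adjust_key_parity(get_random_bytes(24))
mac_key = DES3.adjust_key_parity(get_random_bytes(24))

ZERO_IV = bytes(8)  # fixed all-zero IV, as in SCP02

def encrypt_and_mac(plaintext: bytes) -> bytes:
    # "Encrypt-and-MAC": the MAC is computed over the plaintext...
    mac = CMAC.new(mac_key, msg=plaintext, ciphermod=DES3).digest()
    # ...the plaintext is encrypted in CBC mode with the fixed zero IV...
    cipher = DES3.new(enc_key, DES3.MODE_CBC, iv=ZERO_IV)
    ciphertext = cipher.encrypt(pad(plaintext, DES3.block_size))
    # ...and the MAC is appended to the end of the ciphertext.
    return ciphertext + mac

# Because the IV never changes, equal plaintexts always produce equal output,
# which is exactly the deterministic behaviour that leaks information to an eavesdropper.
msg = b"SELECT applet AID"
assert encrypt_and_mac(msg) == encrypt_and_mac(msg)
The real SCP02 C-MAC is a different, retail-MAC-style construction, but even this simplified version shows why a fixed-IV, Encrypt-and-MAC design is considered weak and why GlobalPlatform recommends moving away from it.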
<urn:uuid:0d282b42-8078-43f6-bd04-5db6d3e7174e>
CC-MAIN-2022-40
https://www.cardlogix.com/glossary/secure-channel-protocol-scp01-scp02-scp03/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00489.warc.gz
en
0.908119
628
3.6875
4
Examples and descriptions of various common vulnerabilities
Microsoft Windows, the operating system most commonly used on systems connected to the Internet, contains multiple, severe vulnerabilities. The most commonly exploited are in IIS, MS-SQL, Internet Explorer, and the file serving and message processing services of the operating system itself.
A vulnerability in IIS, detailed in Microsoft Security Bulletin MS01-033, is one of the most exploited Windows vulnerabilities ever. A large number of network worms have been written over the years to exploit this vulnerability, including 'CodeRed'. CodeRed was first detected on July 17th 2001, and is believed to have infected over 300,000 targets. It disrupted a large number of businesses and caused huge financial losses around the world. Although Microsoft issued a patch for the vulnerability along with the MS01-033 security bulletin, some versions of the CodeRed worm are still spreading throughout the Internet.
The Spida network worm, detected almost a year after CodeRed appeared, relied on an exposure in the MS-SQL server software package to spread. Some default installations of MS-SQL server did not have a password on the 'SA' system account. This allowed anyone with network access to the system to run arbitrary commands. Using this exposure, the worm configures the 'Guest' account to allow file sharing and uploads itself to the target. It then uses the same password-less MS-SQL 'SA' account to launch a remote copy of itself, thus spreading the infection.
The Slammer network worm, detected in late January 2003, used an even more direct method to infect Windows systems running MS-SQL server: a buffer overflow vulnerability in one of the UDP packet handling subroutines. As it was relatively small – 376 bytes – and used UDP, a communication protocol designed for the quick transmission of data, Slammer spread at an almost incredible rate. Some estimate the time taken for Slammer to spread across the world at as low as 15 minutes, infecting around 75,000 hosts.
These three notorious worms relied on vulnerabilities and exposures in software running on various versions of Microsoft Windows. However, the Lovesan worm, detected on 11th August 2003, used a much more severe buffer overflow in a core component of Windows itself to spread. This vulnerability is detailed in Microsoft Security Bulletin MS03-026. Sasser, which first appeared at the beginning of May 2004, exploited another core component vulnerability, this time in the Local Security Authority Subsystem Service (LSASS). Information about the vulnerability was published in Microsoft Security Bulletin MS04-011. Sasser spread rapidly and infected millions of computers world-wide, at an enormous cost to business. Many organizations and institutions were forced to suspend operations due to the network disruption caused by the worm.
Inevitably, all operating systems contain vulnerabilities and exposures which can be targeted by hackers and virus writers. Although Windows vulnerabilities receive the most publicity due to the number of machines running Windows, Unix has its own weak spots. For years, one of the most popular exposures in the Unix world has been the 'finger' service. This service allows someone outside a network to see which users are logged on to a certain machine, or from which location users are accessing the computer. The 'finger' service is useful, but it also exposes a great deal of information which can be used by hackers.
Here's what a sample of a remote 'finger' report looks like:
Login    Name        Tty    Idle   Login Time    Office  Office Phone
xenon                pts/7  22:34  May 12 16:00  (chrome.chiba)
polly                pts/3  4d     May  8 14:21
cracker  DarkHacker  pts/6  2d     May 10 11:58
This shows that we can learn some interesting things about the remote machine using the finger server: there are three users logged in, but two of them have been idle for days, while the other one has been away from the computer for 22 minutes. Login names shown by the finger service can be used to try login/password combinations. This can quickly result in a system compromise, especially if users have based their passwords on their username, a relatively common practice.
The finger service not only exposes important information about the server it is hosted on; it has also been the target of many exploits, including the famous network worm written by Robert Morris Jr, which was released on November 2nd 1988. Most modern Unix distributions therefore come with this service disabled.
The 'sendmail' program, originally written by Eric Allman, is another popular target for hackers. 'Sendmail' was developed to handle the transfer of email messages via the Internet. Due to the large number of operating systems and hardware configurations, 'sendmail' grew into an extremely complex program, which has a long and notorious history of severe vulnerabilities. The Morris worm utilized a 'sendmail' exploit as well as the 'finger' vulnerability to spread. There are many other popular exploits in the Unix world which target software packages such as SSH, Apache, WU-FTPD, BIND, IMAP/POP3, various parts of the kernels, etc.
The exploits, vulnerabilities, and incidents listed above highlight an important fact. While the number of systems running IIS, MS-SQL or other specific software packages can be counted in the hundreds of thousands, the total number of systems running Windows is probably close to several hundred million. If all these machines were targeted by a worm or a hacker using an automated hacking tool, this would pose an extremely severe threat to the internal structure and stability of the Internet.
<urn:uuid:023e9fc1-2d55-41d4-a6b4-f874754d76b9>
CC-MAIN-2022-40
https://encyclopedia.kaspersky.com/knowledge/vulnerabilities-examples/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00489.warc.gz
en
0.940089
1,165
3.578125
4
Keywords: Components, Component types, Grid Editor, context, what is a component Components are at the core of creating a workspace in Ardoq. We'll take you through them step-by-step, with a little help from our video tutorial. Let's get started! What Is a Component? Components are the core data elements in your workspace. A component must be of a type that belongs to a model. For the “Small application model” example, the top-level components can be of type Application. If the application has any child components; they can only be of the type Service: When a thing is identifiable by a single field value it can be a field; but when that field value itself has dependent field values it needs to be broken out as a separate component connected via a reference. Example: A Server component has a field on it called 'Operating System' e.g Windows, Linux. But when you want to record the support dates of the Operating System, then Operating System needs to be a component in its own right. Every Workspace has a Metamodel with a certain set of component types. The name, color, shape, or image of these can be changed to reflect the framework or the way your organization understands the architecture. Each component's description is the main content of a component or just “plain text” via the Rich Text editor. The Rich Text editor is shown in the Pages view by default; you can open the markdown editor from the Top Bar and watch your edits appear in real-time in the Pages view. How to Create Components The Grid Editor is the easiest way to enter data manually, including creating new Components. Open the Grid Editor by clicking the bar at the bottom of the screen. You can also pop out the Grid Editor in its own window by clicking on the pop-out button to the right: Click in the name field and start typing. Hitting the Enter key saves the new Component and moves you down to the next row where you can continue typing. Check out the article on the Grid Editor to learn all the details. Remember that everything in Ardoq is context-sensitive, so what you are creating is always based on the chosen component. If you want to create a component, then select the workspace for that component type. If you want to create a child of a component, then select that component. This hopefully helps you get started. If you still feel stuck or need more information, reach out to us via our website or by using the in-app chat.
<urn:uuid:1c78b1bb-4088-4824-9ff5-52ecad1e2d89>
CC-MAIN-2022-40
https://help.ardoq.com/en/articles/1910195-how-to-create-components
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00489.warc.gz
en
0.886402
537
2.578125
3
Letting a child run free on the internet is enough to give any parent an anxiety attack. It wasn’t that long ago – when the world was simpler and less digital – that parents could easily monitor and control the consumption of age-appropriate content for their children. If you wanted to buy a movie for your child, you could get all the information you needed from the rating label. Fast forward to today’s ever-connected world, where content is distributed via electronic networks and accessible to anyone with a mobile device or computer. How can parents know what content is suitable for their children, short of watching it themselves? There has been growing demand for parental advisory labels on online content, as parents want their children to reap the educational and entertainment benefits of the internet. However, this raises many challenges. Who decides what content is appropriate? How should parents know what categorization is used in other countries? What might be considered culturally age-appropriate in one country might be too risqué in another. Where is the clear, cross-border indication in an age where content is global? Forging a path through online data classification challenges The digitalization of parental advisory labels is stuck in the very early stages of development. There is no extensive use of digital parental advisory labels yet – especially on the open internet. Providers of online content and services face highly fragmented systems of age classification, depending on country, region and type of medium. So the European Commission (EC) decided to come up with a solution to this problem: the MIRACLE. The goal of the MIRACLE project is to provide a common information exchange reference model that enables cross-border, machine-readable classification data and age labels by content providers, filter software solution providers, and users. By making existing label approaches and classification schemes technologically universal to optimize established classification knowledge and data, this will not only result in cost synergies for content and filter software solution providers, but also enable new innovative services in providing classification data. It was important for the project consortium to spread across different member states and systems, and to include classification bodies, safer internet nodes, self-regulatory bodies, and filter software providers. Who benefits? Quite a few different parties: parents, content providers, manufacturers of end devices, and all media users – essentially anyone who needs to obtain more information about the content they are about to see. And as for the amount of potential use cases, the 48-hour MIRACLE Apps Hackathon provided just a glimpse of what’s possible. There’s ENTERTAIN, an app that allows a user to find movies or games that are suitable for a certain age through a very user-friendly interface. Or CleverAge, which recognizes a high variance in child development and uses a series of emotional and logical puzzles and quizzes to determine a user’s “real” maturity, granting access to appropriate content accordingly. As the internet becomes even more ingrained in our everyday use, it will continue to be a challenge for parents to protect their children from inappropriate content. Those looking to create a system of digital parental advisory labels would be wise to look to the EC’s MIRACLE program as a starting point. 
<urn:uuid:f9e67015-3e2f-4fb4-ac65-b4e7e7150f27>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/articles/lessons-learned-european-commissions-miracle-online-child-protection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00489.warc.gz
en
0.919851
722
2.828125
3
No aspect of SARS-CoV-2 infection has been more startling – or problematic – than the elevated risk for blood clotting, a concern that throughout the pandemic has been associated with severe COVID-19, frequently characterized by clotting events that have led to stroke, heart attack and organ damage. But a team of medical scientists from a dozen institutions throughout the United States has made two key treatment-altering findings: Patients with moderate disease can also develop life-threatening clots, and need to be identified early and treated. The same team of medical investigators has pinpointed the most effective way to address the clotting problem, a finding that may have ramifications worldwide in the treatment of COVID-19 and its impact on the blood. Dr. Alex Spyropoulos, a professor at the Feinstein Institutes for Medical Research in Manhasset New York, found that moderately ill patients hospitalized after a COVID diagnosis and who had elevated blood levels of a protein known as d-dimer, were at especially high risk for dangerous clots. Spyropoulos, principal investigator of the study discovered along with his team, that treating these patients with a high-dose of the blood thinner called low-molecular weight heparin (LMWH) significantly reduced the potential for clot formation and death. The medication is readily available worldwide and is a cost-effective way to prevent clots and spare lives, the research found. The results are expected to be practice-changing, affecting the way clinicians identify and treat hospitalized COVID patients – potentially reversing the course of the disease and increasing survival. Reporting in JAMA Internal Medicine, Spyropoulos and colleagues sound a powerful note of caution–and hope–from the clinical study titled the HEP-COVID trial: Just because patients are hospitalized with moderate COVID symptoms doesn’t mean they’re at lower risk of life-threatening COVID-related blood clots. “The HEP-COVID trial was able to identify an exquisite biomarker – very elevated d-dimer – that not only predicted a high risk COVID-19 inpatient population, but those whose risk was ameliorated by early use of therapeutic heparin anticoagulation for thromboprophylaxis,” Spyropoulos told Medical Xpress, referring to clot prevention. D-dimer is a degradation product of fibrin, which is intimately involved in blood clotting. Fibrin is formed through the action of the protease thrombin on fibrinogen, which causes it to create a polymer. The polymerized fibrin combines with the tiny cells known as platelets and other cellular debris to form a clot. As a breakdown product, d-dimer is a small protein fragment in the blood after a clot has undergone enzymatic attack through fibrinolysis, a process that attempts to prevent clots from growing. The dimer gets its name, meanwhile, because, chemically, it has two ‘D’ fragments of the fibrin protein. The amount of d-dimer present in the blood can be determined by a blood test to help diagnose clots. Thrombosis refers to clot formation, obstructions that have been a pernicious cause of disability and death throughout the pandemic. For example, venous thromboembolism, or VTE, refers to clots that develop in a vein, such as in deep vein thrombosis. Pulmonary embolism refers to a clot that may travel to the lungs, and arterial thromboembolism, or ATE, describes clots that can cause ischemic stroke or myocardial infarction – heart attacks. All have been common among adults hospitalized because of COVID. 
“The virus can directly damage the endothelium causing endothelialitis, an immune response within the endothelium in blood vessels, as well as the host response, which in susceptible individuals, is manifested by a hyperinflammatory response and cytokine storm, activating the clotting system as well as platelets,” Spyropoulos said. Platelets are the tiny sticky disks that circulate in the blood and help the body form clots. They’re a welcome population of cells when plugging a wound – but potentially lethal when coalescing with other factors in the blood to form clots. The HEP-COVID randomized clinical trial recruited 253 hospitalized adult COVID patients with d-dimer levels more than four times the upper limit of normal. Patients also were recruited into the trial with a diagnosis of excessive clotting induced by sepsis. Patients were studied from May 8, 2020 through May 14, 2021 at 12 academic centers throughout the United States. Major thromboembolism and death in patients was 28.7 percent for those given a therapeutic-dose of low-molecular-weight heparin, which again, is a high dose measuring four times the current standard of care. Major thromboembolism and death were 41.9 percent for patients receiving standard of care doses, or intermediate-dose heparins, the study found. “The current standard has been [around] for the past 30 years or so in hospitalized medical patients – including those with pneumonia and sepsis,” said Spyropoulos, who worked with an additional international team in an arm of the study reported in the journal Thrombosis Research. The treatment benefit from low-molecular-weight-heparin therapy was not seen in patients who were critically ill requiring ICU care, a discovery during the research that Spyropoulos said can be understood within the context of the findings. “They were too far advanced in their hyperinflammatory/cytokine storm/coagulopathic state to see any treatment effects by simply increasing heparin dosing.” Regarding the data published in JAMA Internal Medicine, he noted that “HEP-COVID now presents compelling data that we should, in high-risk hospitalized patients with elevated d-dimers, use therapeutic doses of heparins to prevent clots. This is a major change in hospital guidelines.” Anticoagulation was associated with improved survival of hospitalized coronavirus disease 2019 (COVID-19) patients in large-scale studies. Yet, the development of COVID-19-associated coagulopathy (CAC) and the mechanism responsible for improved survival of anticoagulated patients with COVID-19 remain largely elusive. This investigation aimed to explore the effects of anticoagulation and low-molecular-weight heparin (LMWH) in particular on patient outcome, CAC development, thromboinflammation, cell death, and viral persistence. In this observational multicentre investigation of 586 patients hospitalized for COVID-19 in Austria, we were able to show that the use of LMWH was associated with curtailed viral persistence in COVID-19 leading to a reduction of virus shedding of 4 days. LMWH use war further associated with increased survival and diminished circulating markers of cell death, while no differences in biomarkers of CAC development and thromboinflammation were observed between LMWH users and non-users. Coagulation and the development of a hypercoagulable state play a central role in COVID-19 pathophysiology.3,13–17 In fact, CAC as depicted by D-dimer increase during hospitalization majorly evolved in patients requiring ICU and in non-survivors. 
While the mechanisms leading to CAC are a matter of ongoing investigations, increased D-dimer levels were postulated as a surrogate marker for presence of CAC, development of thrombotic complications, and ultimately as a predictor of mortality in COVID-194,18 As previously reported, analysis of D-dimer and other haemostatic biomarkers revealed that these markers are of limited prognostic value for prediction of mortality in COVID-19.19 This indicates that more specific biomarkers are necessary to reliably predict patient outcome. Thromboinflammation represents a central link between systemic hypercoagulability, respiratory failure, and mortality in COVID-19 patients, and NETs are frequently observed in CAC20,21 Investigation of post-mortem biopsies in COVID-19 patients showed occluding thrombi positive for citrullinated histone 3 as a specific marker for NETs not only in pulmonary tissue but also in kidney and cardiac tissue.20 In the present study, increased circulating markers of NET formation were observed in patients with higher COVID-19 disease severity at admission, which was paralleled by increased D-dimer levels, indicating that thromboinflammation might be a potential determinant of disease severity, as previously suggested.21 However, patients in the ICU and in the non-survivor sub-groups showed increased levels of H3Cit-DNA in circulation throughout the entire hospitalization. To our knowledge, this study is the first to assess markers of NET formation in a longitudinal approach, while previous studies focused on only one time point during disease onset. Intriguingly, we observed a general decrease in H3Cit-DNA regardless of outcome. This finding stands in clear contrast to the observed increase in D-dimer in patients requiring ICU treatment and non-survivors, suggesting a minor role of NET formation in CAC development or in later phases of COVID-19. In fact, there was no direct correlation of H3Cit-DNA and D-dimer in our study. Accordingly, while a role of NETs in COVID-19 pathophysiology could be validated, the contribution of thromboinflammation to CAC has to be questioned. Importantly, we found that anticoagulation was associated with improved survival upon multivariable Cox regression analysis including age, cardiovascular diseases, chronic renal insufficiency, and concomitant treatment with other drugs affecting the course of COVID-19 as potential confounders, as well as in all evaluated sub-groups. This finding is in line with previous observational studies in COVID-19 patients, showing comparable effects of prophylactic and therapeutic anticoagulation on survival of COVID-19 patients.6,7,17 The use of anticoagulation and LMWH in particular entered the guidelines for treatment of COVID-19 in July 2020. Accordingly, we observed an increase in probability of LMWH use throughout the study period (Supplementary material online, Figure S5). Importantly, we could not evaluate an influence of the inclusion time point on the association of LMWH and improved survival, even though the biggest proportion of patients in the no-LMWH sub-group was included in the beginning of the study period. Intriguingly, in our cohort, anticoagulation with either LMWH or NOAC was not associated with altered dynamics of D-dimer indicating a minor effect of LMWH on CAC development. 
Alternatively, D-dimer levels in patients who received LMWH could be suppressed to levels observed in patients who did not receive LMWH, which was previously suggested by Blasi et al.9 Nonetheless, the observations made in this longitudinal approach do not allow to link improved CAC or altered haemostasis in LMWH treated COVID-19 patients to the observed improved survival. Accordingly, we aimed for investigation of potential off target effects of anticoagulation. We observed a decrease in the cell death marker cfDNA in patients using LMWH and NOAC during hospitalization. This is of specific interest, as an increase in cfDNA was only observed in non-survivors. Noticeably, a correlation of cfDNA and D-dimer could already be observed at baseline, potentially linking cell death to haemostatic derangements. Yet, these findings have to be interpreted with caution due to the observational character and the low number of patients not receiving anticoagulation in this evaluated sub-group. Nonetheless, our data are suggestive for a potential protective role of anticoagulants in COVID-19 beyond haemostasis. In this context, direct factor Xa inhibitors and heparin were shown to reduce oxidative stress and to yield anti-inflammatory properties, thereby potentially altering the inflammatory environment in vivo and affecting cell death in COVID-19.22,23 Preservation of vascular integrity via inhibition of endothelial cell heparinase or impairment of hepcidin formation and concomitant reduction of hyperferritinaemia might be other potential mechanisms by which LMWH exerts its beneficial effects.24,25 Importantly, we observed curtailed SARS-CoV-2 viral persistence upon qPCR in patients treated with LMWH, when compared to patients without anticoagulation or those using NOAC. These data provide exploratory clinical evidence compatible with a direct effect of LMWH on virus pathology, which was previously suggested in in vitro studies, where heparin was found to interfere with SARS-CoV-2 binding on ACE2 expressing cells, thereby limiting its infectivity.26 The effect on viral persistence was specific for LMWH in our analyses and increased odds for curtailed SARS-CoV-2 infection culminated in a median 4-day reduction of viral shedding. While direct interaction of LMWH with SARS-CoV-2 binding is one potential mechanism explaining the observed viral dynamics,27 the underlying study design does not allow to evaluate the exact pathomechanism responsible for these observations. Notably, our study has certain limitations. In particular, the retrospective character of the study only allows to hypothesize a potential interaction of LMWH with SARS-CoV-2 which diminishes viral persistence. This limitation is further important for the interpretation of survival analyses, as the retrospective study design is associated with a notable amount of missing data. Accordingly, interventional studies are necessary to establish causality. In the light of the ongoing clinical trials, we aim to raise awareness for these potentially beneficial off-target effects of anticoagulants and encourage further research within this area. Moreover, we did not analyse effects of different doses of LMWH in our cohort, as only nine patients (2.2% of LMWH-treated patients) received therapeutic doses of LMWH and the statistical power for the respective analyses was not sufficient. 
Of note, previous reports were not able to assess a difference between prophylactic and therapeutic uses of LMWH in more than 4,000 patients.6 Further, recent data from interventional studies comparing therapeutic doses of LMWH to standard of care thromboprophylaxis showed a beneficial effect of high-dose treatments on hospital survival in non-critically ill patients, while critically ill patients did not benefit from these schemes.28,29 However, we cannot rule out that a difference in LMWH dosage might affect the data obtained for SARS-CoV-2 viral persistence in the present study. Ultimately, we want to point out that patients using NOAC were underrepresented in our cohorts, which renders the findings for this sub-cohort explorative and hypothesis generating. While we tried to take various possible confounders, for example comorbidities and age, into account, we cannot exclude that our data only reflect the situation in Austria and the virus mutations present. Taken together, the present investigation confirms an association of anticoagulants with improved survival of COVID-19 patients in a large Central European Multicentre Cohort and suggests a beneficial effect of LMWH use on SARS-CoV-2 viral persistence. While the exact pathomechanisms underlying these observations cannot be investigated due to the retrospective observational study design, the present study encourages the evaluation of viral persistence in randomized controlled trials assessing the effect of LMWH in COVID-19 patients in order to establish a causal relation of the presented findings. Limiting viral persistence, thereby shortening hospitalization and contagiousness is a relevant aspect during this pandemic. reference link : https://academic.oup.com/cardiovascres/advance-article/doi/10.1093/cvr/cvab308/6381563 More information: Alex C. Spyropoulos et al, Efficacy and Safety of Therapeutic-Dose Heparin vs Standard Prophylactic or Intermediate-Dose Heparins for Thromboprophylaxis in High-risk Hospitalized Patients With COVID-19The HEP-COVID Randomized Clinical Trial JAMA Internal Medicine (2021) DOI: 10.1001/jamainternmed.2021.6203
<urn:uuid:720053ed-6570-41f9-b982-1109043e69fc>
CC-MAIN-2022-40
https://debuglies.com/2021/10/17/covid-19-treating-patients-with-a-high-dose-of-low-molecular-weight-heparin-lmwh-significantly-reduced-the-potential-for-clot-formation-and-death/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00689.warc.gz
en
0.941802
3,471
2.796875
3
Today is a historic day of celebration for all Americans and for women around the world. Kamala Harris has become the first woman and the first American of Black and South Asian ancestry to take the oath of office for this incredibly prestigious job. Her groundbreaking achievement exemplifies that gender and ancestry do not define a person’s potential. This groundbreaking achievement serves as a powerful reminder that with commitment and perseverance, dreams do come true. Let’s all pause to soak in this special moment and give tribute to the many whose fight for women’s equality paved the way to today. From Harriet Tubman to Susan B. Anthony, Rosa Parks, Prime Minister Thatcher, Hillary Clinton, Chancellor Merkel, each remarkable woman has shattered the glass ceiling to make this moment possible. Today, Harris shattered the glass ceiling in attaining the Vice Presidency. I look forward to seeing the many achievements that the next generation of women will make. Let’s continue to mentor and pave the way for our young girls so that they too can live their dreams. This is an inspirational reminder to continue mentoring and supporting young girls throughout their academic and work journeys so that they too can follow their dreams. In the words of our 49th Vice President, “If you are fortunate to have opportunity, it is your duty to make sure other people have those opportunities as well.”
<urn:uuid:98181c95-359e-45d0-9e6e-cadf40b14381>
CC-MAIN-2022-40
https://blogs.infoblox.com/company/a-shattered-ceiling-reflections-on-kamala-harris-inauguration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00689.warc.gz
en
0.94325
280
2.515625
3
It is the need of the hour that the industries, economies and people are interconnected to each other. Both at the personal and commercial level, we are getting connected with multiple devices through computers, mobile phones, appliances, commercial equipment and the list goes on. This network of connected devices is called the Internet of Things (IoT). The industry is on the verge of an explosion of IoT products and services coming to the market. However, the benefits of being connected to the internet comes with risks that should be managed to insure that, social and economic advantages are not turned into a potential disaster. Rapid increase in interconnectivity of personal, industrial and business systems have increased the vulnerability to cyber-attacks. According to the World Economic Forum, cybersecurity is one of the five most serious risks faced by the world today. No wonder most of the organizations in the world feel that they are more at risk of being attacked than a few years ago. High profile cases of cyber-attacks in the last few years have increased the demand for sophisticated cybersecurity solutions and companies across the globe have started to allocate more resources towards this domain to mitigate the risks. But is this really working out for them? The recent statistics of cyber-attacks do not say so. Rather, the incidences of cyber-attacks have seemed to proliferate, as cyber thieves are getting more prevalent by leveraging more sophisticated malware and building stronger organizations. So, does the biggest tech buzzword in last few years ‘blockchain’, come into the picture now? Blockchain for cyber security Many technical experts believe that blockchain could potentially improve the cyber security problems as the platform is based on mechanisms which are secure and transparent. A blockchain based cybersecurity platform can secure connected devices by using digital signatures to register them into the de-centralized network of computers. This information is highly de-centralized leaving no center point of attack for the hacker. Hence, to steal information from a blockchain network would be like a scenario in which the thief has to steal from thousands of banks simultaneously, without alerting anyone, which is practically impossible. And this the reason why no one in the world has been able to hack a bitcoin until now. More companies (mostly financial companies) are in a rush to implement blockchain technology because of its inherent resiliency to cyber-attack. Crypto-currencies are the biggest implementations of blockchain technology until now, but even they are facing certain problems in the implementation. Current blockchain-cybersecurity scenario Even if the blockchain technology projects a compelling proposition, there is still long way to go until it sees a serious enterprise adoption. Many domain experts agree that this technology has a lot to promise and it is marketed as a cure for cyber threats. But, currently, there are more problems caused than the solution blockchain provides. According to an article published by Help Net Security, cryptominers displaced ransomware for the number one spot of ‘detected malware threat’ in the first quarter of 2018. Illegal mining of cryptocurrencies off the blockchain system is a trending cybercrime at the present. Mike Orcutt in his article for MIT Technology Review has pointed out the ways to cheat the blockchain protocols by multiple ways. 
One them includes cryptominers gaining illegal access to various computers and using their computing power to mine cryptocurrencies. In May 2017, a ransomware named WannaCry was used by the hackers to demand 300 bitcoins. Another in 2013, ransomware named Cryptolocker was used to demand bitcoins worth $65,650. Thus, it is becoming clearer that even with the blockchain technology, serious cybercrimes can still exists and rather than being a savior of cybercrimes, blockchain itself can be a threat. But, does blockchain have anything else to offer? Blockchain can certainly be applied to a new and emerging movement in cybersecurity, called ‘Threat Intelligence’. What is threat intelligence? Cyber threat intelligence is an advanced process which involves gathering valuable insights including mechanisms, context, indicators, actionable advice and implications about an emerging or existing cyber threat. It can be tailored according to a specific company’s threat landscape which will then help the organization to proactively maneuver defense mechanisms in place, prior to an attack. Cyber security experts say that the amount of data that is generated daily on cyber threats is growing exponentially and will help organizations to better understand and defend against the new threats which are discovered every day. Even if threat intelligence is a noble pursuit, there are some issues with it. Current issues with threat intelligence Threat intelligence is the information used by organizations to analyze the risks of the most common and severe external threats such as advanced persistent threats, zero-day threats, and exploits. But there exist some problems in the industry which includes companies spending duplicate time in researching the same threats, while the other threats go unnoticed. Also, it is observed that most of the companies often have an insight of only a particular subset, which could be a particular geography or an industry sector. Due to this, companies often have to pay largely for the overlapping information or may have a risk of missing threats that are relevant to them. The noble cause of threat intelligence was theoretically put forward to achieve a defense mechanism of ‘us versus the cyber threats’. But the way the current scenario stands today is more of ‘some of the privileged versus the cyber threats’. According to a joint study by Phantom and Enterprise Strategy Group (ESG), 74% of the cyber security professionals ignore security alerts and events simply because they are too much to consume. The professionals cannot keep up with the security data overload. Ponemon Institute did a survey in which they found that almost 66% of IT leaders were not satisfied with the approaches of threat intelligence as the information was not timely. And 46% said that the information was not well categorized and has a room for improvement. A lot of these issues are thus clearly lined up for blockchain to intervene. When a threat comes along, all the necessary information surrounding the incident can be confused, complicated and overlooked. But blockchain can effectively layout precisely what happened. Basically, blockchain can bring the world to a common consensus of “what exactly happened”. The most difficult task among different participants was to trust each other. Blockchain’s ability to form a consensus amongst the peers opens exciting opportunities in the fields ranging from supply chain management, asset management to threat intelligence. 
Blockchain can make the threat intelligence marketplace fairer and less unfairly competitive by allowing users to access data on the basis of its merits and performance. However, it is essential to understand that blockchain is not a cure-all for cybersecurity needs; it is an important toolset for developers building next-generation security applications. Blockchain will enable us to construct extremely robust and reliable records of events and will empower information sharing across borders by creating networks controlled by no one, yet verifiable and trusted by everyone.
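As a rough illustration of the 'robust and reliable records of events' idea — not a description of any real platform — the sketch below builds a minimal append-only, hash-chained log of threat indicators in Python. Each entry commits to the previous one, so tampering with any record invalidates everything after it; a real blockchain-based system would add distributed consensus and digital signatures on top.

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 over the entry's canonical JSON form."""
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ThreatLedger:
    """Append-only, hash-chained log of threat-intelligence records (toy example)."""

    def __init__(self):
        self.entries = []

    def append(self, indicator: str, source: str, details: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "index": len(self.entries),
            "timestamp": time.time(),
            "indicator": indicator,   # e.g. an IP, domain or file hash
            "source": source,         # who reported it
            "details": details,
            "prev_hash": prev,
        }
        record["hash"] = entry_hash(record)
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks every later link."""
        prev = "0" * 64
        for rec in self.entries:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev or entry_hash(body) != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

ledger = ThreatLedger()
ledger.append("203.0.113.7", "analyst-A", "C2 beaconing observed")
ledger.append("evil.example.com", "analyst-B", "phishing landing page")
print(ledger.verify())  # True until any past entry is modified
```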
Source: https://www.cpomagazine.com/cyber-security/blockchain-for-threat-intelligence-maybe/
Consent, in its simplest form, is a data subject's indication of agreement to his or her personal data being processed, and when treated as a real choice, allows data subjects to be in control of their personal data. As one of the core principles found in the FTC FIPPs, the OECD Guidelines, and various data protection regulations (including the EU Data Protection Directive, the ePrivacy Directive and the upcoming EU General Data Protection Regulation (GDPR)), the concept of consent has had a long history in privacy and data protection. These days, however, the concept has been evolving, and as stated by the Article 29 Working Party (WP29) in their recent draft Guidelines on Consent under the GDPR, "[t]he GDPR provides further clarification and specification of the requirements for obtaining and demonstrating valid consent." In this article, we discuss the expanded requirements for consent found in the GDPR (along with a healthy mix of guidance from the WP29), and recommend some steps that your organization can begin taking today to help prepare for the coming of the GDPR on 25 May 2018.

Consent under the GDPR

There are six legal bases for processing personal data under the GDPR.1 These legal bases are: consent, performance of a contract, compliance with a legal obligation, protection of the vital interests of the data subject or another natural person, performance of a task in the public interest or exercise of official authority, and legitimate interests of the data controller or a third party. Consent has been one of the most common legal bases relied on for the processing of personal data. However, under the upcoming GDPR, additional conditions will need to be met, which could make reliance on consent more difficult. For example, under the GDPR, the "opt-out" method of obtaining consent — i.e., processing personal data unless the data subject objects — will no longer be valid, as it does not require a "clear affirmative action" on the part of the data subject.2 More on that later. Additionally, those who violate the GDPR's consent requirements may be subject to administrative fines of up to 20 million euro or 4% of total worldwide annual turnover, whichever is higher, along with the possibility of individual member state penalties.3 For these reasons, and others (including moral, ethical and business considerations), getting consent practices right by 25 May will be critical.

Elements of valid consent

Under Article 4(11) of the GDPR, "'consent' of the data subject means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her." If we unpack that definition, we are left with the following four elements of a valid consent under the GDPR: 1) freely given, 2) specific, 3) informed, and 4) unambiguous. If any of these elements is missing, the consent is considered invalid.

First, the data subject's consent must be freely given.
According to the WP29, this means that the data subject must be provided with “real choice and control” in a “granular” way over multiple purposes of processing, and be able to refuse or subsequently withdraw their consent in a manner that is as easy as it was to give consent.4 Additionally, consent typically will not be considered freely given where there is a “clear imbalance between the data subject and the controller” (e.g., in the employment context), or where performance of a contract is conditioned on consent to processing that is not necessary for the performance of that contract.5 Second, the consent must be specific — i.e., it must be tied to “one or more specific purposes” and the data subject must have a choice in relation to each.6 According to the WP29, to comply with this requirement, data controllers must ensure three things: 1) purpose specification as a safeguard against function creep, 2) granularity in consent requests, and 3) clear separation of information related to obtaining consent for data processing activities from information about other matters.7 Third, the consent must be informed. That is, adequate information about the processing must be communicated to the data subject “in an intelligible and easily accessible form, using clear and plain language” prior to obtaining their consent, to ensure that the data subject understands the choice before them and what they would be agreeing to.8 The importance of this element is well-stated by the WP29: “[i]f the controller does not provide accessible information, user control becomes illusory and consent will be an invalid basis for processing.”9 Finally, the consent must be unambiguous. The data subject must indicate their wishes “by a statement or by a clear affirmative action” signifying their agreement to the processing of their personal data.10 According to the WP29, this means that consent “must always be given through an active motion or declaration” and be “obvious that the data subject has consented to the particular processing.”11 For these reasons, “[s]ilence, pre-ticked boxes or inactivity should not therefore constitute consent.”12 In some circumstances, explicit consent may be needed. For instance, explicit consent is one of the exceptions to the GDPR’s prohibition on processing of special categories of data, and is one of several derogations for the transfer of personal data to third countries.13 It can also be used to justify the making of decisions about data subjects based solely on automated processing, including profiling, which produce legal effects concerning, or that significantly affect, the data subject (i.e., “automated individual decision-making”).14 Under Article 7(1) of the GDPR, data controllers must also be able to “demonstrate that the data subject has consented to processing of his or her personal data.” According to the WP29, “[c]ontrollers are free to develop methods to comply with this provision in a way that is fitting in their daily operations.”15 However, data controllers should not collect more information than is necessary, nor keep it for longer than is necessary, to meet this requirement16 — i.e., the principles of data minimization and storage limitation should always be in the back of the privacy professional’s mind when thinking about how to demonstrate compliance with the GDPR. 
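Before moving on to consent management, here is a minimal, hypothetical sketch — purely illustrative, not legal advice and not any vendor's product — of how the four validity elements might be checked programmatically when consent is captured: one purpose per request, a notice actually shown, nothing pre-ticked, and an explicit affirmative action recorded.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRequest:
    """What was shown to the data subject for a single, specific purpose."""
    purpose: str          # e.g. "marketing emails" — one purpose per request (specific)
    notice_text: str      # plain-language explanation shown before consent (informed)
    pre_ticked: bool = False   # must stay False — pre-ticked boxes are not valid consent

@dataclass
class ConsentDecision:
    request: ConsentRequest
    affirmative_action: bool   # the subject actively ticked the box (unambiguous)
    conditioned_on_contract: bool = False  # bundling with a contract undermines "freely given"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_valid_consent(decision: ConsentDecision) -> bool:
    r = decision.request
    return (
        not r.pre_ticked                          # unambiguous: no defaults
        and decision.affirmative_action           # unambiguous: active motion or declaration
        and bool(r.notice_text.strip())           # informed: a notice was actually shown
        and not decision.conditioned_on_contract  # freely given: no bundling or detriment
    )

req = ConsentRequest(purpose="marketing emails",
                     notice_text="We will email you product news; you can opt out at any time.")
print(is_valid_consent(ConsentDecision(req, affirmative_action=True)))   # True
print(is_valid_consent(ConsentDecision(req, affirmative_action=False)))  # False — silence is not consent
```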
Consent management for data controllers OK, now that we have rules, requirements and obligations out of the way, let’s shift gears and talk about some steps that your organization can take (in particular where it acts as a data controller) to support efforts to comply with these requirements. Identify what you have First, the building of records of processing (or “data map”) to address Article 30 is viewed by many organizations as the natural first step toward GDPR compliance, as these records can serve as the foundation for tackling many other GDPR requirements. Once your organization has developed these records, they can be used to identify those processing activities that rely on consent and then supplement the records with additional details about how consent was presented, obtained, etc. Understanding where your organization has relied on consent should be the first step toward ensuring that those consents meet legal requirements. Therefore, when building your records, consider adding additional fields to support information about legal bases, and the different aspects of those legal bases. Doing so should provide you with a head start and strong foundation for addressing other requirements. Assess for appropriateness According to the WP29, “[g]enerally, consent can only be an appropriate lawful basis if a data subject is offered control and is offered a genuine choice with regard to accepting or declining the terms offered or declining them without detriment.” So, after you have identified those processing activities that rely on consent, consider asking yourself the following question: is there another legal basis under Article 6 that is more appropriate for this processing activity? Situations where consent may not be the best choice may include: where, regardless of refusal or withdrawal of consent, the personal data would still be processed under a different legal basis; where there is no way to avoid real detriment to the data subject in the event of refusal or withdrawal; where there is an imbalance of power between the data controller and data subject; or where the processing is necessary for the performance of a contract. In the context of an employment agreement for example, perhaps it would be better to rely on performance of a contract as the legal basis for processing the employee’s personal data, given the imbalance of power that is inherent in the employer-employee relationship. Or, where a bank wishes to process the personal data of its customers for the separate purpose of preventing fraud, relying on legitimate interests might be the more appropriate choice. Craft your request and provide adequate information Under Article 7(2), where a “data subject’s consent is given in the context of a written declaration which also concerns other matters, the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language.” This requirement is tied heavily to the requirements that consent be specific and informed. Thus, the request needs to apply the principle of purpose specification; be granular and unbundled in terms of how consent for separate purposes is presented; be clearly separated from information about other matters (e.g., terms and conditions); and provide an adequate level of detail to inform the data subject; all while being intelligible, easily accessible, clear and plain to the data subject. 
To put this into practice, try to craft your message in a way that is “easily understandable for the average person and not only for lawyers.”17 Avoid technical and legal jargon and try to limit the information to that which is “relevant for making informed decisions on whether or not to consent”18 to avoid taking the data subject to the point where they are overwhelmed with information or give up on reading the notice entirely. Pursuant to this aim, use of layered and just-in-time notices should be favored over lengthy, hard to find notices. As previously discussed, the GDPR requires that data controllers must be able to demonstrate that valid consent was obtained from the data subject. The WP29 states that one way of doing this is to “keep a record of consent statements received” in order to show how and when consent was obtained, what information was provided to the data subject, and the workflow behind ensuring that the consent included each of the requisite elements.19 In practice, this could look something like this: For example, in an online context, a data controller could retain information on the session in which consent was expressed, together with documentation of the consent workflow at the time of the session, and a copy of the information that was presented to the data subject at that time. It would not be sufficient to merely refer to a correct configuration of the respective website.20 At OneTrust, we commonly analogize this to a credit card receipt. The receipt serves as evidence that the transaction took place—without it, it is as if the exchange never happened. The concept is the same here — no receipt means no consent. To address this, organizations can opt to use automated consent management tools that allow for embedding consent management directly into their websites, products, and internal systems, while capturing consent records along the way. Renew and refresh where applicable The GDPR does not specify how long a data subject’s consent will remain valid, or whether they degrade over time and/or expire. However, the WP29 has stated that “[h]ow long consent lasts will depend on the context, the scope of the original consent and the expectations of the data subject”; however, “[i]f the processing operations change or evolve considerably then the original consent is no longer valid.”21 Therefore, the WP29 recommends “as a best practice that consent should be refreshed at appropriate intervals.”22 Even if you disagree with interpretation, at the very least, refreshing consents will lend itself to ensuring that data subjects remain informed. Tools that provide visibility into your organization’s various processing activities and how they evolve over time may be helpful for tracking the validity of consents and whether they need to be refreshed. Furthermore, if consents for a particular processing activity do need to be refreshed, consent management tools can be used to communicate that to the data subject. Enable consent withdrawal Under Article 7(3), data subjects have the right to withdraw consent at any time, and withdrawing consent must be as easy as giving it. 
This means, for example, that “when consent is obtained via electronic means through only one mouse-click, swipe, or keystroke, data subjects must, in practice, be able to withdraw that consent equally as easily.”23 While the GDPR does not specify that giving and withdrawing consent must be able to be achieved through the same means, according to the WP29, “[w]here consent is obtained through use of a service-specific user interface … there is no doubt a data subject must be able to withdraw consent via the same electronic interface, as switching to another interface for the sole reason of withdrawing consent would require undue effort.”24 There are many ways in which this can be achieved — e.g., via web-interface, unsubscribe link, phone call, online preference management, etc. — but, at the end of the day, the goal is to ensure that the chosen process allows for data subjects to withdraw their consent in a way that is no more burdensome than the act of giving consent. It is important to note that a data subject’s withdrawal of consent does not necessarily mean that the data controller must cease processing and/or erase the personal data. If there is another legal basis that would apply under Article 6, the data controller may “migrate from consent (which is withdrawn) to this other lawful basis” (e.g., performance of a contract).25 However, the data subject must be notified of this change in legal basis in accordance with the information provision requirements found in Articles 13 and 14. For these reasons, “it is very important that controllers assess the purposes for which data is actually processed and the lawful grounds on which it is based prior to collecting the data. … Controllers should therefore be clear from the outset.”26 Where your organization considers relying on consent for a particular processing activity, make sure to also consider what other lawful grounds might apply — perhaps another is more appropriate, or could serve as a substitute basis in the event that consent is withdrawn. Synchronize, combine and conquer As mentioned earlier, the records of processing that you create for compliance with Article 30 can be tweaked to include information about legal basis. You can track what processing activities rely on consent and include details about how consent is obtained, as well as notes on what other legal bases might be appropriate for a given processing activity. Vice-versa, the consent records that you generate for compliance with Article 7 can be tied back to your records of processing or data subject requests to assist with compliance in those areas, such as if you needed to quickly trace a consent back to a specific processing activity where the data subject has requested withdrawal of their consent and subsequent erasure of their personal data under Article 17(b). We hear from privacy professionals every day who are working to prepare for the GDPR and looking for solutions. It can feel overwhelming at times — the seemingly endless requirements found in the GDPR, combined with the growing number of “tips,” “steps” and “recommendations” being thrown at them every day (yes, the authors are self-aware). How does the privacy professional balance it all? 
If there is a main takeaway from our discussions with organizations who come to us with these concerns, it is this — remember that the GDPR takes a risk-based approach to compliance, is meant to be an ongoing and evolving exercise in privacy and data protection, and that there are many areas of overlap within the GDPR that can be leveraged to your advantage. With the right mindset, tools and some creativity, you can be ready for the GDPR and beyond.
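Building on the 'credit card receipt' analogy above, the following hypothetical sketch (all names invented for illustration) shows one way consent receipts could be stored so that consent can later be demonstrated under Article 7(1) and withdrawn as easily as it was given. A production consent-management tool would add persistence, identity handling and audit controls.

```python
import uuid
from datetime import datetime, timezone

class ConsentStore:
    """Toy in-memory store of consent 'receipts' keyed by data subject and purpose."""

    def __init__(self):
        self.receipts = {}   # (subject_id, purpose) -> receipt dict

    def record_consent(self, subject_id: str, purpose: str,
                       notice_shown: str, workflow_version: str) -> dict:
        receipt = {
            "receipt_id": str(uuid.uuid4()),
            "subject_id": subject_id,
            "purpose": purpose,
            "notice_shown": notice_shown,          # what the subject saw at the time
            "workflow_version": workflow_version,  # which consent flow produced this record
            "given_at": datetime.now(timezone.utc).isoformat(),
            "withdrawn_at": None,
        }
        self.receipts[(subject_id, purpose)] = receipt
        return receipt

    def withdraw(self, subject_id: str, purpose: str) -> None:
        """Withdrawal is a single call — no harder than giving consent."""
        receipt = self.receipts[(subject_id, purpose)]
        receipt["withdrawn_at"] = datetime.now(timezone.utc).isoformat()

    def has_valid_consent(self, subject_id: str, purpose: str) -> bool:
        receipt = self.receipts.get((subject_id, purpose))
        return bool(receipt) and receipt["withdrawn_at"] is None

store = ConsentStore()
store.record_consent("subject-42", "marketing emails",
                     notice_shown="v3 marketing notice", workflow_version="2018-05")
print(store.has_valid_consent("subject-42", "marketing emails"))  # True
store.withdraw("subject-42", "marketing emails")
print(store.has_valid_consent("subject-42", "marketing emails"))  # False
```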
Source: https://www.cpomagazine.com/data-protection/consent-under-gdpr-requirements-recommendations-data-controllers/
July 27, 2021

Data comes at us fast and in many forms. These different forms can include structured, semi-structured, and unstructured data, and many people do not realize that a data warehouse and a data lake handle that data differently. A modern data estate should provide multiple methods of ingesting and storing the various data that businesses generate. Let's look further at these different types of data:

- Structured – traditional databases such as the transactional database for your ERP or CRM system, with formal column and table definitions
- Semi-Structured – files such as XML or JSON that are self-describing, with tags for elements and hierarchies
- Unstructured – images, video, audio, and other binary data

Traditional data warehouse designs have been around for many decades, while the concept, or at least the term, data lake is a somewhat newer construct. Each of these has a place in your organization's data estate.

The Data Warehouse

As we can see above, data sources can be very diverse and have different data representations, which can lead to divergent information. In addition, the large variety of schemas and structures in data sources makes it difficult to obtain consolidated information when a complete snapshot of the data is required from all business sub-systems. In general, this is the main reason for the emergence of data warehouse solutions. A data warehouse is a formal design, frequently based on design guidelines, that implements a formal ETL (Extract-Transform-Load) process to consume raw, structured data sets and load them into a model designed for reporting. Data warehouses are built on relational databases like Azure Synapse, previously Microsoft SQL Server. Azure Synapse is designed to store structured data in tables with traditional rows and columns, but it does have the capability to store semi-structured data like XML and JSON.

The Data Lake

A data lake flips the concept of ETL on its head and implements an ELT (Extract-Load-Transform) process. Ingesting data into the data lake is essentially just throwing everything you think may be valuable at some point into a large storage area, regardless of data type or structure. Data lakes can store structured, semi-structured, and unstructured data. Data lakes delivered in Microsoft Azure are built on storage accounts with Data Lake Storage Gen2 enabled when creating the storage account. The thought behind a data lake is that you want to consume all the data and sort through it at a later point, while the data warehouse requires identifying the value upfront, with significant investment in developing the ingestion. Due to the heavy upfront investment typically required to develop a data warehouse, if it is later determined that you need data that wasn't brought in initially, there is a risk the source data is no longer available and potentially gone forever.

Purpose: undetermined vs in-use

The purpose of individual data pieces in a data lake is not fixed. Raw data flows into a data lake, sometimes with a specific future use in mind and sometimes just to have on hand. This means that data lakes have less organization and less filtration of data than their counterpart. Processed data is raw data that has been put to a specific use.
Since data warehouses only house processed data, all of the data in a data warehouse has been used for a specific purpose within the organization. This means that storage space is not wasted on data that may never be used.

Accessibility and ease of use refer to the data repositories as a whole, not the data within them. Data lake architecture has no fixed structure and is therefore easy to access and easy to change. Plus, any changes that are made to the data can be done quickly, since data lakes have very few limitations. Data warehouses are, by design, more structured. One major benefit of data warehouse architecture is that the processing and structure of the data makes the data itself easier to decipher; however, the limitations of that structure make data warehouses difficult and costly to manipulate.

The Benefits of Both

Data lakes are a cost-effective way to store large amounts of data from many sources. Allowing data of any structure reduces cost, because the data is more flexible and scalable as it does not need to fit a specific pattern. However, structured data is easier to analyze because it is cleaner and has a uniform schema to query from. By restricting data to a schema, data warehouses are very efficient for analyzing historical data for specific decisions. Both a proper data warehouse and a data lake are critical to the future success of your organization and belong in your modern data estate.

What is a Data Estate?

Establishing a modern data estate is a foundational step toward digital transformation. A modern data estate enables timely insights and decision-making across all your data and sets the foundation for AI. A data estate is all of the data an organization owns. When you migrate this data to the cloud or modernize your environment on-premises, you can gain important insights to fuel innovation.

Microsoft Dynamics 365 Pre-Built Data Warehouse, DataCONNECT

Building a data warehouse can be very expensive and time-consuming: you need to properly review your source systems, design a data model, and create the necessary ETL to process it. MCA Connect developed our DataCONNECT Data Warehouse solution for Microsoft Dynamics AX, Dynamics 365 Finance, and Customer Engagement. This solution greatly accelerates the timeline for the delivery of a comprehensive data warehouse solution while reducing implementation costs. It is also a great way to start building your comprehensive data estate. DataCONNECT can fuel organizations with fast, accurate information, giving them the ability to predict, adapt and shape operations with precision. You will be able to quickly pull validated data into forecasting models, so you can begin your planning cycles for areas of your business. If you'd like to learn more about how the DataCONNECT Data Warehouse or a data lake can help your company store big data, contact us. One of our experts will be glad to guide you in the right direction.

The content & opinions in this article are the author's and do not necessarily represent the views of Manufacturing Tomorrow.
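To make the ETL-versus-ELT contrast above a bit more concrete, here is a minimal, hypothetical pandas sketch — not part of the original article, and unrelated to Azure Synapse or DataCONNECT. It assumes a local orders.csv extract with order_id, customer_id, order_date and amount columns. The warehouse path cleans and models the data before loading; the lake path lands the raw file first and applies a schema only when the data is read.

```python
import os
import shutil
import pandas as pd

# --- ETL (warehouse style): transform first, then load a curated schema ---
def etl_to_warehouse(source_csv: str, warehouse: dict) -> None:
    raw = pd.read_csv(source_csv)
    cleaned = (
        raw.dropna(subset=["order_id", "amount"])                  # enforce quality up front
           .assign(order_date=lambda d: pd.to_datetime(d["order_date"]))
           [["order_id", "customer_id", "order_date", "amount"]]   # fixed reporting schema
    )
    warehouse["fact_orders"] = cleaned                              # only modeled data is loaded

# --- ELT (lake style): land the raw file as-is, transform later when needed ---
def el_to_lake(source_csv: str, lake_dir: str = "lake") -> str:
    os.makedirs(lake_dir, exist_ok=True)
    target = os.path.join(lake_dir, os.path.basename(source_csv))
    shutil.copy(source_csv, target)                                 # raw data, untouched
    return target

def transform_from_lake(lake_file: str) -> pd.DataFrame:
    raw = pd.read_csv(lake_file)                                    # schema applied at read time
    return raw.groupby("customer_id", as_index=False)["amount"].sum()

warehouse: dict = {}
etl_to_warehouse("orders.csv", warehouse)          # hypothetical source extract
lake_file = el_to_lake("orders.csv")
print(transform_from_lake(lake_file).head())
```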
Source: https://internetofbusiness.com/the-modern-data-estate-data-lake-vs-data-warehouse-2/
According to the Journal of Data Science(JDC): “By ‘Data Science’ we mean almost everything that has something to do with data.” Today, data science has overtaken nearly every industry on the planet. There isn't a single industry in the world now that isn't reliant on data. As a result, data science has become a source of energy for businesses. Data Science Applications haven't taken on a new function overnight. We can now anticipate outcomes in minutes, which used to take many human hours to process, because of faster computers and cheaper storage. Future-oriented questions are addressed by data scientists. They begin with big data, which has three characteristics: volume, variety, and velocity. The information is then used to feed algorithms and models. Models that autonomously self-improve, recognizing and learning from their failures, are created by the most cutting-edge data scientists working in machine learning and AI. Data science, often known as data-driven science, combines several aspects of statistics and computation to transform data into actionable information. Data science combines techniques from several disciplines to collect data, analyze it, generate perspectives from it, and use it to make decisions. Data Mining, statistics, machine learning, data analytics, and some programming are some of the technical disciplines that make up the data science field. ( Interested in Data Science? Read this: Skills required to excel as Data Scientist ) You can sneak a glance at the video below to understand what Data Science is Data science is quickly becoming one of the most in-demand disciplines, with applications in a wide range of sectors. We know it has been revolutionizing the way we perceive data. Applications of Data Science in different sectors We have rounded up some applications of Data Science in action. Let's explore them : Different Sectors utilizing Data Science The healthcare industry, in particular, benefits greatly from data science applications. Data science is making huge strides in the healthcare business. Data science is used in a variety of sectors in health care. - Image Analysis in Medicine - Virtual Assistants and Health bots ( Also Read - How Data Science is simplifying Healthcare? ) To discover ideal parameters for jobs like lung texture categorization, procedures like detecting malignancies, artery stenosis, and organ delineation use a variety of methodologies and frameworks like MapReduce. For solid texture classification, it uses machine learning techniques such as support vector machines (SVM), content-based medical picture indexing, and wavelet analysis. Through genetics and genomics research, Data Science applications also offer a higher level of therapy customization. The objective is to discover specific biological links between genetics, illnesses, and medication response in order to better understand the influence of DNA on human health. ( Also Read - Introduction to EpiGenetics ) From the first screening of medicinal compounds through the prediction of the success rate based on biological variables, data science applications and machine learning algorithms simplify and shorten this process, bringing a new viewpoint to each stage. Instead of "lab tests," these algorithms can predict how the chemical will behave in the body using extensive mathematical modelling and simulations. 
The goal of computational drug discovery is to construct computer model simulations in the form of a physiologically appropriate network, which makes it easier to anticipate future events with high accuracy.

Virtual Assistants and Health Bots

Basic healthcare help may be provided via AI-powered smartphone apps, which are often chatbots. You just explain your symptoms or ask questions, and you'll get vital information about your health condition, gleaned from a vast network of symptoms and effects. Apps can also help you remember to take your medication on time and, if required, schedule a doctor's visit. ( Suggested Reading - Revolutionary Innovations in AI )

If you thought Search was the most important data science use, consider this: the whole digital marketing spectrum. Data science algorithms are used to determine virtually anything, from display banners on various websites to digital billboards at airports. This is why digital advertisements have a far greater CTR (Click-Through Rate) than conventional marketing: they can be tailored to a user's previous actions. That's the reason why you may see advertisements for data science training programs while someone else sees an advertisement for apparel in the same place at the same time. Learn More about CTR (Click-Through Rate)

Many businesses have aggressively exploited recommendation engines to advertise their products based on user interest and information relevancy. This method is used by internet companies such as Amazon, Twitter, Google Play, Netflix, LinkedIn, IMDb, and many more to improve the user experience. The recommendations are based on a subscriber's prior search results. ( Related: How Netflix maximizes Customer Experience )

The e-commerce sector benefits greatly from data science techniques and machine learning ideas such as natural language processing (NLP) and recommendation systems. E-commerce platforms can use such approaches to analyse consumer purchases and comments in order to gain valuable information for their business development. They use NLP to examine texts and online questionnaires, and collaborative and content-based filtering to evaluate data and deliver better services to their consumers. ( Suggested Read : Applications of NLP ) Recognizing the consumer base, predicting goods and services, identifying the style of popular items, optimizing pricing structures, and more are all examples of how data science has influenced the e-commerce industry.

In the field of transportation, the most significant breakthrough that data science has brought us is the introduction of self-driving automobiles. Through a comprehensive study of fuel usage trends, driver behavior, and vehicle tracking, data science has established a foothold in transportation. It is making driving safer for drivers, improving vehicle performance, giving drivers more autonomy, and much more. Vehicle makers can build smarter vehicles and improve logistical routes by using reinforcement learning and introducing autonomy. According to ProjectPro, popular cab services like Uber employ data science to improve pricing and delivery routes, as well as optimal resource allocation, by combining numerous factors such as consumer profiles, geography, economic indicators, and logistics providers. ( Also Read: Top Self Driving Car Companies )

The airline industry has a reputation for persevering in the face of adversity.
However, only a few airline service providers are managing to maintain their occupancy ratios and operating margins. Skyrocketing air-fuel costs and the pressure to offer discounts to customers have made this even harder. It wasn't long before airlines began employing data science to identify the most important areas for improvement. Airlines may use data science to make strategic changes such as anticipating flight delays, selecting which aircraft to buy, planning routes and layovers, and developing marketing tactics such as a customer loyalty programme.

Text, Speech and Advanced Image Recognition

Speech and image recognition are dominated by data science algorithms, and we can see their work in our daily lives. Have you ever asked a virtual assistant like Google Assistant, Alexa, or Siri for help? Behind the scenes, speech recognition technology is working to comprehend and evaluate your words and deliver useful results. Image recognition may be found on Facebook, Instagram, and Twitter, among other social media platforms: when you post a photo of yourself with someone on your profile, these applications offer to identify and tag them. ( Related blog - How AI and Big Data is used in Instagram )

Machine learning algorithms are increasingly used to create games that grow and upgrade as the player progresses through the levels. In motion gaming, your opponent (the computer) also studies your past actions and adjusts its game accordingly. EA Sports, Zynga, Sony, Nintendo, and Activision Blizzard have all used data science to take gaming to the next level. ( Also Read - Game Development Companies Integrating AI research )

Data science may also be utilized to improve your company's security and secure critical data. Banks, for example, utilize sophisticated machine-learning algorithms to detect fraud based on a user's usual financial activity. Because of the massive amount of data created every day, these algorithms can detect fraud faster and more accurately than people. Even if you don't work at a financial institution, such algorithms can be used to secure confidential material. Learning about data privacy may help your firm avoid misusing or sharing sensitive consumer information, such as credit card numbers, medical records, Social Security numbers, and contact information. ( Also Read - Best Data Security Practices )

Finance was an early adopter of data applications. Year after year, financial businesses were weighed down by bad loans and losses; they did, however, have a lot of data acquired during the initial loan approval process, and they decided to hire data scientists to help them recover from those losses. Finance and data science are inextricably linked, since both are concerned with data. Companies used to rely on piles of paperwork to authorize loans and keep them up to date, and still suffered losses and bad debt, so data science methods were proposed as a remedy. They learned to segment the data by consumer profile, historical expenditures, and other relevant characteristics in order to assess risk. Data science also aids in the promotion of banking products based on the purchasing power of customers. Another example is customer portfolio management, which uses business intelligence tools to evaluate data patterns.
Data science also powers algorithmic trading: financial organizations can use rigorous data analysis to make data-driven choices. As a result, customer experiences improve, as financial institutions can build a tailored relationship with their clients through thorough research of the client experience and adjustment to their preferences. ( How to Ensure Security while doing Digital Payments )

Different Industrial fields employing Data Scientists

Data on your clients may offer a lot of information about their behaviors, demographics, interests, aspirations, and more. With so many possible sources of consumer data, a basic grasp of data science may assist in making sense of it. For example, you may collect information on a customer every time they visit your website or physical store, add an item to their basket, make a purchase, read an email, or interact with a social network post. After you've double-checked that the data from each source is correct, you'll need to integrate it in a process known as data wrangling. ( Related blog - Customer Behavioral Analytics ) Matching a customer's email address to their credit card information, social media handles, and transaction identifiers is one example of this. You can make inferences and discover trends in their behavior by combining the data. Understanding who your consumers are and what drives them may help you guarantee that your product fulfills their needs and that your promotional strategies are effective.

This is the last of the data science applications that appear to have the most potential in the future. Augmented and virtual reality are among the most exciting uses of the technology. Because a VR headset combines computing expertise, algorithms, and data to provide you with the best viewing experience, data science and virtual reality have a natural connection. The popular game Pokemon GO is a modest step in the right direction, letting players wander about and see Pokemon overlaid on walls, streets, and other real-world objects. To determine the locations of the Pokemon and gyms, the game's designers used data from Ingress, the company's previous application. Data science, in turn, will make even more sense here if the VR economy becomes more affordable and consumers begin to use it the way they use other applications.

These aren't the only domains where data science may be used. Apart from these uses, data science is applied in marketing, finance, human resources, healthcare, government programmes, and any other industry that generates data. Marketing departments use data science to determine which product is most likely to sell. Data can provide insights, drive efficiency initiatives, and inform forecasts when critical thinking meets machine-learning algorithms. Understanding how to evaluate data sources, clean and organize information, and draw conclusions can be important skills in your job even if you aren't a data scientist.

Let's wait and see what more incredible data science applications the future has in store for us!
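Since recommendation engines come up several times above, here is a minimal, hypothetical sketch of the underlying idea — item-based collaborative filtering over a tiny, made-up user-item rating matrix. Real systems at companies like Netflix or Amazon are far more sophisticated; this only illustrates the principle.

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet" (toy data).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user: int, k: int = 2) -> list:
    """Score unrated items by similarity-weighted ratings of items the user liked."""
    n_items = ratings.shape[1]
    scores = {}
    for item in range(n_items):
        if ratings[user, item] != 0:          # skip items already rated
            continue
        num = den = 0.0
        for other in range(n_items):
            if other == item or ratings[user, other] == 0:
                continue
            sim = cosine_sim(ratings[:, item], ratings[:, other])
            num += sim * ratings[user, other]
            den += abs(sim)
        scores[item] = num / den if den else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(user=0))   # unrated items for the first user, best candidates first
```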
Source: https://www.analyticssteps.com/blogs/10-data-science-applications-real-life
Skilled tech talent is vital for the future of the tech industry at large, Micro Focus and its customers and partners are no exception. Technology is moving at speed and without the next generation of Science, Technology, Engineering & Maths (STEM) graduates and upskilled groups making their way into the workplace, the war for talent continues to threaten industries and global economies alike. As leaders in the space, we have a responsibility to ensure that the technologies, which are increasingly shaping our lives, represent all of us. That’s why, during the Micro Focus #virtualUniverse a panel of technology leaders took part in an online panel “Driving Business Success – Diversity & Digital Transformation” to examine what companies are doing to help increase the level of diversity in our industry, from the classroom to the boardroom. Here are the highlights: Engaging with young girls as early as possible is key to helping inspire and influence their perceptions, study and career choices. PWC’s Tech She Can program was cited as a great example of a collaborative approach. Designed to help address gender imbalance in the technology industry, The Tech She Can Charter is a commitment by organizations to work together to increase the number of women working in technology roles. It aims to tackle the root cause of the problem at a societal level by inspiring and educating young girls and women to get into tech careers and sharing best practice across the organizations involved. You need to see it to be it Role models are critical. Everyone can make a difference. The Micro Focus INSPIRE program which is designed to help communities acquire the right skills to thrive in a digital world with a big focus on inclusion and diversity provides employees with opportunities to volunteer in schools and community groups to talk about the exciting careers and pathways in technology. Career paths can be wiggly and/or ‘straight lines’ As we heard from our panellists, there are various routes into tech. One left school at 16 with dyslexia and started a career as a model, one aspired to be a ballerina and the other a vet before all three finally entered the tech industry where today they hold senior leadership roles. Understanding Unconscious Bias is key We all have unconscious bias. While it is completely natural and unintended, it can and does affect our decisions particularly in recruiting and managing teams. However, it can be mitigated and even removed through active awareness, deliberate action, and by developing new behaviors. To listen to the full discussion please watch the embedded video directly below. Feel free to find me on Twitter to talk directly.
Source: https://blog.microfocus.com/diversity-digital-transformation/
By Allen Badeau, Vice President and CTO, NCI Information Systems, Inc. What role will artificial intelligence (AI) play in homeland security and defense? AI is rapidly emerging as one of our most essential national security tools, with far-reaching applications for cybersecurity, intelligence and military readiness. But the rise of the machines doesn’t mean that humans are any less important. To the contrary—the biggest value of AI may come in its potential to augment human abilities. AI tools are available now that can help the people protecting our country detect and respond to emerging threats, make better decisions and prepare for evolving security challenges. Scaling Human Performance with AI When people hear “AI”, most think of sci-fi scenarios featuring fully autonomous, self-aware machines. While some forms of AI are capable of independent decision making and action within the scope of their programming, many of the most exciting applications of AI today pair computers and people to amplify the power of both. At NCI, we are leveraging machine learning, robotic process automation (RPA) and other AI technologies to scale human performance—in other words, to help people do more with less effort. NCI’s approach keeps humans at the center of AI development and deployment. Our work with government agencies ranges from military training exercises to detection of fraud, waste and abuse in billing data. Now, we have teamed up with Tanjo, Inc. to infuse new machine learning and data analytics capabilities into NCI’s AI’s platform, Shai (Scaling Humans with Artificial Intelligence). Combining Shai with the Tanjo Enterprise Brain and Tanjo Automated Personas (TAP) will open up new possibilities for advanced document categorization, threat detection and development of complex AI personas for national security applications. A human-centric approach to AI has several applications for homeland security and defense. Threat Detection and Response AI makes it possible to comb through millions of data points to look for trends and patterns not apparent to the human eye. With machine learning, algorithms develop statistical models using information gleaned from training data sets. These models are then applied to detect specific patterns in real-world data. In the security realm, this means training AI using machine learning to detect patterns in data that indicate an attack or emerging threat. The most obvious and immediate application is for cybersecurity. AI can monitor vast amounts of digital traffic and detect even small signals indicating a possible cyber attack. The same methods can be applied to detect threats in other types of data, such as intelligence data, social media traffic or other available data sources. In some cases, the AI may also be programmed to respond directly to certain types of attacks. But in others, a “human in the loop” approach, which escalates the data to a human expert for verification and response, is preferable. While humans do not have the information processing ability of AI, some types of decisions are better left to humans who can put the data into a broader context and apply moral, ethical, political or psychological judgments. Threat Profiling and Modeling In addition to detecting current threats, machine learning can be applied to develop profiles of possible future threats. For example, what patterns of behavior indicate that a person may be an insider threat? 
What indicators can we use to determine whether a region is gearing up for serious conflict or simply engaging in routine political posturing? AI can help us understand how different behaviors, characteristics and indicators are correlated with specific types of threats. Developing models of what these threats look like as they are emerging will help the humans involved better understand, recognize, respond to and potentially avert these threats. We can also develop AI “personas” based on specific threat profiles. These personas are given capabilities, goals and even “personalities.” The personas can then be thrown into simulated scenarios. By running thousands of simulations with different personas, we can start to predict how people with similar characteristics may behave under different circumstances in the real world. AI can reduce the administrative and cognitive burden for analysts and others working to support military missions and security initiatives. Using AI tools such as RPA and machine learning, we can automate much of the information gathering and initial analysis needed for effective decision making. For example, we can train an AI assistant to help an analyst responsible for interviewing people as part of a security investigation. The AI can pull together all of the documentation the analyst needs and perform the first level of analysis. This could involve categorizing documents and other digital information, identifying suspicious patterns that require further investigation, and prioritizing who should be interviewed and what questions should be asked. Automating the more routine and time-consuming parts of the investigative process allows analysts to focus their time and energy more efficiently on areas that require human judgment and expertise. An AI-augmented review process saves time for busy analysts, enables more rapid response, and reduces the cost of investigations. Training is another area where we can leverage AI to get better results with less time and expense. Real-world training exercises are expensive and time consuming. However, getting hands-on experience in applying skills and responding to realistic scenarios is an essential part of learning. This is especially important for people who will be placed in complex, high-pressure situations, such as emergency responders, soldiers and contractors working in battlefield conditions, and the people on the front lines of homeland security initiatives. With AI, we can build complex, realistic virtual training scenarios that closely mimic the situations trainees will face in the real world. These virtual training worlds can be populated with dozens, hundreds or even thousands of AI personas filling specific roles. These roles may include teammates, adversaries, superiors, direct reports, civilian victims or any other type of player the scenario requires. Unlike the static “non-player characters” of the video games of yesterday, which had a limited repertoire of responses, these AI personas can be programmed to respond in complex, realistic and even unpredictable ways. This provides a more meaningful and effective training experience for human trainees. AI-based virtual training can be used to expose trainees to experiences that are impossible, unethical or prohibitively expensive to simulate in the real world. Trainees can run through variations of the training as many times as required to cement the skills or decision making abilities we want to enhance. 
This type of virtual training will facilitate military and workforce readiness by getting personnel up to speed quickly and better preparing them for the challenges they will face in their missions.

Preparing for an AI-Driven Security Landscape

These AI technologies exist now and are already being deployed in some agencies. Over the next few years, we can expect AI's role in national security and defense to continue to grow and evolve. As we expand AI's role in our security processes, it will be important to monitor its use and impact to avoid potential mistakes and misuse. AI is programmed by humans, and is therefore subject to human biases. Biases in the algorithms or poor-quality training data can compromise results or lead to injustices, such as profiling that unfairly targets people based simply on race, ethnicity, country of origin, religion or sexual preference. Human judgment, review and correction will be required to ensure that results produced by AI are both accurate and fair.

The American AI Initiative has made development and deployment of AI a national priority—and for good reason. AI capabilities will be a decisive factor in military readiness and homeland security in the coming years. It is vitally important that we continue to nurture AI capabilities and expertise within American companies and our workforce. Doing so will enable us to maintain a strategic advantage in the development of new AI tools for security and defense, while ensuring that those tools reflect our values and priorities as a nation.

Allen Badeau is Vice President and CTO of NCI Information Systems, Inc. and the Director of the NCI Center for Rapid Engagement and Agile Technology Exchange (NCI CREATE). NCI is a leading provider of enterprise solutions and services to U.S. defense, intelligence, health and civilian government agencies.
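As a rough illustration of the 'human in the loop' pattern described above — and not a depiction of Shai, Tanjo or any real platform — the sketch below scores new network-traffic readings against a historical baseline, auto-responds only to the clearest anomalies, and escalates borderline cases to a human analyst. The thresholds and data are invented.

```python
import statistics

def triage(baseline_mb_per_min: list, new_reading: float) -> str:
    """Score a new reading against the historical baseline and decide who handles it."""
    mean = statistics.mean(baseline_mb_per_min)
    stdev = statistics.stdev(baseline_mb_per_min) or 1.0
    z = (new_reading - mean) / stdev

    if z > 6:
        return "auto-respond: throttle source and open incident"   # unambiguous attack pattern
    if z > 3:
        return "escalate: send to human analyst with context"      # human judgment needed
    return "log only"

baseline = [12.1, 11.8, 13.0, 12.4, 12.9, 11.5, 12.2]   # made-up historical readings
print(triage(baseline, 12.6))    # log only
print(triage(baseline, 15.5))    # escalate to analyst
print(triage(baseline, 40.0))    # auto-respond
```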
Source: https://www.nciinc.com/post/pairing-humans-and-ai-for-enhanced-national-security
Can labels help improve the cybersecurity of consumer software? Published on April 08, 2022 Poorly designed and secured software products can pose a security risk to users. Vulnerability in software can be exploited by threat actors, leading to theft of users’ personal data or financial information. Vulnerabilities in consumer software usually remain unknown for a long time unless they become evident in a particular context or usage case. This makes it difficult for consumers to timely identify and properly manage vulnerabilities. “While consumers can test the brakes on a bike before going for a ride, vulnerabilities in products and the consequences of their exploitation […] are more difficult to perceive”. End-users rarely know what’s sold to them in a software “black box”: its source code or code components cannot be checked since most consumers don’t have the technical expertise. So, how can consumers distinguish a more secure product from a less secure one? Labels are one option to address this information asymmetry on the market. Often, those who sell software have disproportionately more information about the security and reliability of software than consumers. This situation bears risks for both sides – those who produce and sell software and those who purchase it – since poorly secured software in supply chains can impact many users, even leading to safety risks that boomerang back to suppliers. Introducing voluntary labelling of products’ security is one of the regulatory practices these days aimed at helping consumers make informed purchasing decisions and, thus, driving demand for secure products. Labels can help put security into products’ features; this has been discussed in the Geneva Dialogue by governments, industry and standardization bodies. Labels are also being discussed by the OECD as an option to increase transparency and consumer awareness, and have already been implemented in some countries for consumer IoT products. In the U.S., we may see similar trends where a President’s Executive Order has already prescribed steps on IoT consumer product labeling programs. While there are certain traps of labelling schemes, overall, labels will likely continue to develop and address risks in consumer IoT or smart devices. But can labels potentially be just as effective for consumer software products? Unfortunately, the answer to this is ‘not so simply’. Why ‘not so simply’? Consumers often make their purchasing choices based on reviews. So, to ensure that a label serves as a useful tool and indeed helps raise consumer awareness, it needs to rely on product ratings based on universally-applied industry criteria. For instance, in the cybersecurity industry, independent testing organizations such as AV-TEST or AV-Comparatives conduct regular security evaluations and publicly provide the results to help consumers make informed choices before purchasing cybersecurity solutions. These organizations also apply a common methodology in security evaluations, so all products are compared using a single approach. Testing results: An example from AV-Comparatives ‘Advanced Threat Protection Test 2021’ test of consumer security software https://www.av-comparatives.org/tests/advanced-threat-protection-test-2021-consumer/. Example evaluation: AV-Test's evaluation of mobile security products for Android, November 2021 https://www.av-test.org/en/antivirus/mobile-devices/android/november-2021/. 
Furthermore, unlike consumer IoT devices, modern software products rely heavily on multiple components – including those of third parties and open-source libraries. Such a complex architecture represents an increased attack surface for threat actors, making consumer software products potentially more vulnerable. That's why software manufacturers should continuously implement cyber-risk monitoring, including third-party risk monitoring. This multi-component nature of modern consumer software also poses a challenge to traditional labeling schemes – labels can hardly be expected to keep up with the pace of a software product's lifecycle. The solution could be a properly developed set of baseline criteria, so that viable and critical characteristics form the foundation of security assessments that determine whether software products are sufficiently secure.

In sum, could labels potentially be as effective for consumer software products as for consumer IoT or smart devices? We'd probably answer 'yes' when labels rely on software product ratings which are (i) publicly available to consumers, (ii) based on security evaluations by independent third-party organizations, and (iii) based on a universally applied set of baseline security criteria.

What could baseline security criteria include?

The U.S. NIST has proposed draft baseline criteria for consumer software cybersecurity labeling and developed four categories of attestations under which consumers can get more information about secure software development, data protection practices and more. Last December NIST opened a call for submissions and published those received from Google, Kaspersky, Microsoft, and others. We recommend leaders and decision-makers check these criteria and follow the upcoming updates from NIST, as the criteria provide a constructive, risk-based approach. At the same time, some improvements could be considered.

First, 'level and speed': to be feasible and realistic, as well as to serve the intended security purposes, labels need to rely on criteria of a similar nature, level and updating speed. Practically speaking, the criterion 'free from known vulnerabilities' should be updated regularly, while a criterion indicating that 'a secure development process' has been implemented can be updated less often and does not require the same frequency of security attestations. Otherwise, software manufacturers would be forced to update labels too often and bear additional costs.

Second, static criteria for dynamic consumer software products would hardly work, and thus would be unlikely to help consumers. For instance, an attestation that software is 'free of known vulnerabilities' might become obsolete the day after the label is issued: new information about significant vulnerabilities appears every day, and any of it may affect a particular software product. Instead, a label that focuses on whether a software manufacturer implements appropriate, relevant security controls and processes would be more pragmatic and realistic.

A way forward?

Overall, labels or labeling schemes for consumer software could be a working solution to increase security in purchasing software products.
It is encouraging to see that NIST followed some of the industry feedback in the finalized ‘Recommended Criteria for Cybersecurity Labeling of Consumer Software’ (February 2022), and in particular replaced the criterion discussed above with ‘Practices Secure Design and Vulnerability Remediation’ and ‘Practices Responsible Vulnerability Reporting Disclosure’. It is also interesting that NIST added a descriptive claim on ‘Secure Update Method’ and re-phrased the former ‘Software End of Support Date’ as ‘Minimum Durations of Security Update Support’. We hope to see further developments in industry and regulatory practice toward agreed baseline criteria and mutually recognized labels, while also ensuring that software manufacturers remain incentivized to undergo additional security attestations. For now, within the Geneva Dialogue, industry partners continue to discuss the steps needed to ensure the security of digital products and reduce vulnerabilities in them. These efforts work toward the ultimate goal of greater security and safety for users.
<urn:uuid:86280e82-2817-4098-b6e6-a33aba3ba485>
CC-MAIN-2022-40
https://www.kaspersky.com/about/policy-blog/general-cybersecurity/can-labels-help-improve-the-cybersecurity-of-consumer-software
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00089.warc.gz
en
0.928068
1,513
2.65625
3
How many sites and services do you keep accounts with? Between shopping, banking, apps, work and social media, do you have 20 accounts? Or perhaps more than 100? Tap or click here for 10 tips to secure your accounts with strong passwords. We hope you’re not using the same password for all your accounts. This is one of the biggest mistakes you can make regarding online security. If a hacker discovers your login credentials, they can use them to try logging into all types of accounts until they find one that grants them access. If you haven’t taken it seriously already, here’s a significant risk of using the same passwords. A report from identity management service Okta reveals that 34% of overall login attempts result from credential stuffing or hackers using stolen credentials to force their way into multiple accounts. The company found over 10 billion examples of credential stuffing on its platform in the first 90 days of 2022. Hackers get your credentials from volumes of stolen or breached logins and use them to target online platforms with multiple login attempts. This strains the servers, impacting everyone who uses the site. Once inside an account, the hacker looks for credit card numbers, Social Security numbers, financial information and other valuable data. They can use this information against you or hold it for ransom. How to protect your accounts from these attacks Take some time out of your day to check off each step below. You’ll be saving yourself from a lot of trouble in the future: - Use strong, unique passwords — Tap or click here for an easy way to follow this step with password managers. - Safeguard your information — Never give out personal data if you don’t know the sender of a text or email or can’t verify their identity. Criminals only need your name, email address and telephone number to rip you off. - Always use 2FA — Use two-factor authentication (2FA) for better security whenever available. Tap or click here for details on 2FA. - Check haveibeenpwned.com — Enter your email address into this online database to reveal which data breaches you might be involved in. - Avoid links and attachments — Don’t click on links or attachments you receive in unsolicited emails. They could be malicious, infect your device with malware and/or steal sensitive information. - Beware of phishing emails — Scammers piggyback on breaches by sending malicious emails to trick you into clicking their links that supposedly have important information. Look out for strange URLs, return addresses and spelling/grammar errors. - Antivirus is vital — Always have a trusted antivirus program updated and running on all your devices. We recommend our sponsor, TotalAV. Right now, get an annual plan with TotalAV for only $19 at ProtectWithKim.com. That’s over 85% off the regular price!
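As a concrete illustration of the haveibeenpwned.com check mentioned above, here is a minimal Python sketch (not from the article) that queries the related Pwned Passwords range API to see whether a given password appears in known breach corpora. It assumes the third-party requests library is installed and that the api.pwnedpasswords.com range endpoint is reachable; only the first five characters of the password's SHA-1 hash ever leave the machine (k-anonymity), so the password itself is not disclosed.

```python
import hashlib

import requests  # third-party; pip install requests


def password_breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Uses the k-anonymity range API: only the first 5 hex characters of the
    SHA-1 hash are sent, so the full password never leaves this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():        # each line is "HASH_SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = password_breach_count("password123")
    print(f"Seen in {hits} breaches" if hits else "Not found in known breaches")
```

A password that returns a non-zero count has already appeared in breach dumps and should be considered burned, which is exactly the situation credential-stuffing attacks exploit.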
<urn:uuid:90cc9cd1-14ed-4da6-bc8b-0dd68ceb21fa>
CC-MAIN-2022-40
https://www.komando.com/security-privacy/never-reuse-passwords/857580/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00089.warc.gz
en
0.891414
602
2.609375
3
Anyone who has ever conducted an online search knows that personal information is often easily accessible on the internet. It seems like there are no restrictions when it comes to finding personal information about people online. In response to these concerns, Google expanded its options for protecting personal information. Personally Identifiable Information, or PII, is private information used to prove you are who you say you are. Keeping this information secure is important for individuals and organizations alike, especially if the organization manages customers’ PII.
The Risk of Identity Theft
The risk of identity theft is extremely high when PII is not protected. Identity thieves victimize companies and their customers through exposed personal information, making this a grave concern for everyone. In 2020, identity theft was the result of 65% of data breaches. People with active social media accounts are 30% more likely to become victims of identity theft. If you have Snapchat, Facebook, or Instagram, you’re 46% more likely to become a victim.
Google’s New Policy
Users can now request that personal information (phone numbers, email addresses, and physical addresses) be removed from search results. Social security numbers and banking information should also be removed to avoid identity theft. Google is helping to protect users’ privacy and safety in the digital age by allowing them greater control over their personal information online. Google’s new policy gives you the tools you need to stay safe and secure online.
Having personal contact information publicly available online can pose a significant threat to one’s personal safety and security. For this reason, 86% of people choose to remove such personal details from their social media profiles and other online platforms. Google reviews the information carefully before deciding whether to remove it or let it stay. Any information that is vital for preserving historical records or enabling communication among members of a community will not be removed. Google’s goal is to balance protecting personal privacy with making sure valuable information remains accessible online.
Google’s Requirements for Removing Personal Information
For Google to consider the content for removal, it must pertain to the following types of information:
- Confidential government identification (ID) numbers like U.S. Social Security Numbers
- Bank account numbers
- Credit card numbers
- Images of handwritten signatures
- Images of ID docs
- Highly personal, restricted, and official records, such as medical records
- Personal contact info (physical addresses, phone numbers, and email addresses)
- Confidential login credentials
More Ways to Protect Your Information
It is important to remember that even if you remove information from search results, it will still be on the internet. Removing information from search results just makes it more difficult for someone to find. If you want to remove personal information from a website, you will need to reach out to the website’s host.
Download our Free Resource
Restricting what you share on social media platforms and the information you provide to other websites can better ensure your PII (Personally Identifiable Information) stays secure.
<urn:uuid:fb372c75-4da2-406e-98cc-7cfbd66cc0d7>
CC-MAIN-2022-40
https://www.ctgmanagedit.com/remove-personal-information-from-google/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00089.warc.gz
en
0.90481
615
2.640625
3
File transfer protocol (or FTP for short) is a protocol that allows computers to transfer files to a server over a network. Store host information, username, password, ports, types, and mode in Hypervault.
Different settings of FTP connections:
FTP, FTPS, FTPES
Active / passive / standard
- Active mode: the client listens on a random port and the server initiates the incoming data connection to the client
- Passive mode: the FTP server provides a port and IP address that the FTP client then connects to for data transfers
Upload exported XML files from, for example, FileZilla directly under the right client in the vault.
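As a rough illustration of the connection settings listed above, the following Python sketch uses the standard-library ftplib to open a plain FTP session in passive mode and an explicit-TLS (FTPS/FTPES-style) session. The host name, credentials and file name are placeholders, not values from Hypervault.

```python
from ftplib import FTP, FTP_TLS

HOST = "ftp.example.com"            # placeholder host
USER, PASSWORD = "user", "secret"   # placeholder credentials

# Plain FTP in passive mode: the client initiates both the control and the
# data connections, which usually works better behind NAT and firewalls.
with FTP(HOST, timeout=30) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.set_pasv(True)              # False would request active mode instead
    print(ftp.nlst())               # list files in the current directory

# FTPS/FTPES-style explicit TLS: same protocol, but the control channel is
# upgraded to TLS at login and prot_p() protects the data channel as well.
with FTP_TLS(HOST, timeout=30) as ftps:
    ftps.login(USER, PASSWORD)
    ftps.prot_p()
    with open("export.xml", "wb") as fh:
        ftps.retrbinary("RETR export.xml", fh.write)
```

Passive mode is usually the safer default behind NAT, since the client initiates every connection itself.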
<urn:uuid:6c2c30cc-b8db-4652-8bc5-5d5c53a28458>
CC-MAIN-2022-40
https://hypervault.com/data_templates/ftp-data-access-information/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00089.warc.gz
en
0.811841
135
2.578125
3
Supply chain planning (SCP) is a process that helps businesses ensure that they have the right amount of goods and services to meet customer demand. It involves forecasting future demand and then organizing production and inventory accordingly. In this blog post, we will discuss the definition of SCP and the different types and processes involved in supply chain planning.
What is supply chain planning?
It is part of supply chain management (SCM). The definition of supply chain planning can be summed up as ensuring a business has the right amount of goods and services to meet market demand. It involves forecasting future demand and then organizing production and inventory accordingly. There are various types of SCP, each with its own set of processes.
Types of SCP
There are three types of supply chain planning:
Short term planning - This type is used to plan and manage production for weeks or months. It involves forecasting demand, organizing production, and managing inventory.
Long term planning - This type is used to plan and manage production for the next few years.
Integrated supply chain management - This type is a comprehensive approach that integrates short- and long-term planning into a single process.
Supply chain planning strategies
There are a few key strategies that can help make SCP more effective.
1. Having accurate and up-to-date data is essential for making informed decisions. This means having a system to track inventory levels, sales trends, and supplier performance.
2. Planning is vital for ensuring there is enough stock to meet market demand while avoiding overstocking, which leads to waste and lost profits.
3. Regularly reviewing and adjusting plans based on changing conditions is critical for ensuring that the supply chain runs smoothly. Factors that can prompt plan adjustments include changes in customer needs, supplier reliability, and market conditions.
By following these key supply chain strategies, supply chain planners can develop more effective plans that will help improve the supply chain’s overall efficiency.
Supply chain planning process
There are five main processes involved in supply chain planning:
Demand planning - This is part of the demand management process. The demand planning process involves predicting future demand for goods and services. This forecasting is done by analyzing past sales data and trends in market demand. The output of this process is a demand plan.
Capacity planning - Capacity planning determines how much supply a company has to meet forecasted demand. It includes organizing production, setting up suppliers, and managing inventory.
Production planning - Production planning determines what products will be made and when they will be made. This includes setting up schedules, ordering materials, and preparing production lines.
Sales and operations planning (S&OP) - Sales and operations planning (S&OP) brings sales and operations together to make decisions affecting both departments. The goal of S&OP is to ensure that the two departments are working together to meet consumer demand. Therefore, sales and operations planning should be done regularly, preferably monthly. The process of sales and operations planning usually involves the following steps:
- Collecting data
- Analyzing data
- Developing plans
- Communicating plans
- Tracking results
Inventory planning - This process manages stock levels to ensure there is enough supply to meet consumer needs. It includes setting reorder points, ordering stock, and tracking inventory levels.
Another tool that forms part of supply chain planning is distribution requirements planning (DRP).
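To make the demand planning step above more concrete, here is a small illustrative Python sketch (not part of the original post) that forecasts next month's demand with a simple moving average. Real demand-planning tools would use richer models such as the trend, regression or causal analysis mentioned above, and the sales figures here are invented.

```python
def moving_average_forecast(monthly_sales, window=3):
    """Forecast next month's demand as the mean of the last `window` months.

    A deliberately simple stand-in for the trend, regression or causal
    models a real demand-planning tool would use.
    """
    recent = monthly_sales[-window:]
    return sum(recent) / len(recent)


# Example: the last six months of unit sales for one product (invented numbers)
sales_history = [120, 135, 128, 150, 160, 155]
print(moving_average_forecast(sales_history))  # -> 155.0
```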
Five Steps to Supply Planning and S&OP Success
1. Creating a supply plan – A supply plan is created by forecasting future demand and organizing production and inventory to meet that demand.
2. Forecasting future demand – Future demand is forecast by analyzing past sales data and trends in customer demand. Various forecasting methods, including trend analysis, regression analysis, and causal models, can be used.
3. Determining inventory levels – Inventory levels are determined by calculating how much supply is needed to meet forecasted demand. This process includes setting reorder points, ordering stock, and tracking inventory levels.
4. Planning production – Production is planned by setting up schedules, ordering materials, and preparing production lines. S&OP integrates short-term and long-term supply chain planning into a single process.
5. Adjusting plans as needed – The supply plan and inventory levels are adjusted to meet changes in demand. This can involve modifying production schedules, ordering more or less stock, and adjusting inventory levels.
These five steps work together to help a company keep its supply chain running smoothly and meet customer demands. A short worked sketch of steps 2, 3 and 5 follows after the benefits section below.
Flows in the supply chain
- The flow of products and materials between suppliers and manufacturers
- The flow of products and materials between manufacturers and distributors
- The flow of products and materials between distributors and retailers
- The flow of information throughout the supply chain
Supply chain planning examples
One standard supply chain planning example is a company that sells products online. In this case, the SCP process would involve forecasting how many products the company will sell in the next month, figuring out how many resources (such as employees, products, and shipping supplies) are needed to meet that demand, and then arranging for the production and delivery of those resources.
Another example is a company that needs to replenish its stock of products. The company would need to forecast how much product it will need and then order that amount from its suppliers. The company also needs to plan for changes, such as increasing demand for a product. This could require the company to place a larger order with its suppliers, or to find a new supplier if its current supplier can’t meet the increased demand.
Benefits of supply chain planning
Supply chain planning benefits production operations substantially. Other benefits include:
Increased profits - Planning helps companies produce the right products in the right quantities, increasing profits.
Improved customer service - Supply chain planning helps businesses meet customer orders quickly and efficiently by forecasting demand and organizing production.
Reduced inventory costs - Properly managed inventory can help reduce stock-outs and excess inventory costs.
Improved supply chain coordination - When all parts of the supply chain work together, the result is a more efficient and coordinated supply chain. This can lead to increased profits and improved customer service.
Less waste and fewer delays - By forecasting demand, companies can plan production, which helps avoid wasted time and resources due to last-minute changes in the market. This also helps eliminate or reduce delays when different parts of the supply chain are not coordinated.
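As promised above, here is a hedged Python sketch of steps 2, 3 and 5: it applies two textbook simplifications – a reorder point and a naive net-requirements calculation – to show how forecasted demand, inventory levels and plan adjustments fit together. All figures are invented for the example and are not from the article.

```python
def reorder_point(avg_daily_demand, lead_time_days, safety_stock):
    """Stock level at which a replenishment order should be placed."""
    return avg_daily_demand * lead_time_days + safety_stock


def net_order_quantity(forecast_demand, on_hand, on_order):
    """Naive net requirement: order only what the forecast needs beyond
    what is already in stock or already incoming."""
    return max(0, forecast_demand - on_hand - on_order)


# Invented example figures
print(reorder_point(avg_daily_demand=40, lead_time_days=5, safety_stock=60))   # -> 260
print(net_order_quantity(forecast_demand=1200, on_hand=300, on_order=200))     # -> 700
```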
How to get started with supply chain planning If you’re interested in getting started with SCP, here are a few tips: Talk to your suppliers Your suppliers can be a valuable resource when it comes to capacity planning. First, ask them about their current production capabilities and plans. Use historical data Historical data can help forecast future demand. Use past sales, surveys, and market research data to help you plan demand. Use software tools Software tools can automate the supply chain planning process and make it easier to manage. Various supply chain management software options are available, so find one that fits your needs. SCP is critical for any business to meet customer demand while maximizing profitability. What are the tools used for SCP? Companies can use various management tools and techniques for effective supply chain planning. However, some of the critical supply chain drivers are: - Sales and Operations Planning (S&OP) – S&OP is a process that helps companies to align supply and demand. It involves forecasting future sales, organizing production, and managing inventory. - Master Production Schedule (MPS) – An MPS is a schedule showing each product’s planned production. It can plan production, track inventory levels, and set supply chain deadlines. - Material Requirements Planning (MRP) – MRP is a process that helps companies order the correct amount of materials to meet production demands. It uses past sales data and current inventory levels to calculate how much material needs to be ordered. - Just-in-Time (JIT) – JIT is a supply chain strategy that aims to reduce inventory levels and improve delivery times. It relies on suppliers to deliver materials just in time for production. - Capacity Requirements Planning (CRP) – CRP is a process that helps companies to plan for future production needs. It uses historical data and current capacity levels to forecast future production demands. - Inventory Management – Inventory management is tracking and managing inventory levels. This can include tasks such as ordering, stocking, and shipping inventory. - Transportation Management – Transportation management is organizing and coordinating transportation resources. This can include selecting carriers, arranging shipments, and tracking freight. Who does the supply chain planning in an enterprise? In most organizations, supply chain planning is a centralized function carried out by supply chain planners. The supply chain planners are responsible for developing and executing the supply chain strategy, ensuring that the supply chain can meet the demands of the business. Supply chain professionals use various tools and processes to carry out their work. Some of the most common tools include supply chain modeling, inventory management, and forecasting. The new trend in supply chain planning The new trend in supply chain planning is moving away from traditional, siloed planning processes and towards collaborative, end-to-end supply chains. This involves breaking down the barriers between different parts of the supply chain and working together to plan and execute supply chain operations. Another trend is the use of big data and analytics. With so much data available, businesses can use analytics to develop predictive models that help them forecast demand more accurately. This allows them to better plan their production and inventory levels, which leads to increased profitability and improved customer service. 
The future of supply chain planning The future of supply chain planning will likely include more automation and artificial intelligence (AI). AI can help planners optimize their supply chains by predicting demand, identifying bottlenecks, and recommending solutions. There are several ways that businesses can use AI in SCP, including the following: Companies can use AI to predict future demand based on past data. Demand planning can help companies to plan their production and inventory levels better. Supply network design AI can help companies optimize their supply networks, ensuring that they have the right amount of suppliers and inventory at suitable locations. AI can help companies optimize their order processing, reducing time and money spent on processing orders. Overall, AI can play a vital role in improving supply chain execution and ensuring that companies meet customer satisfaction. In addition, companies can improve their efficiency and competitiveness in the market by using AI. Another potential trend in supply chain planning is the use of blockchain technology. For example, companies could use blockchain to create a distributed ledger of transactions they can share between supply chain partners. This would help to ensure transparency and trust among supply chain partners. It is still early for blockchain in supply chain planning, but it holds great potential for the future. As businesses become more familiar with the technology and its benefits, we expect to see more blockchain implementations in supply chain processes. Big data and analytics The use of big data and analytics is a trend that is likely to continue in supply chain planning. With so much data available, businesses can use analytics to develop predictive models that help them forecast demand more accurately. This allows them to achieve integrated business planning, increase profitability, and improve customer service. In supply chain planning, you should consider three key things: minimizing stock-outs and other supply disruptions, ensuring your suppliers are reliable and cost-effective and meeting customer demand. This article details supply chain planning and the tools that can help you do it more effectively.
<urn:uuid:4d4d386b-39b1-455e-890c-34ce4504941c>
CC-MAIN-2022-40
https://www.erp-information.com/supply-chain-planning-scp
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00089.warc.gz
en
0.931877
2,373
3.53125
4
<syslog_output> <server>192.168.1.1</server> </syslog_output> Many people are familiar with the above commands, which are used for forwarding syslog to the syslog server (192.168.1.1). However, do you know how syslog can be helpful? Read on to find out how and see some different use cases for enabling syslog in your infrastructure. The primary use of syslog is for systems management. Proactive syslog monitoring pays off because it can significantly reduce downtime of servers and other devices in your infrastructure. Additionally, you should receive cost savings from preventing the loss of productivity that usually accompanies reactive troubleshooting. Alerting is another good use of syslog. You have a variety of options and severity levels that you can choose in setting up syslog alerts, including emergency, critical, warning, error, and so on. Also, alerts have fine points like host details, time period, and log/message details. There are a number of different areas where syslog alerting is useful. - Network alerting: Syslog is extremely helpful in identifying critical network issues. For example, it can detect fabric channel errors on a switch fabric module. This is one of many such warnings or errors that other forms of monitoring metrics cannot detect. - Security alerting: Syslog messages provide detailed context of security events. Security admins can use syslog to recognize communication relationships, timing, and in some cases, an attacker’s motive and tools. - Server alerting: Syslog can alert on server startups, clean server shutdowns, abrupt server shutdowns, configuration reloads and failures, runtime configuration impact, resource impact, and so on. All these alerts can help detect if the servers are alive. Syslog also helps detect failed connections. Server alerts are always useful, especially when you oversee hundreds of servers. - Application alerting: You need application alerting for troubleshooting live issues. Applications create logs in different ways, some through syslog. When you run a web application, dozens of logs are written in the log folder. To get real-time monitoring, you need a syslog monitoring solution that can observe changes in the log folder. Monitoring high-availability (HA) servers is important and another good use of syslog. However, not all the logs from the HA server are important. You just need to monitor the logs that are troublesome. However, in case of a HA server failure, you still need all the logs from the server. The solution for this is to have a dedicated syslog server for your HA cluster. Despite the importance of proactive monitoring, some logs can only be analyzed later. Sometimes an alert or an error sends only basic details that are located in the local memory buffer. For detailed analysis, you need to dig into the historical syslog reports using any syslog analysis tool, like LogZilla®, Kiwi Syslog®, or syslog-ng™. Historical syslog data can often provide comprehensive details, like configuration changes, high momentary error rates, or a sustained abnormal condition, that cannot be shown using other forms of monitoring. Proactive syslog monitoring and troubleshooting reduces trouble tickets because you detect and resolve issues before they even become trouble tickets. A synchronous web dashboard, alerting system, and log storage (with search options) are the basic features of any syslog monitoring tool. Moreover, integrating the syslog monitoring tool with other infrastructure management tools adds value to your syslog monitoring. 
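To show how an application could feed the syslog server from the configuration snippet above (192.168.1.1), here is a minimal Python sketch using the standard-library SysLogHandler. The UDP port 514, the LOCAL0 facility and the message format are assumptions made for illustration, not a recommended production setup.

```python
import logging
import logging.handlers

# Forward application logs to the syslog server from the snippet above
# (192.168.1.1); UDP port 514 and the LOCAL0 facility are assumptions.
handler = logging.handlers.SysLogHandler(
    address=("192.168.1.1", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("disk usage above 90% on /var")       # maps to syslog severity 'warning'
logger.error("database connection pool exhausted")   # maps to severity 'error'
```

Messages emitted this way land in the same collector as device and server syslog, so the application alerts described above can be filtered and escalated alongside network and security events.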
Try a Syslog Server Solution Today Kiwi Syslog® Server is a multi-purpose tool for IT admins, allowing them to collect, filter, alert, forward, react to, archive, and store logs. Whether you collect syslog messages, SNMP traps, or Windows® event log messages, Kiwi Syslog Server is a great utility to set up automated log storage and clean-up schedules. Don’t believe it? Give it a try!
<urn:uuid:ffc0a3eb-a1d6-4793-bdfd-acbef381025a>
CC-MAIN-2022-40
https://logicalread.com/is-syslog-useful/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00089.warc.gz
en
0.88703
827
2.578125
3
The laws of cyber-warfare are being rewritten in Europe. The Russo-Ukrainian War is not limited to the hot conflict at fire zones of the front. It is possible to hear the echoes of war in the cyber world too. In our digital world, data is one of the most valuable assets. Every nation has its own strengths and weaknesses, but those who are able to control and process data go one step further than others. Cyber wars are not only limited to what we read in the newspapers. The consequences of an attack on a data center can be life-threatening because people may not be able to access vital services. Nowadays, cybersecurity risks even threaten the healthcare industry. War is bad wherever it happens, but the conflict between Russia and Ukraine occurs in a very important geographical location. Disruptions in communication in this region, which is the center of grain and energy production, will affect Europe and the whole world. And it already does. For this reason, the Russo-Ukrainian War is also a turning point for cyber wars, both due to the current state of technological development and geopolitical reasons. Increasing cyber threats in Ukraine The SSSCIP reported an increase in cyberattacks in the second quarter of this year earlier this month. The national telecommunications company of the nation, Ukrtelecom, was the target of a cyberattack in April that used hacked employee credentials. Since Russia’s invasion, cyberattacks have been on the rise, but a spike was observed in the second quarter of 2022, when 19 billion events were processed by Ukraine’s national Vulnerability Detection and Cyber Incidents/Cyber Attacks System, according to the cyber agency. The number of cyber events that were reported and investigated rose from 40 to 64. The number of occurrences in the “malicious code” category increased by 38% compared to the first quarter of the year, indicating a “significant increase” in malicious hacker group activities in spreading malware. “The main goal of hackers remains cyber-espionage, disruption of the availability of state information services and even destruction of information systems with the help of wipers,” the SSSCIP stated. Compared to the first quarter, the number of important events coming from Russian IP addresses declined by 8.5 times. This has been made possible by security measures set up by electronic communication networks and internet access providers that, according to the SSSCIP, restrict IP addresses used by the Russian Federation. Since IP addresses may be manipulated, the USA currently has the highest number of occurrences from source IP addresses, but this does not necessarily indicate that the country is the source of attacks. High-voltage electrical substations in Ukraine were attacked using a new variation of the Industroyer virus in July, which was connected to a 2016 attack on Ukraine by the Russian Sandworm gang. Additionally, CISA has taken additional measures to safeguard US-based enterprises. In February, it started a campaign called Shields Up to alert domestic institutions to get ready for potential Russian cyber attacks. US and Ukraine collaborate on cybersecurity measures The US and Ukraine cybersecurity agencies have inked a contract for closer collaboration on cybersecurity. A Memorandum of Cooperation (MoC) between the US Cybersecurity and Infrastructure Security Agency (CISA) and the Ukrainian State Service of Special Communications and Information Protection of Ukraine (SSSCIP) was signed, and it was revealed yesterday. 
According to CISA, it will strengthen the current connection between the two entities. According to the agreement, the agencies will share knowledge and best practices about cyber events. Oleksandr Potii, the SSSCIP’s vice chairman, said they will also communicate in real-time technical details on critical infrastructure security. The MoC also permits the two organizations to do joint training exercises. “Cyber threats cross borders and oceans. So we look forward to building on our existing relationship with SSSCIP to share information and collectively build global resilience against cyber threats,” said CISA Director Jen Easterly, calling out Russia for “cyber aggression” in what she said was an unprovoked war. Europe plays its own part with CRRTs Not just the US government assists Ukraine in fending off Russian cyber-aggression. The EU also sent a Cyber Rapid Response Team (CRRT) to assist the nation in February. Lithuania served as the team’s leader, and Croatia, Poland, Estonia, Romania, and the Netherlands assisted. “Cyber Rapid Response Teams (CRRTs) will allow the member states to help each other to ensure a higher level of cyber resilience and collectively respond to cyber incidents. CRRTs could be used to assist other member states, EU Institutions, CSDP operations as well as partners. CRRTs will be equipped with a commonly developed deployable cyber toolkits designed to detect, recognise and mitigate cyber threats. Teams would be able to assist with training, vulnerability assessments and other requested support. Cyber Rapid Response Teams would operate by pooling participating member states experts,” the manifestation of Permanent Structured Cooperation (PESCO) stated. Cyber conflicts and wars might target military-related objectives, but their effects are felt most by the civilians on both sides of the conflict. The fact that the hot spot of the war is Europe once again, the cybersecurity threats push the countries to take serious measures.
<urn:uuid:d3bc10b7-ea02-4f87-99cc-7cd6a3b15ba4>
CC-MAIN-2022-40
https://dataconomy.com/2022/08/the-laws-of-cyber-warfare-being-rewritten/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00089.warc.gz
en
0.953881
1,115
2.703125
3
Vulnerability scanning is the act of scanning applications, systems, devices or networks for potential security weaknesses. These weaknesses, or vulnerabilities, in software and systems are often exploited by cyber criminals to breach the networks of organizations and to launch attacks. Based on data collected by SecurityMetrics Forensic Investigators from last year’s breaches, it took an average of 166 days from the time an organization became vulnerable for an attacker to compromise the system. Once compromised, attackers had access to sensitive data for an average of 127 days.
Generally automated through tools, vulnerability scanning helps detect and classify weaknesses in an organization’s network and systems. These could be security vulnerabilities such as cross-site scripting, SQL injection or insecure server configuration. Vulnerability scanning typically targets IP addresses, scans for known vulnerabilities and misconfigurations, and audits IP address ranges to detect redundant usage of IP addresses or unauthorized services being exposed. By detecting these vulnerabilities and implementing proper countermeasures, you can reduce the attack surface that cybercriminals could exploit.
Why are vulnerability scans important?
It is impossible for an organization to have a fully secure network and for all its applications to be free of vulnerabilities forever. This is especially true considering the ongoing discovery of new vulnerabilities, the stream of software updates and patches, and increasingly sophisticated forms of cyber attack. Malicious actors are constantly evolving their tools, using automation, bots and advanced techniques to exploit vulnerabilities. These attack tools and methods are also becoming cheaper, easier to use and more accessible to criminals around the world. We are also seeing longer delays in the discovery of breaches. A FireEye report from 2020 showed the global median dwell time, from the start of a breach to the point of its identification, to be 56 days.
How does vulnerability scanning work?
Vulnerability scanners essentially operate on a series of “if-then” scenarios and can take up to 3 hours to complete a scan. These scenarios check for system settings that could lead to exploitation, such as an outdated operating system or an unpatched software version. A vulnerability scanner runs from the outside – from the vantage point of whoever is inspecting a particular attack surface. These tools catalog all the systems on a network into an inventory and identify the attributes of each device, including the operating system, software, ports and user accounts, among others. The scanner then checks each item in the inventory against a database of known vulnerabilities, including security weaknesses in services and ports, anomalies in packet construction, and potential paths to exploitable programs or scripts. The scanner software attempts to exploit each vulnerability that is discovered and flags those that need further action.
The scan can be conducted using either an authenticated or an unauthenticated approach. The unauthenticated approach mimics how a criminal would attempt a breach without logging into the network, while the authenticated approach involves a tester logging in as a real user and shows the vulnerabilities that could be exposed to someone who managed to breach the network and pose as a trusted user.
Penetration testing vs vulnerability scanning
It is important to distinguish vulnerability scanning from penetration testing.
Vulnerability scanning is a more automated high-level scan and looks for potential security holes whereas a penetration test is more exhaustive, involving a live examination of the network to try and exploit any and all weaknesses. Moreover, vulnerability scans only identify the vulnerabilities while a penetration test will go deeper to identify the root cause of the issue and even business logic vulnerabilities that an automated tool can skip over. Benefits of vulnerability scanning In an age where cyber attacks are on the rise, and the tools used to exploit security weaknesses in enterprises are becoming more advanced, vulnerability scanning helps organizations stay ahead of the curve. Vulnerability scanning provides numerous benefits as follows: Identifying vulnerabilities before they can be exploited Vulnerability scanning is a way for organizations to discover weaknesses and fix them before criminals get a chance to take advantage. Automating repeatable process With most vulnerability scanning tools, you only have to configure once. After that it runs as a repeatable process on a regular basis and can provide monitoring reports on an ongoing basis. Assessing overall security health of your systems By identifying all the potential security vulnerabilities, it is also a way to ascertain the overall effectiveness of security measures in your network. Too many flaws or holes can be a sign that it is time for a revamp of your security infrastructure. Preventing losses from data breaches Identifying and plugging holes in the security can help organizations avoid significant financial losses that may otherwise have resulted from data breaches. Regular vulnerability scans may also be used to receive pay-outs from cyber insurance plans. Meeting data protection requirements Vulnerability scanning can also go a long way in avoiding fines that may result from loss of customers’ personal data and in meeting regulatory requirements. For example, the international standard for information security, ISO 27001, and the PCI DSS (Payment Card Industry Data Security Standard) are standards which mandate organizations to take key steps in detecting vulnerabilities to protect personal data. CDNetworks offers a Vulnerability Scanning Service that can detect weaknesses in systems and applications to safeguard against breaches and attacks. It is a cloud-based solution that can generate reports detailing the state of application, host, and web security, along with recommended solutions to remedy known security vulnerabilities. In addition, 씨디네트웍스 Application Shield can protect your applications against vulnerabilities, including the dreaded Zero Day vulnerabilities, by sending the “efficient patch” web application firewall (WAF) rules to the entire platform synchronously.
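As a highly simplified illustration of the inventory-and-check process described in the “How does vulnerability scanning work?” section above, the following Python sketch probes a handful of well-known TCP ports on a single host. A real scanner goes much further (service fingerprinting, version checks against a vulnerability database, authenticated checks), the target address below is a documentation-range placeholder, and you should only scan systems you are authorized to test.

```python
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}


def scan_host(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = {}
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports[port] = service
    return open_ports


if __name__ == "__main__":
    # 192.0.2.10 is a documentation-range placeholder; scan only hosts you are
    # authorized to test.
    print(scan_host("192.0.2.10"))
```

A vulnerability scanner would then map each open service and version to entries in its known-vulnerability database rather than stopping at the open/closed result shown here.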
<urn:uuid:34ad0ca7-58be-4b38-a314-8a0d8538328c>
CC-MAIN-2022-40
https://www.cdnetworks.com/ko/cloud-security-blog/vulnerability-scanning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00089.warc.gz
en
0.939647
1,137
3.0625
3
Video on Demand or VoD Streaming, as the term suggests is a way of streaming video content from a platform as and when the viewer wants to. It essentially puts power back in the hands of the viewers instead of the broadcasters, as used to be the case in the early days of media and even during the first forms of online streaming. When you click a YouTube video embedded on a website or watch a movie on Netflix you are experiencing VoD streaming. As long as the video is hosted on these platforms and you can access them, all you have to do is click play. It is no surprise that more consumers are turning to VoD streaming for their video consumption needs. According to Statista, the number of users in the VoD segment of the market is expected to rise to almost 3 million by 2026. How Does VoD Work? VoD streaming falls under the category of over-the-top (OTT) streaming, which encompasses the streaming of all types of content – video, audio and VoIP calling – over the internet. Unlike traditional media broadcast and content distribution, this type of streaming circumvents cable and telecom networks and goes directly from the content creator or distributor directly to the end user or viewer. Providers of VoD first organize their content into video libraries. They then open up access to these libraries that viewers can make use of in different ways based on the model chosen by the business. The viewer chooses the video content they want and clicks play. In the case of VoD, the source of the video content are static files (as opposed to a feed from a camera in the case of live streaming). The video files are broken up into smaller pieces according to standardized rules and methods called video streaming protocols. These pieces are delivered to the end user for reassembling and viewing. Viewers can watch the VoD content from any location at any time. To ensure that the viewing experience is consistent for consumers distributed across the world, sometimes businesses also use a Content Delivery Network (CDN) for video streaming. This is a network of servers and data centers spread around the globe that brings the content closer to the end users. VoD vs Live Streaming, What’s the Difference? The main difference between VoD and Live Streaming comes down to how the content to be streamed is accessed. In VoD streaming, the content is accessed from libraries of previously recorded and stored files, while in live streaming, the content is accessed and consumed in real time. To use a very simple example, the live broadcast of a major sporting event, say the UEFA Champions League final, can be live streamed by authorized sources, such as the Facebook page of the host country’s top sports channel. Now the highlights of the same match, or even the full replay, can be recorded and saved as a video file on their YouTube channel for fans to view later if they missed the live event. Note that both VoD and Live Streaming are forms of video streaming and the viewer does not have to download the original files. Both operate based on how video streaming works – by breaking down the video file into small packets of data, each of which will be interpreted and played on the viewers device on the go. What are the Advantages of Video on Demand Streaming? With more consumers taking to streaming as a means of viewing their content, businesses also have new opportunities to capture their attention and grow an audience that can be monetized eventually. 
The choice of platforms, formats and monetization models comes down to the creators’ preferences and individual cases but there are some clear advantages to choosing VoD streaming. From a consumer’s point of view, the biggest advantage of VoD Streaming is the convenience it provides them. They can view the content as and when they like, from the comfort of their homes. Similarly, creators and businesses can also take advantage of VoD Streaming to keep viewers engaged with their content even if they cannot catch the live streams. Wide Variety of Content Closely tied to the benefit of convenience is the wide variety of content that users can access on demand. For example, if they are unable to catch a live stream or not completely satisfied with how the live stream was covered, they can explore different versions of replays and highlights of the same event, on demand if they are available from other sources. On the other hand, creators benefit from having the freedom to exercise their creativity and vision in their videos without having to comply with advertisers’ guidelines or restrictions imposed by the real-time nature of live streaming. For example, if a creator wants to add their own opinions and views to specific sections of the match footage, they can insert them, add graphics and make other edits before releasing the video for streaming on demand. Cost – Lower and Flexible Subscription Fees VoD streaming services also cost less for the consumer, in comparison to traditional methods of video consumption such as cable. It also does away with the added costs of equipment rental and other hidden fees. Cable subscription can cost users up to over $2000 a year, more than 10 times the yearly cost of subscription plans associated with VoD platforms. Most plans also come with the flexibility of monthly payments and the option to cancel the subscription any time. Instead of purchasing entire bundles which may also include programming that viewers don’t need, VoD streaming allows them more control to pick and choose what they want from a catalog. This also benefits businesses indirectly since they can rely on recurring revenue from viewers who genuinely want to engage with their content. Availability across Different Devices Streaming video content on demand also gives viewers the convenience of watching across their many devices. As long as they have an internet connection, they can access the video on their laptops, phones, tablets and even smart TVs. Some VoD streaming services like Netflix also come with the option of directly broadcasting to a smart TV through a browser. The Different Monetization Models for Video on Demand Streaming Businesses producing video for streaming on demand do have to monetize their content at the end of the day. The very nature of VoD streaming opens up a few different models on how they can accomplish this. Subscription Video on Demand (SVoD) Perhaps the most common form of monetization, Subscription Video on Demand allows creators to be paid by their viewers on a recurring basis. This could be through a weekly, monthly or annual fee which allows viewers access to the creators’ video libraries and watch videos at their convenience, with no ads or pop ups to interfere with their experience. The big streaming giants Netflix, Amazon Prime, Disney+ and Hulu and others of a similar nature operate on this SVoD model. This model of monetization is usually adopted by businesses with large video libraries. 
Ad-Based Video on Demand (AVoD) Another way that creators can monetize their VoD content is through advertising. This model works on the basis of allowing viewers to stream the video content for free but at the cost of having to watch ads that are allowed by the creator. The ads are what earn revenue for the business, and these can appear at the beginning of the video (Pre-roll), in the middle (Mid-roll) or at the very end (Post-roll). Since the revenue is usually proportional to the number of times the ads are seen or clicked, this type of model is usually adopted by creators with a large viewership or audience base. Some common video platforms that allow the AVoD model of monetization are YouTube and Daily Motion. Transaction Video on Demand (TVoD) A third model for monetizing VoD streaming is through a per-view basis. In this model, customers pay to access individual content assets in one-time transactions instead of the entire library, like checking out items off a shopping cart. For example, Amazon Prime offers the option of purchasing individual movies, episodes of TV shows or specific seasons. Think of it as renting a much anticipated movie like in the 90s but digitally. The TVoD model is best suited for when the videos are highly popular but the whole library is not too big. The access to the content through the TVoD model could be permanent or DTR (Download to Rent) in which case the content will be available for a specific period of time for a smaller fee. Other platforms that use the TVoD model are Google Play and iTunes. For businesses intending to take advantage of the popularity of VoD streaming and the benefits it provides, CDNetworks offers a Media Delivery solution that delivers high quality VoD content anywhere and on any device, along with live broadcast. It pulls content directly from the customer’s origin server or CDNetworks Cloud Storage and provides a host of benefits including bandwidth scalability, reduced traffic congestion and cost without compromising video stream performance. To explore these benefits and plenty more for your VoD needs, choose a free trial of Media Acceleration – VoD today.
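As an illustration of the segmentation described in the “How Does VoD Work?” section above, the following Python sketch builds a minimal HLS media playlist (.m3u8) for a pre-segmented VoD asset. HLS is only one common streaming protocol, the segment file names and durations are placeholders, and this is not necessarily how any platform mentioned above packages its content.

```python
def build_hls_playlist(segment_durations, segment_template="segment_{:03d}.ts"):
    """Build a minimal HLS media playlist (.m3u8) for a pre-segmented VoD asset.

    `segment_durations` holds the length in seconds of each pre-cut .ts segment.
    """
    target = int(max(segment_durations) + 0.999)   # target duration must cover the longest segment
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target}",
        "#EXT-X-PLAYLIST-TYPE:VOD",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for i, duration in enumerate(segment_durations):
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(segment_template.format(i))
    lines.append("#EXT-X-ENDLIST")                 # marks the playlist as complete, i.e. on demand
    return "\n".join(lines)


print(build_hls_playlist([6.0, 6.0, 6.0, 4.2]))
```

The player downloads this playlist, then fetches and reassembles the listed segments one by one, which is what allows the same recorded asset to be streamed on demand to any device without a full download.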
<urn:uuid:09f16704-da91-4e8d-84b6-c56d0f24edd9>
CC-MAIN-2022-40
https://www.cdnetworks.com/media-delivery-blog/what-is-video-on-demand-streaming/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00089.warc.gz
en
0.94357
1,836
2.6875
3
When most people think about fiber optic cables, they often think of the Internet. Sure, these cables are essential for supporting high-speed, 5G Internet connections, but fiber optic cables are also a necessary part of several other industries that require the stable, lightning-quick abilities of fiber optic cables. The everyday operations of the government, data storage, telecommunications, and medical industries all benefit significantly from the use of fiber optic cables. Learn how multi-purpose fiber optic cables support these industries. Fiber optic cables support the telecommunications industry by supplying companies with their high-speed, high-bandwidth, low-loss connections. Recent global events called for a dramatic, unanticipated, and widespread increase in bandwidth demands. Employees working from home and video conferencing to stay connected could not have accomplished this without well-established fiber infrastructure. Without the clear connection provided by fiber optic cables, these telecommunications would be near impossible to sustain. Fiber optic cables support the medical industry in a variety of ways. First, fiber optic cables can help power the various technologies used in hospitals to facilitate the sharing of information and emergency responses. For instance, fiber optic cables can power nurse call systems, which enable nurses to respond immediately to patients who need care. Additionally, fiber optic cables can also be used to create flexible strands that can be inserted into areas of the body–including lungs and blood vessels–to enable doctors to do an internal exam of a patient without invasive surgery. Government agencies have started using fiber optic cables for various purposes, ranging from use in sonar, aircraft, and other defense vehicles. Of course, fiber optic cables are also essential for providing a reliable Internet connection for any government agencies which rely on computers and technology for saving and protecting valuable data. The data storage industry has grown exponentially in the past decade. This may be due to the equally substantial rise in fiber optic cables, which are extremely useful for data transmission and storage. Fiber optic cables support the data storage industry by making the transmission and storage of data much more quick and efficient than it was in the past. Get in Touch with FiberPlus FiberPlus has been providing data communication solutions for 28 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include: - Structured Cabling (Fiber Optic, Copper and Coax for inside and outside plant networks) - Electronic Security Systems (Access Control & CCTV Solutions) - Wireless Access Point installations - Public Safety DAS – Emergency Call Stations - Audio/Video Services (Intercoms and Display Monitors) - Support Services - Specialty Systems - Design/Build Services FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry. Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at email@example.com, or visit our contact page. Our offices are located in the Washington, DC metro area and Richmond, VA. 
In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999.
<urn:uuid:c5b1a96d-8449-42ea-9f1b-fe4a175616e5>
CC-MAIN-2022-40
https://www.fiberplusinc.com/services-offered/how-fiber-optic-cables-support-industries/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00290.warc.gz
en
0.923178
688
2.625
3
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Would you trust an artificial intelligence algorithm that works eerily well, making accurate decisions 99.9 percent of the time, but is a mysterious black box? Every system fails every now and then, and when it does, we want explanations, especially when human lives are at stake. And a system that can’t be explained can’t be trusted. That is one of the problems the AI community faces as their creations become smarter and more capable of tackling complicated and critical tasks. In the past few years, explainable artificial intelligence has become a growing field of interest. Scientists and developers are deploying deep learning algorithms in sensitive fields such as medical imaging analysis and self-driving cars. There is concern, however, about how these AI operate. Investigating the inner-workings of deep neural networks is very difficult, and their engineers often can’t determine what are the key factors that contribute to their output. For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because it found malignant patterns in the mole or is it because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers? Researchers have developed various interpretability techniques that help investigate decisions made by various machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging. Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions by themselves as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently published in the arXiv preprint server, expands on this idea. What’s wrong with current explainable AI methods? Classic symbolic AI systems are based on manual rules created by developers. No matter how large and complex they grow, their developers can follow their behavior line by line and investigate errors down to the machine instruction where they occurred. In contrast, machine learning algorithms develop their behavior by comparing training examples and creating statistical models. As a result, their decision-making logic is often ambiguous even to their developers. Machine learning’s interpretability problem is both well-known and well-researched. In the past few years, it has drawn interest from esteemed academic institutions and DARPA, the research arm of the Department of Defense. Efforts in the field split into two categories in general: global explanations and local explanations. Global explanation techniques are focused on finding general interpretations of how a machine learning model works, such as which features of its input data it deems more relevant to its decisions. Local explanation techniques are focused on determining which parts of a particular input are relevant to the decision the AI model makes. For instance, they might produce saliency maps of the parts of an image that have contributed to a specific decision. All these techniques “have flaws, and there is confusion regarding how to properly interpret an interpretation,” Elton writes. Elton also challenges another popular belief about deep learning. 
Many scientists believe that deep neural networks extract high-level features and rules from their underlying problem domain. This means that, for instance, when you train a convolutional neural network on many labeled images, it will tune its parameters to detect various features shared between them. This is true, depending on what you mean by “features.” There’s a body of research that shows neural networks do in fact learn recurring patterns in images and other data types. At the same time, there’s plenty of evidence that deep learning algorithms do not learn the general features of their training examples, which is why they are rigidly limited to their narrow domains. “Actually, deep neural networks are ‘dumb’- any regularities that they appear to have captured internally are solely due to the data that was fed to them, rather than a self-directed ‘regularity extraction’ process,” Elton writes. Citing a paper published in the peer-reviewed scientific magazine Neuron, Elton posits that, in fact, deep neural networks “function through the interpolation of data points, rather than extrapolation.” Some research is focused on developing “interpretable” AI models to replace current black boxes. These models make their reasoning logic visible and transparent to developers. In many cases, especially in deep learning, swapping an existing model for an interpretable one results in an accuracy tradeoff. This would be a self-defeating goal because we opt for more complex models because they provide higher accuracy in the first place. “Attempts to compress deep neural networks into a simpler interpretable models with equivalent accuracy typically fail when working with complex real world data such as images or human language,” Elton notes. Your brain is a black box One of Elton’s main arguments is about adopting a different view of understanding AI decision. Most efforts focus on breaking open the “AI black box” and figuring out how it works at a very low and technical level. But when it comes to the human brain, the ultimate destination of AI research, we’ve never had such reservations. “The human brain also appears to be an overfit ‘black box’ which performs interpolation, which means that how we understand brain function also needs to change,” he writes. “If evolution settled on a model (the brain) which is uninterpretable, then we expect advanced AIs to also be of that type.” What this means is that when it comes to understanding human decision, we seldom investigate neuron activations. There’s a lot of research in neuroscience that helps us better understands the workings of the brain, but for millennia, we’ve relied on other mechanisms to interpret human behavior. “Interestingly, although the human brain is a ‘black box’, we are able to trust each other. Part of this trust comes from our ability to ‘explain’ our decision making in terms which make sense to us,” Elton writes. “Crucially, for trust to occur we must believe that a person is not being deliberately deceptive, and that their verbal explanations actually maps onto the processes used in their brain to arrive at their decisions.” One day, science might enable us to explain human decisions at the neuron activation level. But for the moment, most of us rely on understandable, verbal explanations of our decisions and the mechanisms we have to establish trust between each other. The interpretation of deep learning, however, is focused on investigating activations and parameter weights instead of high-level, understandable explanations. 
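For readers who want to see what the “local explanation” techniques discussed earlier look like in practice, here is a small PyTorch sketch that computes a gradient-based saliency map – exactly the kind of low-level, activation-and-gradient explanation the article argues falls short of a human-style explanation, not the self-explaining approach Elton proposes. It assumes a recent torch and torchvision are installed and uses an untrained ResNet-18 plus a random tensor in place of a real, preprocessed image.

```python
import torch
import torchvision.models as models

# Gradient-based saliency: which input pixels most influence the top class score?
model = models.resnet18(weights=None)    # untrained stand-in; a real analysis would load weights
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a preprocessed image
scores = model(image)                     # forward pass: one score per class
top_class = scores.argmax(dim=1).item()   # index of the highest-scoring class
scores[0, top_class].backward()           # back-propagate that single score to the input

# Saliency = largest absolute gradient across the colour channels, per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
print(saliency.shape)
```

The result is a heat map over pixels, which is informative for a developer but, as the article argues, still far from the kind of verbal, high-level explanation humans use to establish trust.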
“As we try to accurately explain the details of how a deep neural network interpolates, we move further from what may be considered relevant to the user,” Elton writes. Self-explainable artificial intelligence Based on the trust and explanation model that exists between humans, Elton calls for “self-explaining AI” that, like a human, can explain its decision. An explainable AI yields two pieces of information: its decision and the explanation of that decision. This is an idea that has been proposed and explored before. However, what Elton proposes is self-explaining AI that still maintains its complexity (e.g., deep neural networks with many layers) and does not sacrifice its accuracy for the sake of explainability. In the paper, Elton suggests how relevant causal information can be extracted from a neural network. While the details are a bit technical, what the technique basically does is extract meaningful and present information from the neural network’s layers while avoiding spurious correlations. His method builds on current self-explaining AI systems developed by other researchers and verifies whether explanations and predictions in their neural networks correspond. In his paper, Elton also discusses the need to specify the limits of AI algorithms. Neural networks tend to provide an output value for any input they receive. Self-explainable AI models should “send an alert” when results fall “outside the model’s applicability domain,” Elton says. “Applicability domain analysis can be framed as a simple form of AI self-awareness, which is thought by some to be an important component for AI safety in advanced AIs.” Self-explainable AI models should provide confidence levels for both their output and their explanation. Applicability and domain analysis is especially important “for AI systems where robustness and trust are important, so that systems can alert their user if they are asked work outside their domain of applicability,” Elton concludes. An obvious example would be health care, where errors can result in irreparable damage to health. But there are plenty of other areas such as banking, loans, recruitment, and criminal justice, where we need to know the limits and boundaries of our AI systems. Much of this is still hypothetical, and Elton provides little in terms of implementation details, but it is a nice direction to follow as the explainable AI landscape develops.
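To make the applicability-domain idea above a little more concrete, here is a minimal sketch (not Elton's method, and not production code) of how a system might flag inputs that fall far outside the data it was trained on and attach a rough confidence score to the alert. The distance-based heuristic and the threshold are assumptions chosen purely for illustration.

```python
import numpy as np

def applicability_check(x, X_train, k=5, threshold=2.0):
    """Flag inputs that fall outside the model's applicability domain.

    A query is treated as out-of-domain when its mean distance to the k
    nearest training points is much larger than is typical within the
    training data itself. This is a crude proxy, not a rigorous method.
    """
    # Distances from the query to every training point
    d_query = np.linalg.norm(X_train - x, axis=1)
    query_score = np.sort(d_query)[:k].mean()

    # Typical k-nearest-neighbor distance inside the training set
    typical = []
    for i in range(len(X_train)):
        d = np.linalg.norm(X_train - X_train[i], axis=1)
        typical.append(np.sort(d)[1:k + 1].mean())  # skip distance to itself
    typical = np.array(typical)

    # Confidence shrinks as the query drifts away from the training data
    limit = typical.mean() + threshold * typical.std()
    in_domain = bool(query_score <= limit)
    confidence = float(np.clip(limit / (query_score + 1e-9), 0.0, 1.0))
    return in_domain, confidence

# Example: 2-D training data clustered near the origin
X_train = np.random.randn(200, 2)
print(applicability_check(np.array([0.1, -0.2]), X_train))  # likely in-domain
print(applicability_check(np.array([8.0, 9.0]), X_train))   # likely flagged
```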
About 40 percent of households across the globe now contain at least one IoT device, according to Avast. In North America, that number is almost double, at 66 percent, bringing with it an associated growth in cybersecurity risks. The findings have been published in a new research paper “All Things Considered: An Analysis of IoT Devices on Home Networks”. The research is the largest global study to date examining the state of IoT devices. Avast scanned 83 million IoT devices in 16 million homes worldwide to understand the distribution and security profile of IoT devices by type and manufacturer. The findings were then validated and analyzed by research teams at Avast and Stanford University. “The security community has long discussed the problems associated with emerging IoT devices,” said Zakir Durumeric, assistant professor of computer science at Stanford University. “Unfortunately, these devices have remained hidden behind home routers and we’ve had little large-scale data on the types of devices deployed in actual homes. This data helps us shed light on the global emergence of IoT and types of the security problems present in the devices real users own.” The research reveals a complex picture of the IoT ecosystem and subsequent cybersecurity challenges in homes across the world. Key findings include: - North America has the highest density of IoT devices of any region, with 66% of homes possessing at least one IoT device, compared to the global average of 40%. - Even with over 14,000 IoT manufacturers worldwide, 94% of all IoT devices are manufactured by just 100 vendors. - Obsolete protocols like FTP and Telnet are still used by millions of devices; over 7% of all IoT devices still use these protocols, making them especially vulnerable. Distribution of IoT vendors around the globe The paper further explored the distribution of global IoT vendors. While there is a very long tail of over 14,000 global IoT vendors, market dominance is limited to only a few. “A key finding of this paper is that 94% of the home IoT devices were made by fewer than 100 vendors, and half are made by just ten vendors,” says Rajarshi Gupta, Head of AI at Avast. “This puts these manufacturers in a unique position to ensure that consumers have access to devices with strong privacy and security by design.” By hardening these devices against unwanted access, manufacturers can help prevent bad actors from compromising these devices for spying or denial of service attacks. Significant security risks not being addressed As part of the study, Avast identified that a significant number of devices use obsolete protocols such as Telnet and FTP. Seven percent of all IoT devices support one of these protocols. This is also the case for 15% of home routers, which act as a gateway into the home network. This is a serious issue as when routers have weak credentials they can open up other devices and potentially entire homes to an attack. There is little reason for IoT devices to support Telnet in 2019. Yet, the research shows that surveillance devices and routers consistently support the protocol. Surveillance devices have the weakest Telnet profile, along with routers and printers. This aligns with historical evidence such as the role of Telnet in the Mirai botnet attacks that suggests these kinds of devices are both numerous and easy to compromise.
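As a rough illustration of the kind of exposure described above, the sketch below checks whether a device on your own network still answers on the standard Telnet and FTP ports. This is not Avast's scanning methodology, just a minimal Python example; the device address is hypothetical, and you should only probe hardware you own.

```python
import socket

# Ports for the legacy protocols discussed above
LEGACY_PORTS = {21: "FTP", 23: "Telnet"}

def check_legacy_services(host, timeout=1.0):
    """Report which legacy service ports accept connections on a host."""
    exposed = []
    for port, name in LEGACY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port is open
                exposed.append(name)
    return exposed

if __name__ == "__main__":
    device = "192.168.1.50"  # hypothetical camera or router on your LAN
    found = check_legacy_services(device)
    if found:
        print(f"{device} still exposes: {', '.join(found)}")
    else:
        print(f"{device} does not answer on FTP/Telnet ports")
```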
Cyber Resilience Importance! by Anastasios Arampatzis Digital transformation enabled organizations to disrupt their markets, improve productivity and make informed decisions, while reducing costs. The proliferation of emerging technologies, such as multi-cloud platforms, Internet of Things (IoT), containerization, microservices, and big data has given society the chance to become more connected and economies more prosperous. However, these benefits come at a price. These new technologies present new risks and vulnerabilities, expanding the corporate attack surface. The security of digital systems and information is crucial to every enterprise or organization. Threats like data breaches or cybersecurity incidents can cause companies severe damage. These attacks may attempt to destroy, expose, or obtain unauthorized access to corporate systems and sensitive data. The confidentiality, integrity and availability of corporate digital assets are at risk. This alarming increase in cyber-attacks should push companies to arm themselves to manage such risks. Focusing on traditional cybersecurity measures and protections is no longer adequate to protect businesses from the spate of sophisticated attacks. It is time for a different approach. It is time for businesses to have cyber resilience. What is cyber resilience? Cyber resilience is the ability of an organization to prepare for, respond to, and recover from cyber-attacks. An organization has cyber resilience if it can defend itself against these attacks, limit the effects of a security incident, and ensure business continuity during and after these attacks. According to Presidential Policy Directive 21 (PPD-21), "Critical Infrastructure Security and Resilience", signed by former US President Barack Obama in 2013, the word resilience means "the ability to prepare for and adapt to changing conditions and withstand and recover rapidly from disruptions." We must be careful not to confuse resilience with recovery. Recovery means returning to a previous healthy state. Resilience, by contrast, means limiting the impact of a security breach, keeping operations running despite the threat, and continuously planning strategies and practices to protect operations once criminals attack. It is also important to highlight the difference between cybersecurity and cyber resilience. While cybersecurity's main goal is to protect corporate digital resources (i.e. systems and data), cyber resilience focuses on making sure the business keeps delivering. Resilience is about keeping business goals intact rather than just the IT systems. The four pillars of a successful cyber resilience strategy A successful cyber resilience strategy should be based on the following four pillars: Manage and protect Develop the ability to identify, assess, and manage risks associated with cyber-enabled systems, including those across the corporate supply chain. In addition, protect corporate digital assets from cyber-attacks, system failures and unauthorized access. Identify and detect Use continuous monitoring to detect anomalies and potential data breaches and security incidents before they develop and cause significant damage. Respond and recover Implement an incident response program to ensure business continuity even in the event of a cyber-attack and get back to business as usual as quickly and efficiently as possible. Govern and assure Ensure the cyber resilience program is overseen from the top of your organization and built into business-as-usual processes.
Align the program with wider business goals. Components of cyber resilience strategy Cyber resilience aims to secure the whole organization dynamically. It is a preventive measure to counter human error, vulnerabilities in software and hardware, and misconfiguration. The goal of cyber resilience is to protect the organization in a holistic manner, while understanding that there will be insecure parts, no matter how robust security controls are. To do so, a cyber resilience strategy should comprise of the following components: The more technology advances, the more sophisticated cyber criminals become. The organization should plan steps to defend itself against all sorts of threats. In the event of a security incident, the organization must be able to return to regular operations as quick as possible. This means to have infrastructure redundancies and data backups across different locations to cope with any type of emergency, a natural disaster, or a cyber-attack, whenever and wherever this happens. It is also recommended that the organization runs drills to ensure that everyone knows what their role is in the event of a cyber-attack. This will strengthen the overall organization’s cyber resilience. While planning is important, adaptability is paramount. Organizations must be able to evolve and adapt to new tactics that cyber criminals come up with. It is recommended to invest in continuous monitoring solutions for the security team to recognize security issues in real-time and take immediate action. An organization’s durability is its capability to effectively operate regular and routine business again after a security breach. An organization can improve its cyber resilience with system improvements, regular reports, and updates. Why is cyber resilience important? Cyber resilience is important because traditional security measures are no longer enough to ensure adequate information security, data security, and network security. CISOs and IT security teams are aware that attackers will eventually gain unauthorized access to their organization. This should not serve as a pessimistic finding, rather as an everyday alarm to be able to respond to and recover from security breaches as well as to prevent them. The need for cyber resiliency was summed up by Lt. Gen. Ted F. Bowlds, former Commander, Electronic Systems Center, USAF: You are going to be attacked; your computers are going to be attacked, and the question is, how do you fight through the attack? How do you maintain your operations? The benefits of cyber resilience Successful cyber resilience strategies introduce many benefits for all organizations, from small or medium businesses to large corporations. - Enhanced security Cyber resilience does not only help with responding to and surviving an attack. It can also help an organization develop strategies to improve IT governance, foster safety and security across critical assets, improve data protection efforts, avoid the impacts of natural disasters, and reduce human error. - Reduced financial loss Regardless of how good your security is, the fact is no one is immune to cyberattacks or misconfiguration. The average cost of a data breach has skyrocketed to $4.77 million globally, enough to make many small to medium size businesses go bankrupt. In addition to financial costs, the reputational impact of data breaches is increasing due to the introduction of GDPR and other data protection and privacy laws in numerous countries and stringent data breach notification requirements. 
- Regulatory and legal compliance Meeting the security requirements imposed by data protection and critical infrastructure regulation is a competitive advantage and a valuable benefit of integrating cyber resilience into an organization. The EU Network and Information Systems (NIS) security directive requires every operator of critical infrastructure (e.g. finance, healthcare, transportation, water and electric grid, and aviation) "to take appropriate security measures and to notify serious incidents to the relevant national authority." In addition, the GDPR requires strict protections to safeguard data privacy and imposes huge fines in case of violations. Being compliant with regulations instills a sense of trust in customers. - Improved security culture Security is everyone's responsibility. When people are inspired and motivated to take security seriously in their organization, sensitive information and physical assets are at less risk. The organization should foster security hygiene behaviors to reduce human errors that expose sensitive data. - Protection from reputational damage Poor cyber resilience can irreversibly damage your organization's reputation. Cyber resilience protects an organization from public scrutiny, fines from regulators, and an abrupt reduction in sales, or worse, loss of business. - Build trust across the business ecosystem It is essential to have cyber resilience to maintain trust with suppliers and customers. Trust takes years to build, but it can be destroyed in seconds. If an organization has an ineffective approach to cyber resilience, it can potentially experience severe damage, including restitution to suppliers and customers whose confidentiality has been breached. - Improved security team One of the underemphasized benefits of cyber resilience is that it improves the daily operations of security teams. An organization with a hands-on security team not only improves its ability to respond to threats, but also helps to ensure day-to-day operations run smoothly. How ADACOM helps ADACOM has a well-established approach to information resilience, supported by a robust implementation framework. We follow a holistic approach towards protecting all types of sensitive information, in all phases of the information lifecycle throughout all business verticals, regardless of the underlying business and technology ecosystem. Our aim is to maximize the resilience of critical business information and keep information trustworthy even when the organization is under stress. ADACOM's information resilience approach has the following differentiation factors: - Holistic approach to digital & information security risk management, including compliance - Create value by integrating digital and information resilience with business processes - Use of methodologies which can be easily applied - Customized deliverables which can be easily adopted You may learn more by contacting our experts.
Everything you need to know about hosting and cloud computing Cloud computing has allowed many businesses to profit from its flexibility, power, and its relatively low cost of entry. It has been used by companies to outsource some or all of their IT applications and infrastructure since as early as the 1990’s, with Salesforce being the first major company to offer SaaS over the internet in 1999. Today, companies can take advantage of a wide variety of software and infrastructure solutions, from the ability to use dedicated cloud based hardware, to ready-made shared solutions like Dropbox. Cloud computing is a model for the access of shared, remote computing resources via the network. Cloud computing services are defined by several key characteristics: They can be scaled up or down according to a user’s needs. They are available on a wide variety of devices and in many locations (tablets, PCs, mobile devices, workstations) They can be provisioned in real time without human interaction and provide a means to automatically optimise and report the use of bandwidth, storage, and other computing capabilities. They can be deployed in the public cloud, meaning the infrastructure is open for use by the general public, or in hybrid private models, which provide access to private resources in addition to shared resources. Choosing a cloud computing model. Cloud computing comprises several service models. SaaS, PaaS, and IaaS all offer relative benefits and drawbacks. Decide which is right for your company by using the information below. SaaS, or software as a service, is a means of distributing applications to users via a subscription model. It essentially provides on demand software to companies through the cloud. This model can be extremely beneficial for companies that require a centralised database that many users can access. It also often provides increased flexibility, improved performance, and reduced costs. Some common SaaS applications include: • Customer Relationship Management • Email Client • Word Processing • CAD Software • Human Resources Management Platform as a service, or PaaS, offers a remote computing platform and solution stack to companies. This provides increased power over SaaS models without the need to buy, manage, or maintain the underlying infrastructure. In PaaS, the user develops their own applications with tools and libraries offered by the provider. This makes it useful for companies that need to create and deploy custom applications, but don’t have the manpower or need for full infrastructure control. • Application Design & Development • Custom Application Deployment • Web Service Integration • Database Integration • Remote Storage Infrastructure as a service is the most powerful and flexible of the cloud computing models. It provides physical computers or virtual machines to users, allowing them to install operating systems and applications as needed. This means that the potential uses for the services are practically limitless. Although IaaS gives users the greatest flexibility in developing and deploying their own applications, it also requires additional effort and expertise to operate so should only be used if your company’s needs can’t be met by an SaaS or PaaS solution. What are the benefits of cloud computing? Cloud computing is widely used by businesses ranging from international corporations to local companies. The reasons for this popularity are numerous. 
Cloud computing offers many benefits, including reduced cost, increased company focus, flexibility, and greater reliability. Below you’ll find some of the top reasons your company should consider moving to the cloud. Cloud computing is usually more reliable. Downtime can be extremely costly, particularly for companies that rely on IT infrastructure for critical business functions. The costs of downtime can range from $90,000 per hour for companies in the media sector to $6.48 Million per hour for large online brokerages. That’s why it’s so important for your company to find an IT solution that minimises downtime. One of the most appealing aspects of on-site solutions is that they put reliability entirely within your company’s control. Although this has the potential to provide more reliability than cloud solutions, in practice that is rarely the case. Most cloud computing providers have a large staff of IT experts dedicated to ensuring a certain amount of uptime. Most providers also offer generous uptime guarantees. A cloud computing provider with a 99.95% uptime guarantee will have at most 21.6 minutes of downtime each month, whereas half of all fortune 500 companies experience at least 1.6 hours of downtime each week. . Cloud computing is often more cost effective. In some cases, on-site computing may be more cost effective. This can be true for companies that already have existing IT infrastructure, are focused on IT, or need on-site solutions for other reasons. However, for most companies, cloud computing provides a more cost-effective solution. Many companies are attracted to cloud computing because of its relatively low cost of entry. Servers can cost thousands of dollars and the environment controlled facilities they are housed in can be even more expensive. Cloud computing solutions require little to no upfront investment and may also be more cost-effective over the long term for many companies. On-site solutions require additional staff, maintenance, and recurring hardware replacement costs. Because cloud solutions can offer economies of scale, they can often provide a lower price for similar service. Cloud computing offers increased flexibility. One of the greatest advantages of cloud computing is its ability to quickly scale with a company. As your business needs more resources or has to scale back, it can adjust cloud computing resources as necessary. This gives you the flexibility you need to quickly grow without tying resources down to applications that may not be necessary in the future. This is a major advantage over on-site solutions as they require significant investment to scale up without a means for quickly reducing that expenditure as needed. This article is extracted from our Complete Guide On Cloud Computing. To continue reading download the guide. Mell, Peter. Grance, Timothy. The NIST Definition of Cloud Computing. National Institute of Standards and Technology. Retrieved May 2014. Martinez, Henry. How Much Does Downtime Really Cost? Information Management. Retrieved May 2014. How Much Does Website Downtime Really Cost? Alerta. Retrieved May 2014. Finnie, Matthew. How Reliable is the Cloud? Huffington Post UK. Retrieved May 2014.
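As a quick sanity check of the uptime arithmetic quoted above, the short snippet below converts an SLA percentage into the maximum downtime it permits. The figures are simply the ones mentioned in the article, used here for illustration.

```python
# Minutes in a 30-day month and in a week
MONTH_MINUTES = 30 * 24 * 60   # 43,200
WEEK_MINUTES = 7 * 24 * 60     # 10,080

def max_downtime_per_month(uptime_guarantee):
    """Maximum downtime (minutes per month) allowed by an uptime SLA."""
    return MONTH_MINUTES * (1 - uptime_guarantee)

print(max_downtime_per_month(0.9995))  # 21.6 minutes, as quoted above
print(max_downtime_per_month(0.999))   # 43.2 minutes

# 1.6 hours of weekly downtime corresponds to roughly 99.05% uptime
print(1 - (1.6 * 60) / WEEK_MINUTES)   # ~0.9905
```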
Microservices architecture gives developers a flexible, scalable, agile solution for building high-performing apps that quickly deploy. It has been widely adopted because of its game-changing benefits. However, developers must overcome some challenges and risks to implement solutions with microservices effectively. Microservices are more complex than monolithic builds. DevOps teams have to manage multiple languages and frameworks, and new service dependencies are often incompatible with their existing tools. Data consistency is another challenge since each service has its own database and transaction management protocols. Microservices also have increased security risks due to their modular nature. The sheer volume of data exchanged between containers exposes more of a system to the network, which provides a broader attack surface. Additionally, the highly-replicable nature of microservice containers allows weak spots to spread quickly, often across applications that contain duplicate code. Challenges and Risks of Microservices Architecture The competitive and fast-paced nature of software development can lead teams to adopt solutions that may end up causing more problems than they fix. Unless developers understand how to overcome the challenges and mitigate the risks microservices pose, they may find themselves mired in a bog of complexity that brings progress to a screeching halt. Some of the issues developers face with microservices include: Because each service is deployed independently, coordinating operations across a path of services is often difficult. Additionally, multiple services may be involved in fulfilling a request, so root cause analysis can be impossible with traditional monitoring tools. Scaling and optimizing independently hosted and deployed services requires complex coordination of different components across separate servers. Although scalability is one of the frequently cited benefits of microservices architecture, it can be challenging to achieve. Fault tolerance must be optimized for each microservice. One component failure can affect the entire system, but the optimal procedures and practices will differ for individual services. Limited Environmental Control There is a bare minimum of centralized management in a microservices architecture, making environmental control a challenge. While one set of procedures applies to a monolithic environment, a microservice environment will need different ones for each environment. Testing the individual components of microservice architecture along with their interdependencies exacerbates the complexity of environmental control. Because testing needs to be incorporated at all phases of the software development lifecycle, this obstacle must be overcome early. Microservices are deployed across various cloud environments and communicate with each other through different infrastructure layers. This results in less control and increased obscurity of components. Not only does this microservice structure increase security vulnerabilities, but it also makes testing for vulnerabilities and network security more difficult. The distributed framework elevates the challenge of securing data because it’s technically difficult to control access and administer secured authorization to individual services. Data’s confidentiality, privacy, and integrity are also harder to maintain with a distributed framework. 
Data in microservices is continually moved, interacted with, and stored in different places for different purposes, which exposes more entry points for bad actors. Independent microservices are standalone applications that have to communicate with each other. If the infrastructure layers that facilitate resource sharing across services are poorly configured, the result is a suboptimal application with a slow response time. Open Source and Third-Party Code Vulnerabilities Open source software is frequently a component in microservices. In addition to the complexities inherent in segmentation, microservices have vulnerabilities related to third-party and open source code. The most significant issues to consider with open source code are security vulnerabilities and licensing liabilities. Open Source Code Vulnerabilities Open source code vulnerabilities frequently have security susceptibilities that are publicly known. Although patches are usually quickly released to fix these risks, in practice, almost 80% of libraries are never updated. As a result, outdated open source software can cause serious performance issues and security risks. The most common licensing liabilities associated with open source code are infringement and restriction. There are over 200 different types of open source software licenses, and they often conflict with each other. Developers who use open code in their builds must abide by these licenses. Some licenses, such as the copyleft license, require that all modified and extended program versions be free. Because open source code isn’t created under the same types of control as proprietary software, infringement can be an issue. Anyone can add infringed code to open source software and pass it along, which can violate intellectual property rights. Best Practices for Securing Microservices Architecture The DevSecOps team needs to integrate security and risk-mitigation measures into their practices at every stage of open source development. Developers can address application security challenges by following best practices for designing security into microservices architecture, including: Adopt the Principle of Least Privilege (POLP) Many microservices use container technology based on images that may contain vulnerabilities. Teams should regularly scan images to ensure they don’t contain any vulnerabilities. Additionally, to address the internal and external threat surfaces of containers, developers can implement POLP such as: - Only grant the minimum permissions required by each user or service role. - Eliminate privileged access to run services. - Restrict use of available resources, such as restricting container access to the host operating system. - Never keep confidential information on the container, Use Access and Identity Tokens Effective authentication and authorization are critical to providing secure access to microservice. This can include backend services, middleware code, and user interface. OpenID and OAuth 2.0 are two authentication systems that generate user tokens for secure access across distributed systems. Create an API Gateway Creating one entry point for all clients and systems across microservices will help overcome the cloud security challenges associated with the use of multiple technologies, interfaces, and protocols. For example, if all systems are connected to an API gateway, it can be used to filter requests to sensitive resources and perform authentication and authorization. 
An API gateway can also provide security features such as: - SSL termination - Protocol conversion - Caching requests Create a Software Bill of Materials (SBOM) An SBOM is a standardized list of all components used in a software build, including component names and versions, license information, data regarding sources, and other identifying information. An SBOM provides visibility into open source vulnerabilities and licensing requirements. A software composition analysis will keep all members of an organization updated on the latest versions of third-party software in use, any patches that need to be implemented, and relevant licenses. An SBOM should include data fields, automation support, and the practices and processes an organization uses. How an SBOM Helps Mitigate Risks A build is only as secure as the components it's built on. An SBOM is a cornerstone of managing risks in the software supply chain and helps organizations: - Speed up the process of identifying and remediating vulnerabilities - Comply with government regulations - Meet attribution requirements of open source libraries - Eliminate redundant work for developers - Easily identify broken components - Comply with audit requests While the segmentation of microservices is responsible for some of the challenges in using them, it can be an advantage when it comes to security. Because each service is an isolated part of the application, each can be implemented, modified, maintained, extended, and updated without affecting other microservices. Other infrastructure layers, such as the database, should also be isolated. Microservices should only have access to their own data so that a malicious actor who gets access to one can't launch a lateral attack. Automated Analysis and Testing of Microservices Automating code analysis and portfolio management will make the process much more efficient. Kiuwan offers two tools for end-to-end application security. These tools help teams identify vulnerabilities in application code security. Code Security (SAST) automatically scans code to provide security at every stage. It checks for source code defects — including flaws, faults, bugs, or potential improvements — against standard software quality characteristics. Tailored reports based on industry-standard security ratings such as OWASP and CWE allow businesses to understand their risks. Code Security generates automatic action plans to mitigate risk and manage technical debt. Insights (SCA) helps organizations manage open source risks with automated policies throughout the software development lifecycle. Insights automatically generates a complete and accurate SBOM of the components used in builds and applications. It eliminates the time-consuming process of manually creating a software inventory. Insights lets developers know when they are impacted by a security vulnerability alert or licensing conflict. Developers can then address security risks as they apply to applications. The unused features in open source deployments can cause dependency issues. SCA's code quality analysis identifies unused code so it can be removed, reducing the risk of dependency problems.
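To give a feel for the kind of inventory an SBOM captures, here is a minimal, standard-library-only Python sketch that lists the packages installed in the current environment with their versions and declared licenses. This is not how Kiuwan Insights works, and it is far from a complete SBOM, which would follow a format such as SPDX or CycloneDX and cover every component of a build, but it illustrates the basic idea of an automatically generated component list.

```python
import json
from datetime import datetime, timezone
from importlib.metadata import distributions

def build_simple_sbom():
    """Collect installed Python packages into a minimal SBOM-like inventory."""
    components = []
    for dist in distributions():
        components.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
            "license": dist.metadata.get("License", "unknown"),
        })
    return {
        "generated": datetime.now(timezone.utc).isoformat(),
        "component_count": len(components),
        "components": sorted(components, key=lambda c: (c["name"] or "").lower()),
    }

if __name__ == "__main__":
    # Print only a preview; real tooling would persist and diff this inventory
    print(json.dumps(build_simple_sbom(), indent=2)[:500])
```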
Kiuwan's solutions integrate with an organization's existing development systems and facilitate communication between teams. Their powerful, functional dashboards provide important information to stakeholders at a glance. Reach out today for more information about how Kiuwan's customized plans can help.
Social Media Spying By Cyber Criminals Is On The Rise You know those random friend requests that you get from people you’ve never met? They may not be entirely accidental or random at all — in fact, many cybercriminals deploy fake profiles in hopes of a friend request slipping through the cracks … and it’s working. In addition to social media spying, hackers regularly employ social engineering, malware-based phishing, old hardware hacking, and voice phishing (aka “Vishing”) to compromise user’s privacy and steal his or her information. Some of these methods may be unscrupulous, but they aren’t technically illegal — which makes it even more important for users to stay vigilant in their online activity and security efforts. Top Five Hacker Favorites: Data Collection Methods That Can Make Your Information Vulnerable Keep your information secure and mitigate your risk of a stolen identity by being aware of these most common methods — and take action when necessary to protect yourself against them: Social Media Spying With the advent of social media and the somewhat viral dependency many people have on it, it’s no wonder that cybercriminals are turning to social as a way to triage potential victims. A public forum on which soon-to-be victims spend hours sharing their personal information, social media is low-hanging fruit — a true hacker’s dreamscape. You can never be 100 percent certain that your information is secure on any of the most popular social media platforms: Security settings reset themselves; platform updates cause private profiles to become public; and unsuspecting users make connections with total strangers either by accident or because of sheer laziness. All of these instances open up a vault of private information that’s free for the hacker’s taking. Some of the more obvious details users should avoid revealing on their social media pages include personal information such as maiden names, birth dates and phone numbers. Of course, vacation status should never be advertised either — it can give thieves a perfect opportunity to track down your residence and take advantage of your absence. In social engineering hacks, cybercriminals portray themselves as a legitimate company or individual to gain a victim’s trust and exploit his or her need for a product or service. Industry experts recommend being especially cautious of urgent emails that request confirmation of payments, login details and social media updates. Old Hardware Restoration Experts recommend being extra-vigilant when throwing away or trading in old smartphones, tablets and computer hardware. Never get rid of these devices without being absolutely sure you have wiped them clean of personal data. If you are tossing an old hard drive, be sure to completely destroy the data before you part ways with it. If you neglect this all-important step, you could be handing over your identity to a savvy cybercriminal that makes his or her living by restoring old user data. These days, malware has become one of the hacker’s most prolific tools of corruption. By making victims feel they need to subscribe to a security fix or patch to avoid losing valuable computer files, hackers are able to take over your computer hardware. In some cases, these cybercriminals take things a step further by making expensive ransomware demands in exchange for the release of personal information and files — and they won’t return your data until you pay up. Even worse, in some cases, they take the money and demand more before turning over a victim’s files. 
Vishing (aka voice phishing) Beware of cybercriminals who pose as government agencies via phone and voicemail. These hackers will often appeal to your sense of trust by offering urgent, confidential information, but before you can access it, you must first "verify your identity" by handing over sensitive personal data. Hammett Technologies is the trusted choice when it comes to staying ahead of the latest information technology security safeguards, innovations, and news. If you are concerned that you may be the victim of a cyberattack, contact us at (443) 216-9999 or send us an email at email@example.com for more information.
How Mobility-driven Machine Learning is Elevating eLearning in 2021 by Smitesh Singh, on Jul 5, 2021 6:01:11 PM Initially struggling with the onset of the pandemic, the education industry is now set to make records with the pace of digitalization and profitable yields of popular technologies, implementable and operable on smartphones. Machine learning, as has been the case in digital marketplace, is bound to revolutionize education, opening up a myriad of opportunities for schools and institutions to level up and build stronger walls for an exhaustive learner development through mobile and web apps. In this blog, we will take a look at some of the popular machine learning use cases that schools and institutions can channel through administrative apps and make education as effortless as possible for students, parents and teachers alike. Mobility-based Machine Learning Use Cases in Education - Infrastructure management on-the-go - Machine learning has made inroads into infrastructure management in numerous industries. Educational infrastructure, that includes connectivity, mobile devices, and proprietary learning platforms, is now fast gaining prominence after the onset of pandemic. It makes up for one of the most functional use cases of ML in education. In schools and institutions, it is vital that authorities are notified of discrepancies or anomalies in functioning of their digital learning platforms or apps to make sure student studies remain slick. For example, a notification service in the app can alert authorities of the downtimes and possible reasons for the same so as to help instigate prompt action. This is again done through an ML based analysis of continuously collected data streams from networks, historical anomalies etc. - Data analysis on learner mobile usage - Data collected by apps through cookies on smartphones can help in understanding the web surfing behavior of the students and intervene in cases that seem serious or objectionable. Furthermore, this data can also help authorities understand the level of engagement and depth that their content offers based on data that indicates students resorting to external academic research for certain concepts and doubt clearing. This can also help authorities keep a tab on the emotional well-being of students and keep the parents informed. - Machine learning for adaptive learning - Adaptive learning has been present since the beginning of mankind but its popularity and effectiveness in education has been quite recent. It involves watering down on depth and difficulty levels of quizzes, exams, or content in order to help students build a robust foundation in academics. Through data technology, this principle can be implemented through a well-worked data collection and feedback framework in the app, that can self-learn and automate adaptive learning. In conceptual learning, prompt individual student feedback on difficulty level of classes can help the algorithm offer content that is relatively easy and build upon it through multiple one-on-one sessions. - Individual student monitoring - Machine learning can help authorities look into individual student performance and be notified if a student is consistently performing poorly even after appropriate guidance and training. It can also help analyze individual interests of a student based on the subjects he/she excels at or like and the ones they despise. This can help teachers hone in on the strong points of students and prepare them for a suitable career. 
- Data analysis to deliver quality education - Improving the quality of education has been the number one use case of machine learning and data in education. Through frequent ML-based exploration of various data sets, such as student performance, feedback, and external reviews, you can tweak your strategies, delivery frameworks, and content to meet and surpass industry standards. It can also help you phase out clichéd content and app features and address student or parent concerns about your curriculum. - Revenue generation through mobility data - Machine learning and big data have long been helping enterprises capitalize on popular as well as uncharted strategies that help maximize profits. The popular ones include cross-selling, commissions on third-party content sales, and advertisements. Through a range of well-placed ads and the sale of in-house educational products such as mini-courses, certificates, and diplomas on subjects a student is interested in, organizations can offer students a personalized and resource-rich training curriculum, and gain a significant monetary advantage in the process. As traditional brick-and-mortar education models received a sobering blow due to the pandemic, the turn to remote learning has proved rejuvenating. These inventive technology use cases are making up for the lost time and stagnated routines of learners, encouraging stakeholders to further weave technology into education. It therefore makes great sense for enterprises in the education sector to ramp up their technology initiatives and partner with education technology companies that can help them expedite and sustain their ambitious foray into technology-driven education models.
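As a toy illustration of the adaptive learning use case described above, the sketch below nudges quiz difficulty toward a target success rate based on a learner's recent scores. A real system would learn these adjustments from data rather than hard-coding them; the scale, target, and step size here are assumptions made purely for the example.

```python
def next_difficulty(current, recent_scores, target=0.7, step=0.1):
    """Nudge quiz difficulty toward a target success rate.

    current:       difficulty on a 0.0 (easiest) to 1.0 (hardest) scale
    recent_scores: fraction of correct answers in the last few quizzes
    target:        success rate we want the learner to sit around
    """
    if not recent_scores:
        return current
    success = sum(recent_scores) / len(recent_scores)
    if success > target + 0.1:      # learner is coasting -> harder material
        current += step
    elif success < target - 0.1:    # learner is struggling -> easier material
        current -= step
    return min(1.0, max(0.0, current))

# Example: a learner averaging 90% on recent quizzes gets harder content
print(next_difficulty(0.5, [0.9, 0.85, 0.95]))  # 0.6
print(next_difficulty(0.5, [0.4, 0.5, 0.45]))   # 0.4
```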
Bananas are rich in fiber, antioxidants and several nutrients. A medium-sized banana has about 105 calories. It can help moderate blood sugar levels after meals and may reduce appetite by slowing stomach emptying. Bananas are fairly rich in fiber and resistant starch, which may feed your friendly gut bacteria and safeguard against colon cancer. Bananas may aid weight loss because they're low in calories and high in nutrients and fiber. They are high in several antioxidants, which may help reduce damage from free radicals and lower your risk of some diseases. Depending on ripeness, bananas harbor high amounts of resistant starch or pectin. Both may reduce appetite and help keep you full. Unripe bananas are a good source of resistant starch, which may improve insulin sensitivity. Eating a banana several times a week may reduce your risk of kidney disease by up to 50%. Bananas may also help relieve muscle cramps caused by exercise, and they provide excellent fuel for endurance exercise. For more tips, follow our Today's Health Tip listing.
The findings, described in a paper in Nature today, could help scientists learn more about how human embryos develop and provide insights into diseases, as well as offering an alternative to animals for testing, according to the researchers. The new model embryos, which bypass the need for sperm or egg cells, were developed in the lab alongside natural mouse embryos. They mirrored the same stages of development up to eight and a half days after fertilization, developing beating hearts and other foundations of organs, including the neural tubes that eventually turn into the brain and spinal cord. "I think it's a major advance," says Leonardo Beccari from the Center for Molecular Biology Severo Ochoa, in Madrid, who was not involved in the research. Studying how mouse stem cells interact at this point in development could also provide valuable insight into why human pregnancies fail during the earliest stages, and how to prevent that from happening. "This is really the first demonstration of the forebrain in any models of embryonic development, and that's been a holy grail for the field," says David Glover, research professor of biology and biological engineering at Caltech, a coauthor of the report. Stem cells are able to develop into specialized cells, including muscle, brain, or blood cells. The synthetic embryos were made from three types of mouse cells: embryonic stem cells, which form the body; trophoblast stem cells, which develop into the placenta; and extraembryonic endoderm stem cells, which help to form the yolk sac. The embryos were developed in an artificial incubator created by Jacob Hanna of the Weizmann Institute in Israel, who recently kept realistic-looking mouse embryos growing in a mechanical womb for several days until they developed beating hearts, flowing blood, and cranial folds. Hanna is also a coauthor of the new study. By mimicking the natural processes of how a mouse embryo would form inside a uterus, the researchers were able to guide the cells into interacting with one another, causing them to self-organize into structures that progressed through developmental stages to the point where they had beating hearts and the foundations for the entire brain.
Your smartphone chimes as you step outside your front door on the way to work, notifying you of an incoming delivery. You put on smart-glasses and look up at buzzing drones. One detaches from the swarm and descends. It's carrying your monthly razor subscription kit. An icon pops in your peripheral vision — you've been charged $10. Delivery received, a text below says. A security camera recognizes you in the yard and calls for your car. The garage door opens. The car drives itself up front to pick you up. That's the kind of world we may all live in soon, and it will be made possible by 5G. The New Mobile Generation 5G is a new-generation mobile network that is forecast to replace 4G over the next 10 years or so. The latest evolution of a communication protocol that started with "1G" transmitting analog voice in the 1980s, 5G has the potential to revolutionize medicine, manufacturing, cloud computing, and autonomous vehicles. It's a huge deal in communication and can bring that glittering Hollywood sci-fi vision of the future one step closer to reality. 5G is, in fact, so much of a leap over 4G, delivering up to 100 times more network capacity, that we can only imagine what kind of innovations it will facilitate. But one area that we know for certain it will supercharge is the Internet of Things (IoT). As of early 2021, there are almost 9 billion active IoT devices. This number is expected to nearly triple by 2030, reaching almost 25.5 billion. That's a lot of devices, most of them simple computers with a primitive chip and basic security. Moreover, by that time, each may be able to deliver peak data rates of up to 20 gigabits per second, all thanks to 5G. When it comes to potential innovation, that's a tantalizing thought. But when it comes to DDoS security, it's a frightening one. The DoS Threat A Distributed Denial of Service (DDoS) attack is a form of cyber threat created when a hacker floods a network with computer-generated traffic, creating a kind of traffic jam. Because all available bandwidth is used by bot requests, the targeted resource overloads and appears unresponsive to users. The resource can't function. Cybersecurity experts detected 4.8 million DDoS attacks in the first half of 2020, a 15% increase from the same period in the year prior. Most experts agree that this upward trend will continue. Meanwhile, attacks are getting ever more sophisticated. DDoS campaigns are used by threat actors in a variety of ways. APTs use DDoS as a diversion to pull attention from a concurrent attack, usually striving to steal sensitive data or banking details. Some DDoS campaigns carry a ransom demand, and some are launched by business owners to handicap competitors. The cost of launching a DDoS attack today starts from as low as $7 per minute, one study found. At the same time, the increasing availability of DDoS-as-a-service tools makes it easy to launch an attack, even for non-technically savvy users. It's as simple as signing up on a website, specifying the target URL, and clicking "start." This applies even to advanced DDoS attacks like UDP floods, SYN floods, or IP fragmentation. These attacks almost always use botnets — networks of infected devices that do the hacker's bidding — to create a tsunami of traffic requests. These networks include PCs, smartphones, and Android tablets, but also — and increasingly commonly — IoT devices, which are often the easiest to compromise. An Explosive Combo It's only a matter of time before IoT devices proliferate even further.
Equally, 5G will replace the current network protocol in the upcoming years. The question is, will cybersecurity be ready? A typical 4G upload speed in the real world hovers around 25 Mbps. For 5G, this same number may top 1 gigabit per second. What this means for DDoS and botnets is simple: every infected device will have the potential capacity to generate at least 40x the volume of network traffic it does today. With a substantial chunk of the IT industry still unprotected from DDoS, the consequences can be dire, especially if we consider the applications of 5G. Expected to be used in emergency communications, smart cities, and smart factories, 5G will play a vital role in manufacturing, medicine, communication and more. Global DDoS threats leveraging 5G throughput can inflict great damage and increase global downtime. They can sabotage automated manufacturing processes by hindering latency-sensitive applications and cutting the connection to the cloud. Attacks targeting smart cities can interfere with critical components like traffic-control systems, endangering public safety. Conventional DDoS protection relies on scrubbing centers that receive, analyze and cleanse malicious traffic. So far, this strategy has been effective. But the attacks that are to come when 5G is widely deployed will make the current threats seem like ripples next to 25-meter waves. A network of 100,000 infected devices will be capable of creating terabit-level attacks, and 100,000-unit-strong botnets are no novelty even today. Effective protection in the future will require a paradigm shift in the way we approach DDoS security. Protection coverage will need to include all traffic and connected devices, rather than only reaching high-value clients. Proactive, fully symmetric protection and DDoS filtering at the network perimeter will need to be employed. Recent advancements in router silicon allow the development of next-generation chips that are capable of terabit-level forwarding capacity and improved packet filtering. One strategy is to move away from reliance on centralized scrubbing centers and instead create a distributed perimeter of next-generation routers that pick up the network filtering functionality. Advances in AI and big data analytics will enable us to reliably detect DDoS signatures or other traffic anomalies and program the routers with a reciprocal filter to alleviate the attack, lifting some of the burden from scrubbing centers. Shifting the cybersecurity landscape towards a new paradigm will take time and will likely meet some friction from business owners, especially in smaller companies, where significant budget is often allocated to cybersecurity only after an attack takes place. The technology, however, is already here – as is the need for action.
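To make the back-of-the-envelope arithmetic above explicit, the snippet below reproduces the 40x per-device increase and the theoretical ceiling of a 100,000-device botnet using the article's own illustrative figures. Real attacks would reach only a fraction of this ceiling, but even a small fraction lands in terabit territory.

```python
# Illustrative figures taken from the discussion above
MBPS_4G = 25            # typical 4G upload speed, in Mbps
MBPS_5G = 1_000         # peak 5G upload speed (1 Gbps), in Mbps
BOTNET_SIZE = 100_000   # infected devices

speedup = MBPS_5G / MBPS_4G
aggregate_gbps = BOTNET_SIZE * MBPS_5G / 1_000   # convert Mbps -> Gbps
aggregate_tbps = aggregate_gbps / 1_000          # convert Gbps -> Tbps

print(f"Per-device increase: {speedup:.0f}x")              # 40x
print(f"Botnet flood ceiling: {aggregate_tbps:.0f} Tbps")  # 100 Tbps
```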
As a corporate network admin or security professional, you probably think of yourself as one of the good guys in the cyber world. And that means you probably rarely venture over to the wrong side of the virtual tracks, where the bad guys hang out. Sure, you're aware of and understand the old adage that you should "know thine enemy," but even if you wanted to check out what goes on down there on the underside of the Internet, you're most likely too busy just trying to keep your own network and systems secure to really delve into it. Besides, bad things happen to good people when they wander into crime-ridden neighborhoods, and that's just as true online as out there in the real world. That said, the "head in the sand" approach doesn't work well for ostriches or humans. Cautious awareness of the dangers that exist will help you to avoid them. Let's take a wary look, then, at the mysteries of the "dark net" and how it can threaten the integrity and security of your corporate network. Into the darkness First of all, let's distinguish between dark nets and the dark web. Although the two terms are sometimes used interchangeably, they aren't the same thing. In this context, the word "dark" has a connotation of evil, sinful, immoral or illegal. Networking in the dark Technically, dark nets are the networks on which the dark web is hosted. As you already know, the Internet is not a single entity; it is made up of many different networks that communicate with each other (a network of networks, or internetwork). Those networks are made up of servers, clients, switches, routers, and the cables or wireless equipment and airwaves that connect them together. A dark net is an IP network connected to the Internet like any other, but it is designed to be "hidden" from the rest of the Internet. Of course, there are many networks connected to the Internet (such as corporate intranets). But those networks are built to limit communications to groups of known users. Dark nets, on the other hand, are designed to provide anonymity to their users. "Dark" traffic is routed through multiple layers of encrypted relays to hide its origins and its content. It's also the content of the traffic going across a dark net that makes it different – which brings us to the dark web, where those who know how can access that content. Oh, what a tangled web The dark web is sort of like a parallel universe that coexists with the public web (sometimes called the visible web) without ever revealing itself to those on the other side, even though they both use the same Internet infrastructure as their foundations. Regular public websites are hosted on web server software that runs on organizational or personal computers connected to the Internet, and all of those millions of web pages together make up the world wide web as we know it. The dark web consists of websites running on computers connected to a dark net. These servers are not indexed by or accessible to standard search engines, and Internet users can't find or view them without special software and accounts. The dark web is (a small) part of the larger entity called the deep web, which we'll discuss next. Getting in too deep Lying between the visible, legitimate web and the well-obscured dark web, sort of like the Bathypelagic or midnight zone in the ocean that lies between the sunlight zone and the abyss, is a "place in cyberspace" called the deep web.
While its resources, like those on the dark web, are not discoverable with Google or Bing, you can get there with a standard web browser such as Chrome or IE – if you know the direct link/IP address of the websites and have the proper credentials to sign in. There is a lot of confusion as to what the deep web is and isn't. If you watch TV crime series, you might think the deep web and dark web are the same thing, but they're not. If you think of the dark web as analogous to a criminal operation that takes place behind locked doors (or maybe even literally underground), you might think of the deep web as a little like the streets of an urban neighborhood where illegal activities are sometimes carried out. There are some shady dealings going on, but there are also many good people and many legit businesses there.

The most accurate and broadest definition of the deep web is any website that can't be accessed through search engines, including those web pages that are merely password protected. That would include the inner pages of for-pay sites, corporate employees-only or customer-only sites, and many more. There are some, though, who consider a site, including its protected pages, to be part of the visible web if you can use a search engine to get to its login page. The latter group sees the deep web as the home of porn sites, black hat hacker sites, software and music/movie piracy sites, sites that trade in questionable or illegal goods, etc. (as long as the site can be accessed via a regular web browser).

There is some of this happening on the deep web. However, hard-core criminals who engage in serious crimes such as human trafficking, moving large numbers of weapons or drugs, counterfeiting money or passports, organized crime groups, hostile nation-states, terrorists, hit men (or women) for hire, kiddie porn peddlers and snuff filmmakers are more likely to go farther under the surface and seek the greater identity protections of the dark web. In actuality, the deep web makes up the largest portion of the web and contains things like medical and legal documents, academic and research information, government resources, banking and financial records, and all sorts of access-restricted databases and repositories.

Hitting the (Silk) Road

A dark web entity that you might have heard of is Silk Road, a (black) marketplace on the dark web that was shut down by the FBI several years ago for sales of illegal drugs; its founder/owner was prosecuted and convicted of narcotics trafficking and money laundering, among other charges. In an interesting aside, two of the federal undercover agents who built that case were later charged with wire fraud and money laundering themselves, in connection with funds they were allegedly given by the accused (and kept).

In addition to drugs, other legal and illegal goods were sold through Silk Road. The site's terms of service actually prohibited sales of child pornography, stolen credit cards, and weapons. It was estimated that the site enabled over $15 million per year in transactions, all of which were conducted with Bitcoin as the currency, and the FBI said there were approximately 1,229,465 transactions completed on the site before it was closed down. Silk Road's successor, Silk Road 2.0, was also shut down after approximately a year in business. Other dark web marketplace sites come and go; by their very nature, they are prone to impermanence. This is one of many reasons to be careful about dealing with merchants you find on the dark web.
Origins of darkness According to most sources, “onion routing” for the purpose of communicating over the Internet without detection actually got its start, like the Internet itself, with the U.S. military. Specifically, it was the Naval Research Laboratory that developed the Tor project for covert communications with intelligence assets. This took place in the mid-1990s, and the Navy released the code for Tor to the public in 2004. In 2006, a group of computer scientists in Massachusetts formed a non-profit organization to maintain Tor. The Tor project is funded in large part by the U.S. government. Dangers of the dark web Although it’s impossible to get a real count, many experts estimate that the “invisible” web that search engines don’t see – both dark web and deep web – is many times larger than the visible “surface web.” Many of the people who inhabit the dark web are dangerous characters who have no qualms about lying to you, stealing from you, or even in extreme cases causing you physical harm if you cross them (or even if they only suspect that you might). The dark web is designed to allow you to surf and make transactions anonymously, but the flip side of that coin is that the identities of other parties are similarly protected so that you never really know with whom you’re dealing. Not everyone in a bad neighborhood is a bad neighbor Despite the connotations of its name, not everything that happens on the dark web is illegal or morally reprehensible, and not everybody who ventures into its nether regions has malevolent intent. In fact, whistleblowers may use it to protect their identities as they seek to unmask illegal activities. Citizens in countries under the rule of oppressive regimes may use it to communicate with the outside world. You can also be sure that law enforcement agencies are on the dark web, lurking undercover to unearth evidence of crimes. Curiosity seekers, too – especially computer geeks – may wander over to the “dark side” just to see what’s going on there, as do writers researching books, security professionals assessing the threat, and others who aren’t members of the ranks of “bad guys.” It’s especially important for anyone who’s considering dipping his/her virtual toes into the murky waters of the dark web to be aware of the dangers and how to take measures to protect against them. Assessing the risks Beyond the obvious dangers that are associated with rubbing digital shoulders with drug dealers and paid assassins, it’s important to understand how dark web client software works. Files on the dark web are often shared through peer-to-peer (P2P) networks. Why does this matter? Because it means that just as you’re connecting through other users’ computers (which makes it hard to track you), other people are connecting through your computer. In a full-fledged P2P network, all of the computers act as both clients and servers. The servers that you connect to may or may not have security measures implemented; you have no way to know. They may have viruses or host malware that can be passed on to your computer. There is essentially no infrastructure security; protections depend on the security of the application (such as the encryption of data). You also have no way of knowing whether other users are really who they say they are since authentication is difficult or impossible on a network designed to provide anonymity. The integrity of P2P networks is based on the trustworthiness of the peers, but how can you trust someone when you don’t even know who he/she is? 
It's important to remember that privacy and security, while frequently lumped together in the IT world, are not the same thing. While your activity on the dark web may be private, that doesn't mean it's necessarily secure. Many hackers and scammers are lurking in the depths of the dark web, set on stealing your personal information. And if you're tempted to engage in anything that's illegal or falls into a "grey" area, be aware that the FBI and NSA have exploited vulnerabilities in the Tor protocols and can and will track users. It's not illegal merely to access the deep or dark web, but doing so can raise a red flag to law enforcement.

How to get there from here

The web browsers and search tools that we use to surf the "above board" web are blind to the existence of the deep web, including the dark web. To cross over into that shadowy world, you need special clients such as Tor (Tor stands for The Onion Router, so named because of the layer after layer of encryption that is applied to dark web transmissions). There are other deep/dark web clients, but Tor is the most well-known.

It's highly recommended that if you wish to experiment with accessing the dark web, you create a secure, isolated environment for doing so: set up a virtual machine and install an operating system that will be used only for dark web access. Install sandbox software such as Sandboxie and run the Tor browser in the sandbox. An alternate method is to install VPN software such as NordVPN to run Onion over a virtual private network, log into an Onion over VPN server, and run the Tor browser to connect through the VPN. Either way, before you open the browser, ensure that webcams and microphones on the computer you're using are disabled or disconnected. It's also a good idea to disable scripting and shut down any applications and unnecessary services running on the computer that could be exploited. It's even better if you can use a "throwaway" physical machine that's dedicated only to this purpose, on an Internet connection that's completely separate from your home network, and run your VM on this. There are Tor browsers for mobile operating systems (Android and iOS), but their security is questionable. The best way to access hidden sites is with a PC, with a good firewall and anti-malware software running.

Under the hood: how it works

The Tor network software builds a circuit of encrypted connections through relays, and the Tor browser enables the use of Tor on Windows, macOS, or Linux. The Tor web browser is based on Mozilla Firefox and uses the Tor proxy. You can run it from a USB stick without installing it. The browser automatically routes requests through the Tor network.

So how do you find sites on the deep/dark web? These sites don't use public DNS servers to resolve names to IP addresses as you do on the visible web. Dark websites live in the .onion top-level domain, which was designated as a special-use TLD for implementing anonymous services. The Tor Onion Service Protocol provides the way for hidden services to advertise their existence in the Tor network so that you can find them. Random relays on the network act as introduction points and public key cryptography is used to keep the service's IP address private. The client downloads a descriptor for an Onion service, which gives it the introduction points and public key to use to contact the service. The communications go over a Tor circuit, and the client stays anonymous.
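For readers who want to see what "routing requests through the Tor network" looks like in practice, here is a minimal sketch, not part of the original article, that sends a request through a locally running Tor client via its default SOCKS port (9050; the Tor Browser bundle typically listens on 9150 instead). It assumes Tor is installed and running and that the requests library has SOCKS support (pip install requests[socks]), and it should only be run from the kind of isolated environment described above.

```python
import requests

# Tor's default SOCKS5 port when the Tor service is running locally.
# "socks5h" makes DNS resolution happen inside the Tor network, not locally.
TOR_PROXY = "socks5h://127.0.0.1:9050"
proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

def fetch_via_tor(url: str) -> str:
    """Fetch a URL through the local Tor SOCKS proxy and return the body."""
    response = requests.get(url, proxies=proxies, timeout=60)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # check.torproject.org reports whether the request really arrived via Tor.
    print(fetch_via_tor("https://check.torproject.org/")[:200])
```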
The client and server connect through a rendezvous point established by the client and protected by a one-time secret. The connection consists of six relays, three chosen by the client and three by the server. The Onion Directory and the Hidden Wiki are some resources you can use for finding sites on the deep/dark web. Note that on the dark web, especially, sites come and go, so a listed site may have disappeared temporarily or permanently.

Protecting the company network

Given all of the above and the precautions that are advised for exploring the invisible part of the web, it goes without saying that you should never access dark websites from your work computer/network. And of course, that goes for all of your users as well. Many of the "geeks" or amateur hackers who venture into the dark web don't have criminal aspirations but do it because it's the "cool" thing to do. But they aren't security experts, and they know just enough to be dangerous, or rather, to put themselves (and the organization whose network they're using) in danger.

To protect your organization's systems and networks, it's important to have company policies prohibiting or controlling any use of Tor or other dark web clients on the company network. You should also have technological controls in place to enforce those policies by blocking Tor and other deep web client traffic. There are a number of ways to do this, which are beyond the scope of this article. A quick web search will reveal many discussions of how to detect and block access to the dark web; some of these work better than others. Firewall rules can be used to control access to various ports, sites, protocols, and applications, so a good firewall such as Kerio Control can be an important element in protecting your organization from dark web threats.
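As one concrete illustration of the "technological controls" idea, the sketch below turns a list of known Tor relay or exit-node IP addresses into firewall block rules. It is a simplified example added for this rewrite, not a product recommendation: the input file is hypothetical (in practice you would populate it from a published Tor exit-node list), and the generated iptables commands are printed for review rather than applied.

```python
"""Generate firewall rules that block outbound traffic to known Tor relays."""
import ipaddress
from pathlib import Path

# Hypothetical feed: one IPv4 address per line, e.g. exported from a Tor exit list.
RELAY_LIST = Path("tor_relays.txt")

def load_relay_ips(path: Path) -> list[str]:
    """Read and validate relay addresses, skipping comments and malformed lines."""
    ips = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        try:
            ips.append(str(ipaddress.ip_address(line)))
        except ValueError:
            continue  # better to skip a bad line than emit a broken rule
    return ips

def iptables_rules(ips: list[str]) -> list[str]:
    """Produce DROP rules for outbound connections to each relay."""
    return [f"iptables -A OUTPUT -d {ip} -j DROP" for ip in ips]

if __name__ == "__main__":
    for rule in iptables_rules(load_relay_ips(RELAY_LIST)):
        print(rule)  # review (and deduplicate) before applying on a gateway
```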
Smarter Technology Solutions For A More Sustainable Future Landfills are full of single-use plastics, electronics, and packaging materials. The IT industry has contributed to the problem by dumping tonnes of old devices, empty ink and toner cartridges, and used packaging. We’re helping reverse that trend with our commitment to recycling in every facet of our business. Our recycling program starts with making sure packaging material is responsibly handled. We work with local vendors and with our suppliers to make sure all packaging materials that arrive at our warehouses in Western Canada are properly sorted and responsibly recycled. For end-of-life devices, we make sure no part of the device ends up in a landfill. To achieve this, we first attempt to donate used equipment to Computers for Schools. If that’s not possible, we follow environmental recommendations to recycle every part of the old device. Our clients use a lot of ink and toner. It would be easy to just throw it all away, but we believe in doing what’s best for our environment, not what’s easiest for business. All ink and toner sourced by WBM can be returned to the manufacturer for recycling using a simple process, but we’ve gone even further by joining with the HP & Mira Foundation Cartridge Recycling Program. As partners in this program, we encourage cartridge recycling by providing ink and toner recycling boxes to our customers. Any cartridge from any manufacturer can be placed in the boxes. When the boxes are full, we pick them up and send the cartridges off to be recycled. This program benefits both the environment and the Mira Foundation’s work to train service dogs for people with visual impairments, physical disabilities, and Autism Spectrum Disorder. We’re proud of our recycling program and our impact on shaping a more sustainable future for the IT community, but we’re not satisfied. We continue to look for ways to expand and improve our recycling efforts.
More than 2.5 quintillion bytes of data are created every single day, and that number is only going to grow. Forbes estimates that by 2020, 1.7 megabytes of data will be created every second for every person on earth. In the computing world, a host of hardware and software is available to store system data, but the two most basic storage options remain the solid-state drive (SSD) and the hard disk drive (HDD). Let's break down the advantages and challenges of each across a variety of use cases.

What is a Solid-State Drive?

SSDs store information on microchips using flash memory similar to a USB stick, explains Greg Schulz, founder of the StorageIO Group and author of several books on storage. This flash memory operates without any moving parts — on a solid state — and is known for faster data encoding and better battery life than other storage options. Optimized for faster boot times, SSDs are commonly found in mid-tier laptops such as the Dell Latitude 3390 2-in-1, which credits its high performance to its processor and SSD information storage. Older laptops, or those experiencing startup lags, can use an SSD to extend storage on any model.

What is a Hard Disk Drive?

HDDs are the most common form of computer storage, reports Lifewire, and work by encoding data using multiple spinning platters and a read/write head that is suspended above the rotating disk. HDDs are one of the least expensive ways to store large amounts of data. While cost-efficient, consider how much data needs to be encoded before selecting the cheapest option, Schulz recommends. HDDs rely upon a platter to spin and write data. This hardware can slow over time, and its moving parts can fail altogether if dropped or damaged.

SSD vs. HDD: Density and Life Span

Schulz explains that HDDs are optimized for higher storage capacity at a lower cost with good durability (many reads and writes over time). SSDs, however, are optimized for better performance (faster boot times), but with a lower capacity per cost. There are many classes of SSDs, Schulz explains. When shopping for an SSD, it's important to understand the number of terabytes written (TBW). The more terabytes a device has, the larger it is, and the more cells available to read and write data. Smaller SSDs are optimized for very fast reads and writes, while others are optimized for higher capacity with limits on how many writes can be done over time. These attributes factor into how much data will be encoded, which affects overall life span. For HDDs, life span is influenced by wear and tear. Despite new "drop" sensors that attempt to protect the moving parts of an HDD from sudden jolts, one fall can render an HDD useless. At best, an HDD with minimal physical damage can expect to last five to six years, according to Lifehacker. To help predict a drive's future, Schulz recommends looking at cost per capacity, interface and form factor, as well as TBW and metrics on drive writes per day (DWPD).

SSD vs. HDD: Speed

Zeus Kerravala, founder and principal analyst with ZK Research, conducted a test to understand the difference between SSDs and HDDs for Windows 10 performance. To ensure the test was comparing apples to apples and all conditions remained the same, he cloned his current HDD onto an SSD and then rebooted the computer to see how long it took the computer to arrive at the point where a user could begin work. With the HDD, the Windows desktop came up in about 90 seconds. When working with Windows, several processes continued to load after that initial startup.
At about four minutes, he was able to open Microsoft Word and start typing — but even then, a check of Task Manager showed the drive performance was still at 75 percent. From a cold start, the total boot time for Windows 10 on an HDD was roughly five minutes. Kerravala then replaced the HDD with an SSD and ran the same processes. Without any mechanical parts to start up, the Windows desktop appeared in about 20 seconds. After 30 seconds, he was able to open Word, Google Chrome, Adobe Lightroom and other applications.

SSD or HDD for Windows 10?

Mainstream support for Windows 7 ended about four years ago, but the operating system's official sunsetting date is early 2020, when extended support ends. To prepare for this shift, many businesses are looking at their future storage strategies. Windows 10, designed to constantly update its operating system, is the most data-heavy version of Windows to date. When considering which hardware is best for Windows 10, Schulz recommends identifying what takes priority: time or space. "SSD gives you performance for productivity — how quickly things get done from startup to shutdown to loading and saving files, images, videos, photos, documents," Schulz says. "HDD will give you a lower cost. However, factor that in with lower productivity and more time spent waiting."

Choosing Storage for Your Business Needs

For large companies, Schulz recommends a hybrid storage system composed of both SSDs and HDDs. It's easier, and more cost effective, to pair a high-cost, high-performance, high-durability SSD with a low-cost, high-capacity, bulk storage HDD that's less write-intensive, Schulz explains. Smaller companies, Schulz adds, should purchase for scale. There may be a use case where SSD and HDD tiers can work together, but a little SSD in the right place can often do the job just as well, Schulz says. In fact, if a company can shuttle inactive data to a more cost-effective HDD, investing in an SSD will allow major system processes to run faster. The important thing, Kerravala adds, is to make an informed decision based on the needs of the company.

SATA for SSD and HDD

One of the most popular types of hard drive technology is SATA, or Serial Advanced Technology Attachment. This connection interface, created in 2003, allows SSDs and HDDs to communicate data with operating systems. SATA drives are guaranteed to work with nearly any computer, no matter its age. SATA SSDs work best for consumer-grade or small devices, and often fall short as usage needs grow. SATA HDDs are the most common form of HDD found in laptops, PCs and gaming devices. Compatible with nearly every computer operating system, SATA HDDs rarely suffer catastrophic failure, and instead fail gradually over time.
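To make the TBW and DWPD figures mentioned earlier more tangible, here is a small back-of-the-envelope calculator added for this rewrite (not from the original article). The drive capacity, rated TBW, and daily write volume used in the example are illustrative defaults, not vendor specifications.

```python
def dwpd(tbw_terabytes: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """Drive Writes Per Day: full-capacity writes per day allowed by the rated TBW."""
    days = warranty_years * 365
    return tbw_terabytes / (capacity_tb * days)

def years_of_endurance(tbw_terabytes: float, daily_writes_gb: float) -> float:
    """Rough lifespan estimate: rated TBW divided by the terabytes written per year."""
    yearly_writes_tb = daily_writes_gb * 365 / 1000
    return tbw_terabytes / yearly_writes_tb

if __name__ == "__main__":
    # Illustrative example: a 1 TB drive rated for 600 TBW, written with 50 GB per day.
    print(f"DWPD: {dwpd(600, 1.0):.2f}")
    print(f"Estimated endurance: {years_of_endurance(600, 50):.1f} years")
```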
As U.S. organizations become more cognizant about the need to protect the data they collect, they learn how poor the guidelines, laws, and compliance methods are. The U.S. laws are more sectoral. If you really want to adopt a holistic data privacy standard that protects your users — and your organization — look more to the European Union for useful guidance. Enterprise organizations today are very aware of the importance of IT security. Plenty of data breaches, security vulnerabilities, hacking attempts to compromise systems, and software development defects have driven home that message. As a result, most IT organizations have put practices into place to protect the organization from “bad guys” who are trying to break in. However, only recently have most large corporations (and small ones, too) put attention on data privacy: the responsibility for the organization to control access to personally identifiable information (PII) that is collected during the process of doing business. The Sectoral (and Sometimes Conflicting) Approach in the U.S. Healthcare organizations are perhaps most conscious of the data privacy issue as part of their need to protect users’ information for confidentiality, integrity, and availability. Even without the influence of corporate or personal ethics, HIPAA legislation has driven healthcare IT to comply with now-well-established practices. Financial services organizations have similar motivations. However, other industries (particularly in the U.S.) are just catching up with designing systems for data privacy. Many of their IT personnel are getting confused, and for good reason. Overall this is a complicated but important mess. The existing data privacy rules are inadequate, overwhelming, open to interpretation, and contradictory. For instance, the U.S. supported Safe Harbor law, aimed at “prohibit[ing] the transfer of personal data to non-European Union countries that do not meet the European Union (EU) ‘adequacy’ standard for privacy protection,” which directs US companies to deal with EU data separately. However, the Safe Harbor program conflicts with the U.S. Patriot act. Moreover, the distinction between security and privacy sometimes is unclear. Security is about access, which — when it fails — permits a loss of information. Privacy is about data exposure at the boundaries: how much you share, knowingly and unknowingly, and who can get their hands on that data. Understanding Data Privacy To understand the difference, it might help to think of a family that owns several cars. Instead of the parents and teenagers each having a key to each car in the driveway, the family might employ a key rack in the entry hallway. Anyone who can get inside the home perimeter — the front door — can access the car keys, which are meant for the family members to use. Security prevents people from getting into the house, but the parents need to think about what information is stored and exposed once someone walks inside. So organizations need to consider questions including: - What information is the company acquiring, and how it is controlling who can access it? - If the intended person gets access to the data, what can he do with it? - What business processes need to be put into place to ensure that the right people can touch personally-identifiable information, and that the wrong people cannot? - What industry laws affect these decisions? Which should? …and that is just to start with. The EU Data Protection Guidelines Instead of looking at the U.S. 
rules and policies to guide you in designing data privacy practices, I urge you to rely on the policies developed by the European Union (EU). Frankly, the U.S. has pretty low standards of privacy — and not just at a consumer level, such as Facebook. Microsoft's new Outlook apps were blocked by the EU over privacy concerns, as just one recent business example.

One key advantage of the EU guidelines is the notion of separation between data processor and data controller. That is, it puts attention on:
- Who has authority to process information, versus
- Who may decide who can process information.

That's an important distinction which affects accountability, security, and more. For instance, consider the role of an email admin: in some cases the admin might have to look at a user's email, and in other cases she absolutely should not do so. Who decides whether and when that admin should look at the email logs? Surely it should not be the admin herself. With the EU notion of data privacy, there are separate rules and task forces. Control and process are very different. The person who designs cannot create the rules.

The Druva Way

At Druva, we treat the European directive as a gold standard, and we build our software to comply with those goals. (Microsoft is doing the same, incidentally: see Microsoft Adopts Cloud Privacy Standard.) We already do several things that way and are committed to do more. This isn't the easiest way to conduct business, but it's the right way.

For example, the EU directives influenced the way we build encryption, based on shared responsibility. Most companies either build encryption with the key in the cloud (so more people have access) or at the end user (so the key is completely exposed). Either way, admins have more access than they should. We follow the EU provider/admin model in several ways. We build detailed audit trails for just about everything, such as when a file is backed up. The scope for accessing those logs is limited by two things: detailed admin rights and departmental policies. The high-level admin doesn't have a lot of access, and the low-level admin doesn't have a lot of control. We also follow security policies that primarily address threat protection, though they have an effect on privacy. Among them are geo-fencing and IP-based access control. Such features ensure that, say, German data cannot be accessed in Taiwan.

Data privacy is important because breaching data is expensive. Where every action triggers a chain reaction of tons of data, there's a corporate responsibility to make sure it's protected. At Druva, we're doing our best to make that job easier for our customers by building software that complies with the best-quality data privacy initiatives. Concerned about data privacy in your organization? Read our white paper, Preparing for The New World of Data Privacy: What Global Enterprises Need to Know.
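As a rough illustration of how IP-based access control of the kind described above can work, the sketch below checks a client address against per-region allow lists before granting access to a data set. It is a simplified, hypothetical example written for this rewrite; real implementations (including Druva's) differ, and the networks and region names shown are made up.

```python
import ipaddress

# Hypothetical mapping of data regions to the networks allowed to reach them.
ALLOWED_NETWORKS = {
    "de-data": ["10.10.0.0/16", "192.0.2.0/24"],   # e.g. German offices only
    "us-data": ["10.20.0.0/16"],
}

def is_access_allowed(region: str, client_ip: str) -> bool:
    """Return True only if the client IP falls inside a network allowed for that region."""
    ip = ipaddress.ip_address(client_ip)
    for cidr in ALLOWED_NETWORKS.get(region, []):
        if ip in ipaddress.ip_network(cidr):
            return True
    return False

if __name__ == "__main__":
    print(is_access_allowed("de-data", "10.10.5.7"))    # True: inside an allowed network
    print(is_access_allowed("de-data", "203.0.113.9"))  # False: geo-fenced out
```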
Demystifying Identity and Access Controls

Differences: identity and access differ in that one is about who you are and the other is about what you're allowed to see or do. If you can prove your identity, that's called being authenticated. If you can prove you have a right to access something, that's called being authorized. When we talk about authentication and authorization, these two steps can seem like they're occurring at the same time, but they are totally separate processes. As software has moved to the cloud, security technologies have become more sophisticated and these two steps have become even more separate.

Role-based access control, or RBAC for short, is like a barrier between the user and the authorizer. Roles are assigned to users, and users can switch between the different roles, so that when someone wants to carry out a task, the authorizer will either allow or deny the task based on her current role.

Privileged Identity Management (PIM) is a very broad industry term rather than a reference to any specific tools. Many analysts, most notably Forrester, use the term 'PIM' to refer to all things within the 'PAM', or Privileged Access Management, space. PIM and PAM are often used interchangeably to refer to the wider universe of tools and technology that relate to the management, governance, auditing, and lifecycles of all types of privileged access and privileged user credentials.

WHY USE PIM / PAM

With that clarification on terminology aside, any explanation of the Azure PIM tool must first start with a deeper look at Microsoft's position on identity security. The idea of identity as a security perimeter is meant to act in lieu of traditional controls placed on the network. The question, then, is: if you can be more certain that the user accessing a resource is who they say they are, do you still need other controls in place? We view this initiative as a natural evolution of Active Directory's very basic approach to securing user identities (i.e. just a username and a password). Azure AD takes this to the next level with core features such as:
- Enhanced MFA, including Windows Hello for Business and MSFT Authenticator
- Conditional Access: the foundational layer of identity security in Azure AD
- Azure PIM and Azure Identity Governance
- Automated threat response and cloud intelligence solutions: SIEM, Sentinel, and ATP

Of these features, Conditional Access is by far the most pervasive feature in the identity stack. If you or your organization leverages Office 365 or Azure AD today, you have most likely worked with, or been subject to, Conditional Access policies. This feature is where Azure AD administrators can define granular, context-based policies which challenge users for things like additional authentication (MFA) depending on where they are trying to authenticate from, or which resources they are trying to authenticate to. With Conditional Access, you can block connections from certain locations (perhaps a country where your business does not operate), force users to reauthenticate when their connection changes, allow connections seamlessly from managed devices that are compliant with the latest security policies, or apply many other conditions that administrators can set. The key to understand for the purpose of demystifying Azure PIM is that Conditional Access is an identity security tool that applies to everyone in your organization. Azure PIM then becomes a much more targeted tool in a broad set of capabilities that make up the security policy push for 'identity as security'.
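To make the distinction between authentication, authorization, and context-based policy more concrete, here is a small, hypothetical sketch of a Conditional Access-style check. It is not Microsoft code and does not use the Azure AD APIs; it simply illustrates the idea that a sign-in can be allowed, blocked, or challenged for MFA depending on conditions such as location, device compliance, and the role being requested.

```python
from dataclasses import dataclass

@dataclass
class SignInContext:
    user: str
    country: str
    device_compliant: bool
    role_requested: str

BLOCKED_COUNTRIES = {"XX"}                      # illustrative: locations that are always denied
ADMIN_ROLES = {"Global Admin", "User Admin"}    # illustrative privileged roles

def evaluate_sign_in(ctx: SignInContext) -> str:
    """Return 'block', 'mfa', or 'allow' based on simple, illustrative conditions."""
    if ctx.country in BLOCKED_COUNTRIES:
        return "block"      # condition: disallowed location
    if ctx.role_requested in ADMIN_ROLES:
        return "mfa"        # privileged roles always get an extra challenge
    if not ctx.device_compliant:
        return "mfa"        # unmanaged or non-compliant device triggers MFA
    return "allow"          # compliant device, ordinary role

if __name__ == "__main__":
    print(evaluate_sign_in(SignInContext("alice", "CA", True, "User")))        # allow
    print(evaluate_sign_in(SignInContext("bob", "CA", True, "Global Admin")))  # mfa
```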
How Azure PIM Works

Unlike Conditional Access, Azure PIM only applies to administrative roles within Azure and Azure AD. This is an important consideration, both as it relates to 'administrative' functions as well as, more importantly, the idea of Azure and Azure AD 'roles'. Also, unlike Conditional Access, Azure PIM requires Microsoft's highest license tiers for users that are subject to the tool. A distilled-down way to describe Azure PIM is that it's a clever provisioning and deprovisioning utility wrapped around Azure AD and Azure resources to allow for time-bound, or Just in Time (JIT), access instead of standing access.

A good way to understand Azure PIM is to think about what it evolved from: Active Directory Security Groups. In AD, the approach to managing a Security Group is to simply add a user when they need the privileges associated with that group. If it is a temporary need, then, hopefully, someone remembers to go in later and remove them. This is an accident-prone, no-frills, manual process that is fraught with opportunities to make a mistake and leave the user as a permanent member of the group. In addition, the user has all the rights and privileges associated with that group 100% of the time while they are a member, even though they may only need those enhanced privileges for a small amount of time. This is the concept of 'standing access' that Azure PIM attempts to mitigate.

Azure PIM takes this model and evolves it; the Azure PIM utility within the Azure portal allows you to assign users or groups within Azure AD to become 'eligible' for various roles. Eligibility essentially means the user may not have these privileges all the time, but rather for a short period when they opt in, or 'activate', their roles. A new set of audit records now exists where it did not before in Active Directory; you can now view who activated roles and when. Email alerts are available here as well to fire off notifications when certain roles are activated and deactivated.

Administrators and users of Azure PIM must access and work within the Azure Portal. Administrators can select users or groups and define their eligibility criteria, such as which specific role and the time period that it applies to. Administrators decide which roles the assignment pertains to, followed by the time options for the assignment. Permanent eligibility is enabled by default and can be changed to target a specific date range. On the other side, users navigate to the 'Azure PIM blade' within the portal and find their eligible assignments to start the activation process. If successful, the portal will force a refresh once the role privileges are successfully applied. As anyone who works within Active Directory can attest, this is a big improvement that Azure PIM offers as a utility to manage role membership.

What exists outside of Azure in your environment?

This is the broader question that I believe is worth asking in a planning phase when considering Azure PIM: what in your environment will exist outside of the Microsoft ecosystem, both now and in the future? Consider that an environment where multiple solutions exist to solve the same problem can often add complexity that a consolidated solution would inherently avoid. In the case of Azure PIM, the guiding principle to understand is that whether multiple tools are required will depend on what is or is not part of Azure. Think about your own environment (a sketch of the JIT activation model follows the list below):
- What does your workstation environment look like? Are the devices all Azure AD-joined?
- What about macOS or Linux devices?
- On the server side, are these devices in Azure?
- Think of data warehousing – which database platforms do you use? How is privileged access managed for these systems?
- SaaS or 3rd-party applications? Can these be integrated into Azure?
- Network switches? Hypervisors? Firewalls? Load balancers? VPN infrastructure?
- What about 3rd parties that need remote access into any of these systems? How do you secure that access?
- Is identity the end-all-be-all of security for you? Or just one part of a wider PIM/PAM strategy?
- Other cloud providers: AWS, GCP, Oracle, etc. How do you plan to unify processes and technology across these platforms?

Azure PIM's scope is bound to Azure, but your privileged access management controls should extend to your entire environment: on-premises, multi-cloud, etc. When answering the question of how effective Azure PIM might be for you, I'd ask you to balance what you know about your environment and where gaps might exist, as those gaps would have to be filled by another point solution. It is fairly easy to see where Azure PIM provides value, but it's important to be mindful of mapping Azure PIM's real capabilities against what a comprehensive PAM strategy in your organization needs to look like.
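The sketch below models, in plain Python, the eligibility-and-activation flow that Azure PIM implements: a user who is merely eligible for a role holds no standing privileges and must activate the role for a bounded window, with every activation recorded for audit. This is an illustrative model written for this rewrite, not Azure code or the Azure SDK.

```python
from datetime import datetime, timedelta

class JitRoleStore:
    """Toy model of just-in-time (JIT) role activation with an audit trail."""

    def __init__(self):
        self.eligible = {}    # user -> set of roles the user may activate
        self.active = {}      # (user, role) -> expiry time of the activation
        self.audit_log = []   # one record per activation

    def make_eligible(self, user: str, role: str) -> None:
        self.eligible.setdefault(user, set()).add(role)

    def activate(self, user: str, role: str, hours: int = 1) -> datetime:
        """Grant the role for a bounded window; refuse if the user is not eligible."""
        if role not in self.eligible.get(user, set()):
            raise PermissionError(f"{user} is not eligible for {role}")
        expiry = datetime.utcnow() + timedelta(hours=hours)
        self.active[(user, role)] = expiry
        self.audit_log.append((datetime.utcnow(), user, role, expiry))
        return expiry

    def has_role(self, user: str, role: str) -> bool:
        """No standing access: the role only counts while an activation is unexpired."""
        expiry = self.active.get((user, role))
        return expiry is not None and expiry > datetime.utcnow()

if __name__ == "__main__":
    pim = JitRoleStore()
    pim.make_eligible("alice", "Exchange Admin")
    print(pim.has_role("alice", "Exchange Admin"))   # False: eligible but not active
    pim.activate("alice", "Exchange Admin", hours=2)
    print(pim.has_role("alice", "Exchange Admin"))   # True, for two hours only
```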
The shift towards an increasingly digital world has become overwhelmingly apparent. The coronavirus era has forced a technological leap on all fronts, and incumbent technologies are struggling to hold back a deluge of fraud and cybercrime. Between the need for secure access to digital services and the demand for increased security, the case for a trusted and verifiable ID system has never been stronger. However, one question remains: Who gets to implement this system? If the answer is a company or a government, then concerns around personal privacy and civil liberties become quite relevant. Fortunately, this type of system can be implemented in a way that is decentralized, secure, and protects user data. And it’s this type of model that stands to best benefit both consumers and businesses. The need for digital IDs in the modern world Per data from the World Bank, nearly one billion people worldwide lack officially recognized ID – barring their entrance to the digital economy. This issue is even more pressing given that most of these people live in low-income countries. No ID means no access to financial services, travel, healthcare, and even some government services. It’s not just developing countries that stand to benefit from digital ID. From online payments and delivery to account verification, identity remains one of the most critical enablers in accessing a myriad of services. However, for many, entrusting their personal information to a centralized digital ID system spells trouble. Such a system might make acquiescing to ID checks a daily occurrence for a handful of mundane tasks, dropping digital breadcrumbs of our every move. After all, big tech and governments have repeatedly shown they cannot be trusted with this type of information or control over such a system. You only need to look at what happened with Apple’s attempt to digitize traditional IDs in their Apple Wallet. The US government relinquished control to Apple, allowing the company to dominate the digital ID rollout and even foot the bill to the taxpayer. For privacy advocates, Apple’s growing monopoly and an increasing reliance on the tech firm could spell disaster for personal freedoms. For one, the agreement stipulates that Apple retains control over which devices are compatible, excluding those without an Apple product. Moreover, placing valuable data in the hands of any centralized company — no matter how strong their encryption is — presents an inherent data risk. Consigning a private company with ultimate control over a digital ID system and our most personal data is less than ideal. But we needn’t write digital IDs off just yet. Self-sovereign ID is key to preserving privacy When backed by biometric data and enforced using the power of decentralized blockchain ledgers, digital IDs stand to address many of the lingering issues with traditional ID documents. Already, various forms of fraud and illicit access are occurring more frequently as consumers spend more time and money online. Leveraging these new technologies can deliver IDs that are impossible to fake or manipulate, and they can be designed to interoperate with new systems. When implemented correctly, decentralized digital IDs can make it harder to infringe upon civil liberties and privacy. That said, it’s essential that these IDs are not federated or corporatized but are, instead, self-sovereign identities, fully controlled by the end-user — made entirely possible by blockchain’s trustless verification. 
Decentralized digital IDs are supported by a wide range of emerging technologies and techniques, leading to the creation of a truly Self-Sovereign ID, or SSI — where users hold full control over their personal data. This includes zero-knowledge proofs, a technique that allows one party to prove a claim about data to another party without revealing the underlying information, which ensures that personal information never has to be revealed or retained by third-party verifiers. Having self-sovereign identities linked to purchases and payment rails will facilitate trustless trade that can also seamlessly stay in line with regulatory expectations.

Better yet, most of this upgrade would be on a software level. Much of the existing infrastructure could be modified, and personal mobile devices could act as the point of access for data entry or retrieval. This model can give any company a competitive advantage against older and more problematic designs. Users will feel safer, and less intruded upon — a sentiment that should go a long way in the modern era that has seen rampant abuse of data collection practices.

Ultimately, with the shift to a digital world, the problems of physical ID systems need to be addressed. They can be addressed in multiple ways, but everyone should be wary of any solution that is controlled and monitored by businesses or governments. The potential for abuse is far too high. Blockchain tech and mobile devices already offer us the tools we need to create SSIs that are at least as secure as any centralized solution, and arguably more so. This will benefit not only the consumers themselves but also the companies that have the foresight to implement these systems sooner than their competitors.
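To give a flavour of how a credential holder can prove something about an attribute without handing the attribute to every verifier, here is a deliberately simplified hash-commitment sketch written for this rewrite. It is not a real zero-knowledge proof and not production SSI code; actual systems use proper ZKP schemes and signed credentials. It only shows the basic pattern of committing to data once and later revealing it selectively to a verifier that needs it.

```python
import hashlib
import secrets

def commit(attribute: str) -> tuple[str, str]:
    """Holder side: commit to an attribute value using a random salt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + attribute).encode()).hexdigest()
    return digest, salt   # the digest can be published; the salt stays with the holder

def verify(digest: str, salt: str, claimed_attribute: str) -> bool:
    """Verifier side: check a selectively revealed value against the earlier commitment."""
    return hashlib.sha256((salt + claimed_attribute).encode()).hexdigest() == digest

if __name__ == "__main__":
    # The holder commits to a birth year once, e.g. inside a credential.
    commitment, salt = commit("1990")
    # Later, the value (and salt) are disclosed only to a verifier that actually needs them.
    print(verify(commitment, salt, "1990"))   # True: commitment matches
    print(verify(commitment, salt, "2005"))   # False: tampered or wrong value
```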
As hubs of research, academia and culture, colleges have been targets of hackers on more than one occasion in the past few years. In 2014, one university reported a data breach that exposed the personally identifiable information of hundreds of thousands of students and staff, dating back to 1998. This information was also tied to the institution’s ID card system. While it may not be realistic for colleges to preempt every attack vector that could lead to a breach, it is possible to foster stronger cybersecurity through the implementation of more secure access cards, which help guard against threats in the physical world. Access cards can be printed with high-quality designs to ensure their integrity, and also integrated with IT systems so that their data is always up-to-date. Protect Students, Faculty and Staff with Stronger ID Cards Colleges and universities, as well as many primary and secondary schools, have issued IDs and access cards to their students, faculty and staff for decades. In recent years, however, the functionality of these cards has greatly expanded. A card might now be used for anything from checking out a book at the campus library to purchasing a meal at a dining hall. For example, several years ago, Mount Mercy University began charging students for any pages they printed above a set limit, by using their ID cards. Paper use declined by around 40 percent after this change. However, the expansion of these cards’ capabilities comes with some new risks: - For starters, a stolen card could grant an unauthorized person vast privileges for accessing school services and getting into restricted areas such as dormitories and labs. - There is the additional issue of maintaining a robust, integrated ID card system as university budgets, especially for public schools, come under pressure from cost-cutting state legislatures. - Finally, the ideal is to unify all essential functions on a single up-to-date card, but the reality is often that they are divided among disconnected card systems that are difficult to maintain and reconcile. ID card systems themselves have also become magnets for attacks. The university breach mentioned earlier was of a database for its ID cards, demonstrating the value of the data – names, dates of birth, Social Security numbers, etc. – contained in them. Ultimately, around 300,000 records were breached in that incident. What is the best way to protect the sensitive, personally identifiable information on ID cards while ensuring the integrity of the university-wide access system? Any solution should start with a more secure physical card. Card design and printing Quality design and printing are the building blocks of a secure ID card. Important features may include signature capture, DSLR camera support (for ID photos), magnetic stripes, barcodes and QR codes. Together, these features enable a trusted identity that can be verified with a glance if necessary. The information and imagery on the card are consistent with university branding and students’ records. Seamless integration with IT systems The card itself is one part of a larger security apparatus. It may require connections to various databases (Microsoft Access, Oracle, etc.), video surveillance systems and human resources portals. Such integration is essential for producing ID cards with accurate information. The Spring Lake Park School District in St. Paul, Minn., realized this when it overhauled its ID system several years ago using solutions from Entrust and IdentiSys. 
The school district needed to connect the new card production process to its Infinite Campus student information platform. It was ultimately able to pull all essential student data – including name, grade level and personal identification number – from Infinite Campus and relay it to an Entrust® CD800 card printer. An ID card could then be printed with these details along with the most recent photo of the student contained in the database. Steve Halvorson, the district technology coordinator for Spring Lake Park Schools when the new system was implemented, described the setup from Entrust and IdentiSys as "a highly reliable hardware and software badge solution that is integrated with our staff and student information system, building card access programs and video security system." The school district saw increased awareness and security in its buildings following the ID card upgrade.

Guarding Against Threats in the Physical World

Schools are right to be conscious of physical security matters, which sometimes get overlooked as more attention and resources are diverted instead to issues such as malware infections and distributed denial-of-service attacks. Building access must be properly managed, and secure ID cards are integral to this process. Combined with tight integration with databases and other security systems, ID cards promote safety. Having a way to easily assign and verify identities on campus will go a long way toward mitigating many of the security challenges currently facing education.
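As a simplified illustration of the kind of integration described above (pulling student records from an information system and handing card-ready data to a printer), the sketch below assembles badge records from a CSV export. Every field name, the input file, and the send_to_printer stub are hypothetical; the actual Infinite Campus and Entrust integration uses vendor tooling rather than custom code like this.

```python
import csv
from pathlib import Path

# Hypothetical export from the student information system: name, grade, pin, photo_path.
STUDENT_EXPORT = Path("students.csv")

def load_badge_records(path: Path) -> list[dict]:
    """Read the student export and keep only the fields the badge layout needs."""
    with path.open(newline="") as handle:
        return [
            {"name": row["name"], "grade": row["grade"],
             "pin": row["pin"], "photo": row["photo_path"]}
            for row in csv.DictReader(handle)
        ]

def send_to_printer(record: dict) -> None:
    """Stub standing in for the card printer's real submission interface."""
    print(f"Printing badge for {record['name']} (grade {record['grade']})")

if __name__ == "__main__":
    for record in load_badge_records(STUDENT_EXPORT):
        send_to_printer(record)
```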
By Kaitlin Krull

Twenty-first-century life is dictated by technology in practically every way. From industry to education, from science to the arts, it's ever present. One of the most fascinating things about technology is that it is always developing—and home technology is no different. By the time we wrap our minds around the latest gadgets and products for our homes, scientific advancements change the game. Although we can't predict the future, over at Modernize we have a few ideas as to how exactly home technology is going to alter the course of our lives and change the future.

Generations to come will never know a world without the Internet and constant connectivity. Current home technology products such as WiFi and Bluetooth are commonplace in virtually all developed countries, and these technologies are only going to expand and improve. By the time we have retired, connectivity will not be an issue for anyone. All of our home electronics (and appliances, and machinery) will be linked all the time, anywhere, automatically.

Home automation is something that most of us thought was only possible in cartoons like the Jetsons, but it's now a reality for many homes already. A thing of the present rather than the future, smart home automation technologies allow virtually your entire home to be controlled remotely via panels, smartphones, and other devices. As connectivity increases, these devices will be standard in new homes and markedly improved in time.

Most people would argue that the point of technology is to make life simpler. While millennials are criticized for wanting everything to be available to them instantly, technology is really to blame here. Smartphone apps, in particular, give us the opportunity to learn, communicate, purchase, and do anything else we might possibly need to do, instantly. Not too far in the future, everything will become available to all of us at home with the touch of a button. Coffee? Clothes? Shower? Phone? Ride to work? Done. Now.

Decreased energy use

Smart energy products such as thermostats are taking off in energy-conscious (and technologically advanced) countries throughout the world because they can monitor, track, and adjust your home's energy use with minimal effort from you. But these kinds of products are just the beginning. We imagine that technologies that tell us exactly when to turn off appliances in order to make our homes as efficient as possible are not far around the corner. Furthermore, developing technologies behind renewable energy sources will decrease our carbon footprint even further than they are now, saving us tons of energy (and money).

Current futuristic technologies

While we can't possibly know for sure whether our technological predictions will come to fruition, there are quite a few current home products already out there that give us a glimpse into our future. Hydroflooring, smart glass, 3D televisions, and giant touch screen coffee tables are just a few of the current gadgets that make us think that our future home lives might not actually be that far removed from the Jetsons (or even Iron Man, for that matter). In all seriousness, these products point to a technological future of self-sufficiency for homes everywhere.
New research could also mark a milestone for a sophisticated device that already resides beyond Earth. Instruments for a future-facing experiment to demonstrate 3D printing—in space, using material simulating dust found on the moon or other planetary bodies—are set to go to the International Space Station from Virginia on Northrop Grumman’s 16th commercial resupply mission for NASA next month. Through the Redwire Regolith Print study, researchers will test the feasibility of using onsite, space-based raw resources to build on-demand landing pads, habitats and more during future exploration missions. Regolith refers to the blanket of loose rocks and soil on a solid surface—Earth, or elsewhere. “It will be the first time 3D printing with a regolith material has been demonstrated in microgravity,” Redwire Aerospace Engineer Howie Schulman told Nextgov in an email Friday. “I view it as game-changing because it is the necessary stepping stone to finally creating the flight-ready construction equipment that will enable our continuous habitation on the lunar surface.” Schulman is a project lead on the effort. He explained that NASA fostered the development of regolith 3D printing by hosting a centennial challenge for additively manufacturing habitats, which kicked off in 2015. The space agency has since continued to invest in this research and development as it supports the long-term goals of the Artemis program—namely, maintaining a continued human presence on the lunar surface. “This study was spurred by the need to build the large pieces of infrastructure on the moon necessary for permanent settlement,” Schulman said. Through the project, researchers aim to prove and assess 3D printing with a regolith-laden simulant material in a place with continuous microgravity. The material to be used in the experiment is a “specially developed feedstock optimized for compressive strength,” according to Schulman. It’s composed of a polymer binder and the lunar regolith simulant JSC-1A. Schulman confirmed that “NASA preferred this specific regolith simulant and provided it to Redwire for use on the mission.” Three prints are anticipated to be performed in the experiment, and each will produce a unique solid slab. Those slabs will then be downmassed and delivered to NASA scientists who will study the 3D-printed materials. Some of the hardware used to complete the 3D prints will also come back to humans’ home planet for review. One objective of the on-orbit prints, Schulman noted, is to demonstrate the efficacy of the experimental manufacturing process. A manufacturing device produced by Redwire-owned company Made In Space is already aboard the ISS and will perform much of the needed capabilities, NASA’s site notes. Extruders and printbeds made explicitly for regolith-based materials will be integrated with that advanced printer. “The RRP mission is sending experimental swappable parts for use inside the Additive Manufacturing Facility, the first commercial 3D printer to operate in space,” Schulman further explained. This space printer was “designed with future experiments in mind” and built to have modularity specifically so that it could serve as an on-orbit testbed for new manufacturing technology development. “When ISS crew install the regolith printing modules, it will be the first time AMF is used to perform novel manufacturing technology research in space other than the original type of 3D printing it was configured for on launch,” he said. 
“This is a milestone for the printer, and hopefully the first of many new manufacturing research campaigns.” On Earth, 3D printing with regolith simulant does occur. Still, the technology readiness level must rise to progress technique toward deployment on the lunar surface, Schulman noted. And by performing this demonstration in a microgravity environment, he added, “confidence will be raised that the concept is feasible for environments with gravities weaker than Earth's, particularly the moon and Mars.” NASA has big plans to explore new realms of both of those places in the near term, and this research could help make sense of building quick structures with resources on the ground—when that time comes. Expanding on what this research could ultimately pave the way for down the line, Schulman said Redwire views landing pads and roads as some of the “starting pieces of infrastructure” needed to enable more advanced construction around in-space environments. “Once a lunar outpost has a solid, level touchdown site and roads to transport supplies to and from that landing site, more heavy machinery can be delivered,” he added. “Therefore, landing pads and roads will be among the first utilities to be printed by the pioneer manufacturing devices.” Those landing pads will be designed to support the weight of large-scale landers while also managing rocket exhaust to minimize dust disturbance. Once the transportation infrastructure is in place, Schulman added, printing machines can be delivered and deployed throughout a planned outpost to create structural foundations, walls and support pillars. “These types of installations will be used to set up solar panel grids, fuel and oxygen tank farms, and habitats,” he said.
Eliminate an entire vulnerability class from your web server in less than an hour
As a hacker and bug hunter, one of my favorite bugs to find is information disclosure via forced browsing. This is when you can find sensitive data (such as credentials or secret keys) in a file not referenced by the application. The most common way these vulnerabilities are found is by using content discovery tools like fuzzers.
Information disclosure by forced browsing occurs for a variety of reasons. I’ve found entire webroots in zip files, exposed git folders, and authentication bypasses. Perhaps a zip of the webroot was created to serve as a backup. A copy may have been left on the server. This is a big deal since many servers run PHP or ASP code, which is executed server side. The credentials in those files normally can’t be accessed via the browser because the code is executed server-side before being sent to the browser. However, downloading them as a zip means all the files can be read. Git folders are often left behind because deploying servers or features by simply cloning the repo is quite easy. With these on a webserver, unless there is a config which blocks access to this folder, it’s now exposed to the internet. One final example would be when there is authentication which is not enforced on all endpoints. By browsing directly to endpoints without an authentication check, unintended access can be achieved.
The fix in theory
I want to explain the fix at a high level before sharing commands and code because many organizations will have their own way to implement this process. Also, any non-technical readers can have a clean break at the end of this section and still have a practical step-by-step requirements list to pass to their security team.
Here’s the overview of how to drastically reduce exposures on your webservers. This will be performed for each server:
- Gather all paths to all content on the web server (where the website resides)
- Make requests from an external perspective (simulating a normal visitor) for each of those paths. This will likely be done via code or a tool.
- Go through the output to determine if any of the exposed files or paths are sensitive and should not be exposed to the internet.
- Remove any exposed files or folders. Add authentication to any pages which were lacking it.
If your website has a login process that is for customers or external users, this same process would be performed from an authenticated perspective as well.
The fix in practice
Some practicalities that make the above theory a little more difficult to apply at scale are:
- There might be a lack of knowledge on how to output all the files in a usable format
- Browsing to hundreds of files is time-consuming
- Many “servers” are reverse-proxied into specific paths on a host.
- Files are added and modified on the server all the time
So here are the solutions:
- Gathering paths involves getting routes and listing files. Frameworks like Rails have a routes command that will display all routes. Laravel has a route:list command. If you are using a framework like these, look up the command to list the paths and combine them with the following instructions. To list all files in a nice format (in bash/zsh), change to the webroot directory of your server and run this command to save all file paths to all_files.txt:
find . -name "*" -print | cut -d/ -f2- > all_files.txt
- Now we will use that list to fuzz the web server for those files.
I prefer ffuf for this task but feroxbuster, dirsearch, dirb, and others will also work:
ffuf -c -u https://example.com/FUZZ -w all_files.txt -t 5 -mc 200 -ac -o output.csv -of csv -H "User-agent: Mozilla/5.0 Security Testing"
Since this post is geared towards defensive security folks and ffuf is more of an offensive tool, I’ll break down the command.
-c is just colorized output, which is nice.
-u is for the input url. The FUZZ keyword is what will be replaced by the lines in the wordlist for each request.
-w is for wordlist.
-t is for the number of threads to use. By default, ffuf is relatively fast using 40 threads. Toning that down will help reduce the likelihood of getting blocked by the WAF.
-mc 200 matches all 200 response codes. “Match” here means display it on the output and put it in the output file.
-ac tells ffuf to “autocalibrate”. Some servers respond with a 200 that says “not found” rather than a 404. Also, other websites just show the main page if you pass a path that doesn’t exist. Autocalibrate tests the server with paths that shouldn’t exist, builds a baseline, and doesn’t show or save those results.
-o outputs to the following filename, and -of csv requests CSV output (ffuf writes JSON by default, which would break the cut commands used later).
-H is for any header. We are using it to set a custom user agent for traffic tagging, as well as making sure we aren’t using the default ffuf agent in case it’s blocked by the WAF.
- If the server you are securing is proxied to a specific path, simply add that path to the ffuf command. For example, if the app is built and proxied to /app/ then change it to be /app/FUZZ in the command above (and in the script below)
- We can continuously monitor new files, fuzz for them, and alert developers or engineers with a simple script. I’ve put an example script below with in-line notes explaining each command. It’s fully plug and play if you follow the instructions. Setup should only take a few minutes. This can be set to run regularly via a cron job. I also put it in a gist: https://gist.github.com/jthack/ba2c5a1061a913a5c698b9e2b152a362
# Before the first run, read the comments and change the script
# for your company
# Install ffuf with `go install github.com/ffuf/ffuf@latest`
# Install anew with `go install -v github.com/tomnomnom/anew@latest`
# Change the WEBROOT variable below to the location of the webroot
WEBROOT=/var/www/html/CHANGE/ME
# This changes to the webroot directory
cd $WEBROOT
# This makes a directory for storing the files used for this script.
# Change it to be whatever path you want.
PROJPATH=/home/changeme/project
mkdir -p $PROJPATH
# This finds all the files and writes their paths to the file
find . -name "*" -print | cut -d/ -f2- > $PROJPATH/all_files.txt
# Change to the projpath directory
cd $PROJPATH
# This fuzzes for all the files, matches 200 response codes, and
# saves the output in the file. The -of csv flag is needed because
# ffuf writes JSON by default; other output formats will break the
# rest of the script.
ffuf -c -u https://example.com/FUZZ -w all_files.txt -mc 200 -ac -o output.csv -of csv
# The first run of this will put all exposed paths into the file.
# After that, only newly exposed files will be output.
cat output.csv | cut -d, -f2 | anew all_exposed_paths.txt
# You can now review all_exposed_paths.txt or output.csv to make
# sure nothing is exposed that shouldn't be. Due to the way "anew" works,
# this will only show you the new paths each time it is run.
# Alternatively (what I recommend) is to pipe the output of the previous
# command into a slack or discord hook. If you do that, comment out the
# last line and use this one.
# As mentioned before, due to anew, you will
# only be pinged/alerted when there is a NEW 200 response path.
# cat output.csv | cut -d, -f2 | anew all_exposed_paths.txt | to_slack
# Here's my tool I use to send messages to slack (it's only 3 lines of
# code): https://github.com/jthack/toslack
Your company will be significantly more secure if you regularly scan and review all the servers you own with a script like the one above.
Follow up with me
If you’re a security engineer who does this and finds some sensitive stuff exposed, I would LOVE to hear about how this helped. Please tweet or DM me the story! If you’re a company who doesn’t fully understand this post, feel free to reach out with any questions! If you’re a company with a bug bounty program and want to outsource the reviewing-of-the-output aspect, do the first step and send me a list of all the paths! I’ll review it and report anything sensitive to your bug bounty program!
This strategy may not work for:
- Domains which point at something third party like a SaaS product (unless you’re using an on-prem version).
- Sites which get “built” before deploying, like Jekyll does. Many of the resource paths are dynamically built at run-time before deployment, so simply listing files and directories won’t get all the paths needed. It’s definitely still worth doing though, as you’ll catch some things.
- Large-scale companies with hundreds or thousands of domains might find this process difficult. That said, focusing on your core domains would be a great place to start, and the provided script could be pushed via Salt or other tools to automate it at scale.
I really hope you enjoyed this article. I’d appreciate it if you shared the post with anyone you think would find it useful. If you’re curious, all the images are AI-generated with Midjourney. I post a lot of AI-generated hacker art on twitter.
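The post references the author's to_slack helper and suggests running the script from cron, but shows neither piece. Below is a minimal, hedged sketch of what those two pieces could look like: the webhook URL, script path, log path, and schedule are placeholders you would replace with your own, and the real toslack tool linked above may differ.
# A stand-in for the to_slack helper: reads lines from stdin and posts each one
# to a Slack incoming webhook. The webhook URL below is a dummy placeholder,
# and lines containing quotes would need extra JSON escaping.
to_slack() {
  while IFS= read -r line; do
    curl -s -X POST -H 'Content-type: application/json' \
      --data "{\"text\": \"New exposed path: ${line}\"}" \
      https://hooks.slack.com/services/XXX/YYY/ZZZ > /dev/null
  done
}
# Example cron entry (added via crontab -e) to run the monitoring script nightly at 02:30.
# /usr/local/bin/exposure_check.sh is an assumed location for the script above.
# 30 2 * * * /usr/local/bin/exposure_check.sh >> /var/log/exposure_check.log 2>&1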
<urn:uuid:8fd8b1eb-28b1-48b5-b685-ff8ef2c458bc>
CC-MAIN-2022-40
http://rez0.blog/cybersecurity/2022/09/22/prevent-info-disclosures.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00690.warc.gz
en
0.913608
2,215
2.53125
3
Drawing on the Data Provided by the Human Body to Maximize Performance Endurance sports are defined as sports that involve sustained physical activity over a long period. The most common examples that come to mind are long-distance running, cycling, cross country skiing, hiking, and the combined swimming, running, and biking of a long-course triathlon. The main differentiator in endurance sports from strength sports is the lower average intensity during the activity. This is intrinsic to the duration and distances of the sports themselves – you can probably sprint 100 meters at maximum exertion, but you can’t sustain that same pace over the 26.2 miles of a marathon! As is the case in every sport, how well you do in endurance sports depends on many different factors. A few examples include: how strong you are, what kind of shoes you’re wearing, the quality of your technique, etc. However, unlike other sports, endurance sports apply those factors over very long distances and times. When stretched out over a long distance or duration, the factors that affect performance are multiplied and magnified. If we go back to our sprint vs. marathon example – if your running shoes are too small for your feet, you can probably still sprint for 100 meters; try running in those shoes over 26.2 miles, though, and it will be an entirely different story. Therefore, success in endurance sports is driven by being able to tightly control for and then optimize all of the factors, large or small, that could improve or detract from your performance over time. Take cycling as another example: some significant factors in cycling are the weight of your bike frame and choices in what gear to wear. In contrast, small factors could be things like the aerodynamic gains from shaving your legs before a race and the viscosity of the bike lube you apply to your bike chain. As David Brailsford (former performance director of British Cycling) declared in his theory called The Aggregation of Marginal Gains: “small improvements, when added together and applied over time, lead to significant increases.“ With an ever-present need to understand, monitor, and then control the wide range of factors that could influence performance, the Internet of Things (IoT) is positioned perfectly to bring in a new era of endurance sports. The Problem: Costly, Slow, and Disparate Data Collection Practices Impede Progress For many endurance athletes, especially those just starting, training and performance are measured mainly by feeling (e.g., “how hard was that effort on a scale from 1-10?”). Over time, the basic technology has been developed to collect more data around measuring physical performance – with heart rate monitors being the most common – which has opened a whole new window into training insights. However, up until recently, there hadn’t been major changes to the technology itself. Looking back, the science of improvement in endurance sport has been hampered by the processes of data measurement being: - Expensive or Unavailable: The necessary hardware, equipment, and sensors to make key performance decisions were either too expensive or simply didn’t exist, making it difficult for even professional athletes to access them. - Delayed: Users could only review their data and metrics after they were downloaded from the device and uploaded to a computer, making it difficult to get timely feedback and nearly impossible to make real-time decisions. 
- Inactionable: Data from multiple sources and sensors were presented to the user each in their own app, files, or software silos, without any synthesis or recommendations. With so many factors that could affect performance and no place to aggregate them all, many athletes were left to attempt improvement via trial-and-error, or were overwhelmed by what action to take next.
IoT is relatively nascent in the endurance sports industry. Limitations on battery life, connectivity, and wearability have made it difficult to effectively harness the full benefits of IoT as it relates to endurance sports.
The Solution: Fully Connected, Long-Lasting Sensors Deliver Insightful Data
If you’re into running, you may already use an Apple Watch or Garmin to track your route with GPS, monitor your heart rate over ANT+, and stream music to your headphones using Bluetooth. Recent advances and the maturity of IoT across both enterprise and consumer spaces mean that athletes have access to the necessary technology to make game-changing decisions and elevate their performance for the first time. With IoT advancements, endurance sports have become:
- Fully connected with a suite of sensors: Improved battery technology, more accurate sensors, and greater portability mean that a wide range of sensors are available to monitor previously unmonitorable things. Examples include Velocomp’s AeroPod to measure aerodynamic drag, and Stryd’s power meter to measure running power.
- Real-time: Wireless, power-efficient connectivity means that data from sensors can be transmitted to athletes reliably and steadily in real time. This means that athletes can view current data and make decisions that impact their performance mid-race, which can in turn influence whether a race is won or lost. An example is Supersapiens’ blood glucose monitor, which performs real-time blood glucose measurements and reports them on a phone or Garmin watch to the athlete.
- Actionable: IoT platforms and software applications aggregate data from multiple sensors and sources into a centralized location; the platform can then synthesize that data to create charts, graphs, and other visualizations for viewing, identify performance trends, and make recommendations for future training sessions, recovery time, and next steps.
Looking Ahead: Portability and Maximum Battery Life are Top Priorities
The types of IoT sensors and hardware used in endurance sports run the gamut from basic heart rate straps to flashy, ruggedized GPS watches. Two things they all share in common are wireless connectivity and maximized battery life. You don’t want to deal with wires impeding your movement or to be forced to plug in to charge while you’re out exercising for hours.
For connectivity, most sensors do not connect directly to the cloud to maximize battery life – they typically transmit to a phone, watch, or cycling head unit via Bluetooth or ANT+. The nearby phone, watch, or head unit may connect to the cloud via cellular LTE to send that data to an online platform but does so asynchronously from the sensor’s data collection. This makes the data from the sensors immediately available for viewing without a significant battery expenditure. Once that data is available on the cloud, IoT platforms like Leverege or sport-specific software like TrainingPeaks will store and aggregate that data, analyze it, and present it in several helpful views.
Users can interact with the software after their workout or race to understand how they performed and get data-driven recommendations to improve.
<urn:uuid:619fbe83-6f69-4b71-a183-f8b22f18429c>
CC-MAIN-2022-40
https://www.iotforall.com/how-iot-is-transforming-endurance-sports
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00690.warc.gz
en
0.949372
1,414
3.3125
3
What is it? Multi-factor Authentication (MFA) is the process of verifying that you are who you claim to be when logging in to a device or an account. If you’re reading this from your work computer, you probably logged in to your computer – that’s single-factor authentication. But single-factor authentication is no longer enough to keep your accounts secure. Learn more below about the various ways you can digitally authenticate your identity. Understanding the Types of Identity Claim Factors: - Something you own. This is using a mobile phone or device that you have in your possession to prove your identity. Typically, the device provides a code via an application, text message, email, or voice call. You then enter this code, and for successful authentication, your code must match what is expected by the service you’re attempting to log in to. - Something you know. This is something you’ve memorized or stored somewhere, such as a PIN. You must supply the correct PIN to log in to your device or service. - Something you are. This factor is something about your physical body that cannot be altered, such as your fingerprint or retina. Biometric scanners or readers are used to confirm you’re physically the person that you’re claiming to be. Why do I need it? In our digitally-driven world, passwords are no longer enough to keep your information safe. These days, it takes minimal effort for hackers to break into, or social engineer their way into, accounts that are only protected by passwords. Adding an extra step to access your accounts, such as entering an authentication code, means that hackers would also need to have your phone to break in. Create an additional layer of security and make it harder for criminals to access your data by using two-factor or multi-factor authentication. Consult your IT or Security department to see if your organization has a preferred method of multi-factor authentication. The KnowBe4 Security Team
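To make the code-matching idea in the "something you own" factor concrete, here is a small, hedged illustration of time-based one-time passwords (TOTP), the scheme most authenticator apps use: the service and your device hold the same shared secret, and each independently derives the same short-lived code from it, so the codes only match if you hold the enrolled device. The secret below is a made-up example, and the oathtool utility is assumed to be installed.
# Both sides receive the same Base32 secret at enrollment (this one is a dummy value)
SECRET="JBSWY3DPEHPK3PXP"
# The device derives the current 6-digit code from the secret and the clock...
oathtool --totp -b "$SECRET"
# ...and the service runs the same computation; login succeeds only when the codes match.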
<urn:uuid:b6d9ea65-9d56-42bc-8020-1b817cc134f8>
CC-MAIN-2022-40
https://mapolce.com/news/create-an-additional-layer-of-security-with-multi-factor-authentication-mfa/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00090.warc.gz
en
0.93783
447
3.375
3
According to a study from Frost & Sullivan, smart cities are anticipated to create business opportunities with a market value of over $2 trillion by 2025. April 9, 2018 By SP&T Staff By 2050, over 80 per cent of the population in developed countries is expected to live in cities, and over 60 per cent for the developing world. Smart city projects are expected to play an important role in urbanization, says Frost & Sullivan. Additionally, AI plays a key role in smart cities in the areas of smart parking, smart mobility, the smart grid, adaptive signal control and waste management. Major corporations such as Google, IBM and Microsoft remain key tech innovators and the primary drivers of AI adoption, the study says. Other significant findings include: • AI, personalized healthcare, robotics, advanced driver assistance systems (ADAS), distributed energy generation and other technologies are believed to be the technological cornerstones of smart cities of the future • The Asia-Pacific region is anticipated to be the fastest-growing region in the smart energy space by 2025 • In Asia, more than 50 per cent of smart cities will be in China. Smart city projects will generate $320 billion for China’s economy by 2025. • The total North American smart buildings market, comprising the total value of smart sensors, systems, hardware, controls and software sold, is projected to reach $5.74 billion in 2020. • Europe will have the largest number of smart city project investments globally, given the engagement that the European Commission has shown in developing these initiatives. “Currently most smart city models provide solutions in silos and are not interconnected. The future is moving toward integrated solutions that connect all verticals within a single platform. IoT is already paving the way to allow for such solutions,” added Vijay Narayanan, visionary innovation senior research analyst at Frost & Sullivan.
<urn:uuid:772c04c5-ab33-434c-9336-1783628ab504>
CC-MAIN-2022-40
https://www.sptnews.ca/frost-sullivan-global-smart-cities-market-to-reach-2-trillion-by-2025/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00090.warc.gz
en
0.912599
386
2.53125
3
During human evolution, the size of the brain increased, especially in a particular part called the neocortex. The neocortex enables us to speak, dream and think. In search of the causes underlying neocortex expansion, researchers at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, together with colleagues at the University Hospital Carl Gustav Carus Dresden, previously identified a number of molecular players. These players typically act cell-intrinsically in the so-called basal progenitors, the stem cells in the developing neocortex with a pivotal role in its expansion. The researchers now report an additional, novel role of the happiness neurotransmitter serotonin which is known to function in the brain to mediate satisfaction, self-confidence and optimism – to act cell-extrinsically as a growth factor for basal progenitors in the developing human, but not mouse, neocortex. Due to this new function, placenta-derived serotonin likely contributed to the evolutionary expansion of the human neocortex. The research team of Wieland Huttner at the Max Planck Institute of Molecular Cell Biology and Genetics, who is one of the institute’s founding directors, has investigated the cause of the evolutionary expansion of the human neocortex in many studies. A new study from his lab focuses on the role of the neurotransmitter serotonin in this process. Serotonin is often called the happiness neurotransmitter because it transmits messages between nerve cells that contribute to well-being and happiness. However, a potential role of such neurotransmitters during brain development has not yet been explored in detail. In the developing embryo, the placenta produces serotonin, which then reaches the brain via the blood circulation. This is true for humans as well as mice. Yet, the function of this placenta-derived serotonin in the developing brain has been unknown. The postdoctoral researcher Lei Xing in the Huttner group had studied neurotransmitters during his doctoral work in Canada. When he started his research project in Dresden after that, he was curious to investigate their role in the developing brain. Lei Xing says, “I exploited datasets generated by the group in the past and found that the serotonin receptor HTR2A was expressed in fetal human, but not embryonic mouse, neocortex. Serotonin needs to bind to this receptor in order to activate downstream signaling. I asked myself if this receptor could be one of the keys to the question of why humans have a bigger brain.” To explore this, the researchers induced the production of the HTR2A receptor in embryonic mouse neocortex. “Indeed, we found that serotonin, by activating this receptor, caused a chain of reactions that resulted in the production of more basal progenitors in the developing brain. More basal progenitors can then increase the production of cortical neurons, which paves the way to a bigger brain,” continues Lei Xing. Significance for brain development and evolution “In conclusion, our study uncovers a novel role of serotonin as a growth factor for basal progenitors in highly developed brains, notably human. Our data implicate serotonin in the expansion of the neocortex during development and human evolution,” summarizes Wieland Huttner, who supervised the study. 
He continues: “Abnormal signaling of serotonin and a disturbed expression or mutation of its receptor HTR2A have been observed in various neurodevelopmental and psychiatric disorders, such as Down syndrome, attention deficit hyperactivity disorder and autism. Our findings may help explain how malfunctions of serotonin and its receptor during fetal brain development can lead to congenital disorders and may suggest novel approaches for therapeutic avenues.” Brain 5-HT is a neurotransmitter playing a key role in modulating neuronal circuit development and activities. The serotonergic neurons, through their extensive axonal network, are able to reach and influence nearly all the Central Nervous System (CNS) areas. As a consequence, 5-HT regulates a plethora of functions such as sleep and circadian rhythms, mood, memory and reward, emotional behavior, nociception and sensory processing, autonomic responses, and motor activity . Our current understanding of the development, evolution, and function of 5-HT neurotransmission is derived from different model organisms, spanning from invertebrates to vertebrates . It is noteworthy that in all species, the serotonergic network is highly plastic, showing changes in its anatomical organization all through the life of the organisms. 5-HT metabolic pathways, reuptake, and degradation are broadly conserved among multicellular organisms . 5-HT is synthesized from the amino acid tryptophan, which is an essential dietary supplement. Tryptophan is hydroxylated to 5-hydroxytryptophan (5-HTP) by the tryptophan-hydroxylase (TPH) – the rate limiting enzyme for 5-HT biosynthesis. 5-HTP, in turn, is converted in 5-HT by the aromatic L-amino acid decarboxylase. The enzyme TPH has two distinct isoforms encoded by two genes: the Tph1 is expressed in peripheral tissues and pineal gland, while the Tph2 is selectively expressed in the CNS and in the enteric neurons of the gut . Studies on TPH -knockout (KO) mice confirmed that the synthesis of 5-HT in the brain is driven by TPH2, whereas the synthesis of 5-HT in peripheral organs is driven by TPH1 . Since 5-HT is unable to cross the blood–brain barrier, at least in adult life, the central and the peripheral serotonergic systems are independently regulated. The synaptic effects of 5-HT are mainly terminated by its reuptake into 5-HT nerve terminals mediated by the 5-HT transporter. The vast array of brain functions exerted by 5-HT neurotransmission in the CNS is made more complex by the interaction of the 5-HT system with many other classical neurotransmitter systems. Through the activation of serotonergic receptors located on cholinergic, dopaminergic, GABAergic or glutamatergic neurons, 5-HT exerts its effects modulating the neurotransmitter release of these neurons [5,6]. In addition, cotransmission -here defined as the release of more than one classical neurotransmitter by the same neuron—occurs also in 5-HT neurons. Among the cotransmitters released by 5-HT neurons, glutamate , and possibly other amino acids were identified. The regulation and functional effects of this neuronal cotransmission are still poorly understood and are the object of intense investigation . Role of Serotonin in Morphological Remodeling of CNS Circuits In the mammalian brain, 5-HT neurons are among the earliest neurons to be specified during development . They are located in the hindbrain and are grouped in nine raphe nuclei, designated as B1–B9 . 
Although they are relatively few (about 30,000 in the mouse and 300,000 in humans), they give rise to extensive rostral and caudal axonal projections to the entire CNS, representing the most widely distributed neuronal network in the brain . In addition to its well-established role as a neurotransmitter, 5-HT exerts morphogenic actions on the brain, influencing several neurodevelopmental processes such as neurogenesis, cell migration, axon guidance, dendritogenesis, synaptogenesis and brain wiring . Besides the endogenous 5-HT, the brain of the fetus also receives it from the placenta of the mother. Thus, the placenta represents a crucial micro-environment during neurodevelopment, orchestrating a series of complex maternal-fetal interactions. The contribution of this interplay is essential for the correct development of the CNS and for long-term brain functions . Therefore, maternal insults to placental microenvironment may alter embryonic brain development, resulting in prenatal priming of neurodevelopmental disorders . For instance, in mice it has been shown that maternal inflammation results in an upregulation of tryptophan conversion to 5-HT within the placenta, leading to altered serotonergic axonal growth in the fetal forebrain. These results indicate that the level of 5-HT during embryogenesis is critical for proper brain circuit wiring, and open a new perspective for understanding the early origins of neurodevelopmental disorders [16,17,18]. The importance of a correct 5-HT level in the brain has been demonstrated by numerous studies on mice models. When the genes involved in 5-HT uptake or degradation are knocked out, the increased 5-HT levels in the brain lead to the altered topographical development of the somatosensory cortex and incorrect cortical interneuron migration [19,20]. On the other hand, the transient disruption of 5-HT signaling, during a restricted period of pre- or postnatal development, using pharmacological (selective serotonin reuptake inhibitor exposure) animal models, leads to long-term behavioral abnormalities, such as increased anxiety in adulthood [21,22]. These animals do not show gross morphological alterations in the CNS suggesting that the lack of cerebral 5-HT may only affect the fine tuning of specific serotonergic circuits. This hypothesis has been recently confirmed using a mouse model in which the enhanced green fluorescent protein is knocked into the Tph2 locus, resulting in lack of brain 5-HT, and allowing the detection of serotonergic system through enhanced fluorescence, independently of 5-HT immunoreactivity. In these mice, the serotonergic innervation was apparently normal in cortex and striatum. On the other hand, mutant adult mice showed a dramatic reduction of serotonergic axon terminal arborization in the diencephalic areas, and a marked serotonergic hyperinnervation in the nucleus accumbens and in the hippocampus . These results demonstrate that brain 5-HT plays a key role in regulating the wiring of the serotonergic system during brain development. Interestingly, the transient silencing of 5-HT transporter expression in neonatal thalamic neurons affects somatosensory barrel architecture through the selective alteration of dendritic structure and trajectory of late postnatal interneuron development in the mouse cortex . 
Altogether, these findings indicate that perturbing 5-HT levels during critical periods of early development influences later neuronal development through alteration of CNS connectivity that may persist into the adulthood [17,25,26]. Interestingly, recent evidence demonstrated that changes in 5-HT homeostasis affect axonal branch complexity, not only during development but also in adult life . In adult TPH2-conditional KO mice it was shown that the administration of the serotonin precursor 5-hydroxytryptophan was able to re-establish the 5-HT signaling and to rescue defects in serotonergic system organization . Interestingly, in recent elegant experiments that combined chemogenetics and fMRI, it was demonstrated that, in adult mice, the endogenous stimulation of 5-HT-producing neurons does not affect global brain activity but selectively activates specific cortical and subcortical areas. By contrast, the pharmacological increase of 5-HT levels determined widespread fMRI deactivation, possibly reflecting the mixed contribution of central and perivascular constrictive effects . On the whole, findings from genetic mouse models confirm that the level of 5-HT during brain ontogeny is critical for proper CNS circuit wiring, and suggest that alterations in 5-HT signaling during brain development have profound implications for behavior and mental health across the life span. Indeed, a plethora of genetic and pharmacological studies have linked defects of brain 5-HT signaling with psychiatric and neurodevelopmental disorders, such as major depression, anxiety, schizophrenia, obsessive compulsive disorder and Autism Spectrum Disorders (ASD) [17,29,30]. In addition, it is becoming increasingly clear that 5-HT has a crucial role also in the maintenance of mature neuronal circuitry in the brain, opening novel perspectives in rescuing defects of CNS connectivity in the adult. For instance, the potential of 5-HT neurons to remodel their morphology during the entire life is indicated by the well-known capability of 5-HT axons of the adult to regenerate and sprout after lesions [26,31]. However, understanding the cellular and molecular mechanisms underlying the effects of 5-HT during brain development, maintenance and dysfunction is challenging, in part due to the existence of at least 14 subtypes of receptors (5-HTRs) grouped in seven distinct classes (from 5-HT1R to 5-HT7R). All 5-HT receptors are broadly distributed in the brain where they display a highly dynamic developmental and region-selective expression pattern and trigger different signaling pathways. The 5-HT receptors are typical G-protein-coupled-receptors with seven transmembrane domains, with the exception of the 5-HT3 receptor, which is a ligand-gated ion channel . Role of the 5-HT7R in Shaping Neuronal Circuits The 5-HT7R, the last discovered member of the 5-HTR family [33,34], has always been the subject of intense investigation, due to its high expression in functionally relevant regions of the brain [35,36]. Accordingly, several recent data have elucidated its role in a wide range of physiological functions in the mammalian CNS and also in peripheral organs . Interestingly, emerging findings indicate that 5-HT7R is involved in brain plasticity, being one of the players contributing not only to shape brain networks during development but also to remodel neuronal wiring in the mature brain, thus controlling higher cognitive functions (see Section 2.2 and Section 2.3). 
Therefore, this receptor is currently considered as potential target for the treatment of several neuropsychiatric and neurodevelopmental disorders, (as discussed in Section 3), also in view of the fact that its ligands have a wide range of neuropharmacological effects [38,39]. In the mammalian CNS, the 5-HT7R is mainly expressed in the spinal cord, thalamus, hypothalamus, hippocampus, prefrontal cortex, striatal complex, amygdala and in the Purkinje neurons of the cerebellum [40,41]. This wide distribution reflects the numerous functions in which the receptor is involved, such as circadian rhythms, sleep-wake cycle, thermoregulation, learning and memory processing, and nociception . In mammals, this receptor exhibits a number of functional splice variants due to the presence of introns in the 5-HT7R gene and to alternative splicing. The splice variants of the receptor, named 5-HT7(a), (b), (c) in rodents, and 5-HT7(a), (b), (d) in humans [36,42,43], do not show significant differences in localization, ligand binding affinities, and activation of adenylate cyclase . To date, the only functional difference between the splice variants is that the human 5-HT7(d) isoform displays a different pattern of receptor internalization compared to the other isoforms . The 5-HT7R is a G protein-coupled receptor, that activates at least two different signaling pathways. The classical pathway relies on the activation of Gαs and the consequent stimulation of adenylate cyclase, leading to an increase in cyclic adenosine monophosphate (cAMP). The latter activates protein kinase A (PKA), that in turn phosphorylates various proteins such as the mitogen-activated protein kinase and extracellular signal-regulated kinases (ERK) . Another 5-HT7R pathway depends on the activation of Gα12, that in turn triggers stimulation of Rho GTPases, Cdc42 and RhoA; these intracellular signaling proteins, critical for the regulation of cytoskeleton organization, lead to morphological modifications of fibroblasts and neurons . 5-HT7R signaling also involves changes in intracellular Ca2+ concentration and Ca2+/calmodulin pathways [46,47], as well as PKA independent mechanisms which include exchange protein directly activated by cAMP (EPAC) signaling . 5-HT receptor signaling has been recently shown to also depend on their oligomerization. In particular the 5-HT7R can form homodimers, as well as heterodimers with 5-HT1AR . The latter, when is in a monomeric conformation, causes a decrease in cAMP concentration through activation of the Gi. Heterodimerization with 5-HT7R inhibits the 5-HT1AR cAMP signaling pathway, while homodimerization of both receptors do not influence the respective cAMP pathways. These findings suggest that oligomerization of G-protein-coupled-receptors may have profound functional consequences on their downstream signaling, thus triggering cellular and developmental-specific regulatory effects. Role of the 5-HT7R in Shaping Neuronal Circuits during Development The influence of the 5-HT7R on neuronal morphology has stimulated interest in studying its potential role in the establishment and maintenance of brain connectivity and in synaptic plasticity. The availability of selective agonists and antagonists, as well as that of genetically modified mice lacking the 5-HT7R, has shed light on the physio-pathological role of this receptor [39,50,51]. 
By using rodents’ primary cultures of hippocampal neurons and various 5-HT7R agonists in combination with selective antagonists, it was consistently shown that the pharmacological stimulation of the endogenous 5-HT7R promotes a pronounced extension of neurite length [48,52,53]. The morphogenic effects of 5-HT7R stimulation have also been demonstrated in cultured neurons from additional embryonic forebrain areas, such as the striatum and the cortex [54,55] (Figure 1). Neurite elongation was shown to rely on de novo protein synthesis and multiple signaling systems, such as ERK, Cdk5, the RhoGTPase Cdc42 and mTOR. These pathways converge to promote the reorganization of the neuronal cytoskeleton through qualitative and quantitative changes of selected proteins, such as microtubule-associated proteins and cofilin [54,56]. In hippocampal neurons, it has been demonstrated that 5-HT7R finely modulates the NMDA receptors activity [57,58]. Furthermore, 5-HT7R activation increases phosphorylation of the GluA1 AMPA receptor subunit and AMPA receptor-mediated neurotransmission in the hippocampus [59,60]. Consistent with these findings, 5-HT7R-KO mice display reduced LTP in the hippocampus . Chronic stimulation of the 5-HT7R/Gα12 signaling pathway promotes dendritic spine formation, enhances basal neuronal excitability, and modulates LTP in organotypic slices preparation from the hippocampus of juvenile mice. Interestingly, 5-HT7R stimulation does not affect neuronal morphology, synaptogenesis, and synaptic plasticity in hippocampal slices from adult animals, probably due to decreased hippocampal expression of the 5-HT7R during later postnatal stages . It has been recently hypothesized that this decline could be due to the simultaneous upregulation of the microRNA (miR)-29a in the developing hippocampus. Indeed 5-HT7R mRNA is downregulated by the miR-29a in cultured hippocampal neurons, and miR-29a overexpression impairs the 5-HT7R-dependent neurite elongation . Neuronal remodeling is highly influenced by the extracellular matrix. Accordingly, it has been shown that the physical interaction between the 5-HT7R and the hyaluronan receptor CD44, a main component of the extracellular matrix, plays a crucial role in synaptic remodeling. Briefly, stimulation of the 5-HT7R increases the activity of the metalloproteinase MMP-9, which, in turn, cleaves the extracellular domain of CD44. This signaling cascade promotes detachment from the extracellular matrix, thus triggering dendritic spine elongation in the hippocampal neurons of the mice . In accordance with the influence of the 5-HT7R signaling pathways in remodeling developing forebrain neuron morphology, it was shown that prolonged stimulation of this receptor and the downstream activation of Cdk5 and Cdc42 increased the density of filopodia-like dendritic spines and synaptogenesis in cultured striatal and cortical neurons . The crucial role of 5-HT7R in shaping developing synapses (Figure 1) was confirmed by the pharmacological inactivation of the receptor as well as through the analysis of early postnatal neurons isolated from 5-HT7R-deficient mice. It is noteworthy that, when 5-HT7R was blocked pharmacologically, and in 5-HT7R-KO neurons, the number of dendritic spines decreased, suggesting that constitutive receptor activity is critically involved in dendritic spinogenesis. 
From this point of view, a detailed analysis of dendritic spine shape and density in the brain of 5-HT7R-KO mice at various ages would be crucial to assess the physiological effects of this receptor on neuronal cytoarchitecture. The involvement of 5-HT7R in spinogenesis and synaptogenesis—together with the demonstration that its activation is able to stimulate protein synthesis-dependent neurite elongation, as well as axonal elongation [54,56]—suggests the intriguing possibility that the activation of this receptor may be linked to the axonal and synaptic system of protein synthesis. The local system of protein synthesis has been demonstrated to play a crucial role in synaptic plasticity—although its regulatory mechanisms are only partially understood [66,67,68]—and 5-HT7R and its related pathways are good candidates to be part of this system. Role of the 5-HT7R in Remodeling Neuronal Circuits in Adults Neuronal circuits remain able to reorganize in response to experience well into adulthood, continuing to exhibit robust plasticity along the entire life . Consistently, the action of 5-HT7R on the modulation of neuronal plasticity is not restricted to embryonic and early postnatal development, but can also occur in later developmental stages and in adulthood (Figure 1). Interestingly, it was shown that selective pharmacological stimulation of 5-HT7R during adolescence determines its persistent upregulation in adult rat forebrain areas . Likewise, it has been hypothesized that 5-HT7R may underlie the persistent structural rearrangements of the brain reward pathways occurring during postnatal development, following exposure to methylphenidate, the elective drug for the treatment of Attention Deficit Hyperactivity Disorder . Accordingly, stimulation of the 5-HT7R in adolescent rats leads to increased dendritic arborization in the nucleus accumbens—a limbic area involved in reward—as well as increased functional connectivity in different forebrain networks likely to be involved in anxiety-related behavior . Changes in dendritic spine formation, turnover and shape occur during the entire life span in response to stimuli that trigger long-term alterations in synaptic efficacy, such as LTP and LTD [73,74,75]. Consistently, it has been shown that the activation of 5-HT7R in hippocampal slices from wild type mice (as well as in Fragile X Syndrome mice, see next paragraph) reverses LTD mediated by metabotropic glutamate receptors (mGluR-LTD), a form of plasticity playing a crucial role in cognition and in behavioral flexibility . Moreover, the acute in vivo administration of a selective 5-HTR7 agonist improved cognitive performance in mice . These results are consistent with the hypothesis that long-term changes of synaptic plasticity, which are a substrate of learning and memory formation, lead to neural network rewiring (Figure 1). Accordingly, the 5-HT7R-KO mice exhibit reduced hippocampal LTP, and specific impairments in contextual learning, seeking behavior and allocentric spatial memory [61,77]. Interestingly, the expression level of 5-HT7R in the hippocampal CA3 region, an area of the brain involved in allocentric navigation, decreases with age , suggesting that the spatial memory deficits associated with aging could be attributed to decreased 5-HT7R activity in this region of the brain. Conversely, another group reported that hippocampal expression of 5-HT7R does not change with age, but exhibits 24 h rhythms . 
This observation should be taken into account in the interpretation of previous findings, as well as in planning future experiments. Several other studies have produced contradictory results related to the involvement of 5-HT7R in memory and attention-related processes [80,81], probably due to experimental differences (animal strain, behavioral tests, compounds and doses, route of administration, etc.). In conclusion, although the role of this receptor on cognitive functions needs to be fully elucidated, it is clear that it modulates various aspects of learning and memory processes. Interestingly, the 5-HT7R is also involved in bidirectional modulation of cerebellar synaptic plasticity, since its activation induces LTD at the parallel fiber-Purkinje cell synapse, whereas it blocks LTP induced by parallel fiber stimulation . These results suggest that the receptor might be involved in motor learning, a cognitive function depending on the activity of cerebellar circuits . Altogether, these findings strongly suggest that the 5-HT7R plays a role in modulating synaptic plasticity and neuronal connectivity in both developing and mature brain circuits, although the molecular and cellular mechanisms underlying this modulation are only partially understood (Figure 1). reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7013427/ More information: Lei Xing et al, Serotonin Receptor 2A Activation Promotes Evolutionarily Relevant Basal Progenitor Proliferation in the Developing Neocortex, Neuron (2020). DOI: 10.1016/j.neuron.2020.09.034
<urn:uuid:bd5eb8e3-78d7-41eb-9773-038f0938652c>
CC-MAIN-2022-40
https://debuglies.com/2020/10/25/the-happiness-neurotransmitter-serotonin-contributed-to-the-evolutionary-expansion-of-the-human-neocortex/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00090.warc.gz
en
0.926139
5,546
2.921875
3
What Are Google Dorks?
By Greg Brown - Aug 26, 2022
Hundreds of unique expressions and idioms enter the world’s lexicon daily. Google Dorking is a phrase that demands further attention. Dorking is a hacking approach applying advanced search operators to identify confidential material not thoroughly protected on a website or server. The information is generally not accessible for public viewing when applying familiar search queries. Hackers use Dorking to exploit security holes and open vulnerabilities of internet assets. Dorking is nothing more than using advanced search syntax to reveal hidden information on public websites. At its core, Google Dorking is a way to use the search giant to pinpoint vulnerabilities, flaws, and sensitive information from websites that can be taken advantage of. Dorking can work on platforms like Bing, Yahoo, and DuckDuckGo, exposing PDFs and forgotten documents still available if you know how to search. As a passive cyber-attack, Google Dorking returns valuable intelligence, such as usernames and passwords, email lists, and personally identifiable financial information. The search giant’s advanced syntax is used to discover the forgotten. Of course, cyber-criminals quickly took advantage of the technology. Dorks had their roots in 2002, when a man named Johnny Long searched for specific website elements using custom queries.
Johnny Long Changed Everything
Johnny Long gained fame as a prolific author and well-known speaker on computer security. Long is known for his expertise in Google hacking and was one of the earliest pioneers in the field. Johnny’s current endeavor is his deep involvement with Hackers for Charity. Long coined Google dorks, which initially referred to “an incompetent or foolish person as revealed by Google.” The term illustrated that a Dork is not a Google issue but rather the result of unintentional misconfiguration on the administrator’s part. Over time, Dorks became synonymous with search queries that located sensitive information and the vulnerability of web applications. Long was an early pioneer in Dorking using the search giant’s syntax. In his prior work with Computer Sciences Corporation, he determined it was feasible to locate servers running unprotected software with specifically constructed search queries. Johnny also realized he could discover servers and websites that openly shared personal financial information, including social security and credit card numbers.
The Birth of GHDB
The efforts by Long grew into the extraordinary Google Hacking Database. The GHDB is an ever-expanding assortment of Dorks used to identify publicly accessible information hidden from view. The hacking database is a categorized index of search engine queries. Each Dork brings to light interesting, and usually sensitive, information made publicly available. The GHDB is part of CVE.org, the government’s global endeavor to define and catalog cybersecurity vulnerabilities. There are currently 182,410 CVE records available for download.
Is Google Dorking Good or Bad?
Dorking lets hackers use the search engine’s syntax to its full potential, exposing confidential information on various public websites and servers. Live security cameras and similar assets can be successfully hacked if they have no passwords. Unprotected electronic devices and sensitive information from the new camera-enabled devices can be accessed easily. If no password or protective entry is enabled for any of these electronics, Dorking is the way to get in.
Google Dorking is not illegal; however, accessing and downloading sensitive data from any government website might be. It is easy for Google, tech companies, and government authorities to figure out what you are downloading and viewing.
Be Careful When Dorking!
If used correctly, Google Dorks can be a valuable resource to web admins and others. Dorks can uncover long-forgotten email addresses and lists. Web admins can use Dorks to find vulnerable files and folders in their websites. There are presently 7,527 Dorks in the GHDB database, with new entries added all the time. Each entry and syntax are unique, offering all types of individualized intelligence. Here are just a few examples of what Google Dorking could look like:
|Dork query||GHDB category|
|Intitle:’olt web management interface’ Portals||Pages containing management Login|
|Inurl:viewer/live/index.html||Various Online devices|
|Intitle:index of”/venv”||Sensitive Directories|
|Inurl:’admin/default.aspx||Pages containing login portals|
|Intitle:” index of” intext:”Apache/2.2.3”||Files containing juicy info|
|Intitle:”Welcome to Windows 2000 Internet Services”||Web server detection|
|Filetype: vsd vsd network -samples -examples||Network Vulnerability|
A Google Dork query is a search string using advanced analytical operators to locate information. Dorks may have criteria embedded in the sequence, narrowing the search. Multiple search parameters can return specific files from a particular website or domain. In 2011, a group of hackers discovered 43,000 social security numbers of people associated with Yale University using Google Dorking. Another event transpired in October 2013, when approximately 35,000 websites were compromised by hackers working with Google Dorking. In August 2014, the United States Department of Homeland Security, FBI, and the National Counterterrorism Center warned against Google Dorking of their sites. Proposals were submitted to measure possible attack parameters and discover the information intruders were accessing. Most online users accept Google as purely a search engine for locating websites, videos, and keeping up with current events. However, Google can be an effective hacking tool in the wrong hands. The search giant does not condone its services being used harmfully. Using Dorks to hack websites and servers illegally is unacceptable to Google.
Get Paid to Hack
Companies such as Google, Apple, and Microsoft pay white hat hackers big bucks to identify flaws in their systems and applications. White-hats search for bugs while running the software in everyday situations. As recently as 2019, Google dished out over a million dollars to white-hats who found an abundance of security defects in the system. Join Google’s Bug Hunting Community and discover company product vulnerabilities. Get started with Bug Hunter University to access tips, brush up on skills, and grow with the community.
Three Steps To Start Bug Hunting
- Prep and gain inspiration from the community or start hunting.
- Share your findings with Google.
- Collect your Bugs as Digital Trophies and earn money from the big G.
A vast number of bugs and viruses affect every computer system on earth. Significant technology-driven companies such as Google and Apple find it highly beneficial to have individuals and communities tracking down these bugs. There are thousands of different viruses that hit computer systems every day. Dorking becomes more sophisticated with each new person that becomes involved.
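As an illustration of how the operators in the table above combine into a single query, here is a hedged example; the domain and file type are hypothetical and are only meant to show the syntax, not to be run against a real target.
# Restrict results to one site, one file type, and pages whose body mentions "password"
site:example.com filetype:xls intext:"password"
# Look for login pages under an /admin/ path on the same hypothetical domain
site:example.com inurl:admin intitle:"login"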
<urn:uuid:fb073ac3-7775-47d6-80dc-15b29e6aed6c>
CC-MAIN-2022-40
https://www.idstrong.com/sentinel/what-are-google-dorks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00090.warc.gz
en
0.907682
1,437
2.5625
3
Aliweb: The World’s First Internet Search Engine
I assume that, like most Internet users, you use a search engine, probably Google. Did you know Nexor invented the first search engine? Back in 1992, Martijn Koster, a software developer at Nexor, built some software to manage and index the emerging Web. His work, called Aliweb, is acknowledged as the world’s first search engine. (The BBC incorrectly cite WebCrawler, but Aliweb was announced in 1993 prior to WebCrawler and presented at the first World Wide Web conference in 1994). Here is the documentation from the time:
The World Wide Web is growing too big to find things easily. It is impossible to keep track of all the services other people provide; they change often, and there are simply too many of them. Therefore ALIWEB proposes that people just keep track of the services they provide, in such a way that automatic programs can simply pick up their descriptions, and combine them into a searchable database. Because automatic programs cannot understand natural language, these descriptions will have to be written in a concise and standard format. So the way ALIWEB works is as follows:
- People write descriptions of their services in a standard format into a file on the Web, by hand or using automatic tools.
- They then tell ALIWEB about this file.
- ALIWEB regularly retrieves all these files, and combines them into a searchable database.
- Anybody can come and search this database from the Web.
Because the database can be updated regularly (currently once a day) the data is very up-to-date. Since ALIWEB does all the work of retrieving and combining these files, people only need to worry about descriptions of their own services; so the information is likely to be correct and informative. And as only these small description files need to be gathered there is little overhead.
As other search services started to emerge, using a spidering technique (invented by Koster at Nexor), Aliweb became one of many such search engines. Their individual coverage was limited, and to find what you wanted you really had to choose the right search engine. To solve this problem, Koster went on to develop CUSI (Configurable Unified Search Interface), which enabled you to send a single query to multiple search engines. The interface is still visible on the WayBack Machine. …then came Google…
NB: www.aliweb.com is a commercial company, and has nothing to do with Nexor or the original Aliweb search engine research.
Author Bio – Colin Robbins
Colin Robbins is Nexor’s Managing Security Consultant. He is a Fellow of the IISP, and an NCSC certified Security and Information Risk Adviser (Lead CCP) and Security Auditor (Senior CCP). He has specific technical experience in Secure Information Exchange & Identity Systems and is credited as the co-inventor of LDAP. He also has a strong interest in security governance, being a qualified ISO 27001 auditor.
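For a sense of what the description files in the quoted documentation looked like, here is a hedged reconstruction of an ALIWEB-style index record. ALIWEB harvested IAFA-style templates from a site-provided index file; the field names, values, and file name shown here are illustrative rather than an exact copy of the historical format.
# A hypothetical entry in a site's index file (e.g. /site.idx) that ALIWEB could harvest
Template-Type: SERVICE
Title:         Example Widget Catalogue
URI:           /widgets/index.html
Description:   A searchable catalogue of widgets offered by this site
Keywords:      widgets, catalogue, example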
<urn:uuid:ea05ee1b-8efa-4df8-843f-9e715cdae9d0>
CC-MAIN-2022-40
https://www.nexor.com/aliweb/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00090.warc.gz
en
0.932799
660
3.203125
3
NAT (Network Address Translation)

NAT (an abbreviation for Network Address Translation) is described in RFC 1631. The NAT feature allows the IP network of an organization to appear from the outside to use a different IP address space (globally routable) than what it is actually using (non-routable private IP addresses).

Thus, NAT allows an organization with non-globally-routable addresses to connect to the Internet by translating those addresses into globally routable address space. Below are the key terms related to NAT which play a pivotal role in IP address translation:

- Inside Local Address
- Inside Global Address
- Outside Local Address
- Outside Global Address

However, let's discuss the fundamentals by breaking up the above terms:

- Inside = Under the control of the company or customer. This resides inside the network.
- Outside = Not under the control of the customer or company; it resides outside the customer network.
- Local = Private addresses under RFC 1918. This refers to what happens on the inside of your network.
- Global = Public, globally routable IP addresses. This refers to what happens on the outside of the customer network.

Now that we understand the basics of each word, let's come back to the four key terms of NAT:

Inside Local Address – Private addresses that the company controls. This is the IP address assigned to an end host on the inside LAN. The IP address is provided by the company itself and does not need to be taken from an IP address authority or service provider. This address is typically an RFC 1918 private address.

Inside Global Address – Public addresses that the company controls. An example is the IP address the ISP provides to the organization; it is a globally routable IP address assigned by the service provider. The inside global address represents one or more inside local IP addresses to the outside world, i.e. it is the translation of an inside local address and is what the outside world sees on the Internet.

Outside Local Address – Private addresses that are outside of the company's or organization's control. This is the address that the inside hosts use to refer to an outside host. The outside local address may be the outside host's actual address or another translated private address from a different private address block. In other words, it is the IP address of an outside host as it is known to the hosts on the inside network.

Outside Global Address – Public addresses that are outside of the company's control. These are globally routable addresses: the public IP address assigned to the end device on the other network to communicate over the Internet. The owner of the host assigns this address.
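As a quick way to internalize the four roles, here is a small Python sketch (the class name is invented for illustration, and the addresses come from the RFC 1918 private ranges and the RFC 5737 documentation ranges) that labels each address in one hypothetical translation entry:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# The three private blocks defined by RFC 1918.
RFC1918 = [ip_network("10.0.0.0/8"), ip_network("172.16.0.0/12"), ip_network("192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    return any(ip_address(addr) in net for net in RFC1918)

@dataclass
class NatTranslation:
    inside_local: str     # inside host's own private address, assigned by the company
    inside_global: str    # public address that represents the inside host to the outside world
    outside_local: str    # address inside hosts use for the remote host (often equals outside_global)
    outside_global: str   # remote host's real, globally routable address

    def describe(self) -> None:
        for role, addr in vars(self).items():
            kind = "RFC 1918 private" if is_rfc1918(addr) else "globally routable"
            print(f"{role:<15} {addr:<16} {kind}")

# Hypothetical entry: inside host 192.168.1.10 appears as 203.0.113.5 when it
# talks to a remote server whose real address is 198.51.100.20.
NatTranslation("192.168.1.10", "203.0.113.5", "198.51.100.20", "198.51.100.20").describe()
```

Only the inside local address falls inside the RFC 1918 space here; the other three are globally routable, which matches the definitions above.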
<urn:uuid:175e9aa7-699b-4b13-9ac6-04a743683baa>
CC-MAIN-2022-40
https://ipwithease.com/nat-understanding-local-global-inside-and-outside-addresses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00090.warc.gz
en
0.92904
609
3.640625
4
An HTTP PATCH flood is a layer 7 DDoS attack that targets web servers and applications. Layer 7 is the application layer of the OSI model. The HTTP protocol is the Internet protocol underlying browser-based requests, and it is commonly used to send form contents or to load web pages.

HTTP PATCH floods are designed to overwhelm a web server's resources by continuously requesting one or more URLs from many attacking source machines that simulate HTTP clients such as web browsers (though the attack analyzed here does not use browser emulation). An HTTP PATCH flood consists of PATCH requests only, unlike other HTTP floods that may mix in request methods such as POST, PUT and GET. When the server's limit of concurrent connections is reached, the server can no longer respond to legitimate requests from other clients, causing a denial of service.

HTTP PATCH flood attacks use standard URL requests, hence they may be quite challenging to differentiate from valid traffic. Traditional rate-based volumetric detection is ineffective in detecting HTTP PATCH flood attacks, since traffic volume in HTTP PATCH floods is often under detection thresholds. However, an HTTP PATCH flood uses the less common PATCH method, so it may be beneficial to review network traffic carefully when witnessing many such incoming requests.

Before sending an HTTP PATCH request, the client establishes a TCP connection with the server using the 3-way handshake (SYN, SYN-ACK, ACK), seen in packets 108, 135 and 136 in Image 1. The HTTP request itself is carried in a PSH, ACK packet.

Image 1 – Example of a TCP connection

An attacker (IP 10.0.0.2) sends HTTP/1.1 PATCH requests, while the target responds with HTTP/1.1 403 Forbidden, as seen in Image 2. While in this flow we see an HTTP/1.1 403 Forbidden response, the response may differ depending on the web server settings.

Image 2 – Example of HTTP packet exchange between an attacker and a target

As seen in Image 3, the capture analyzed is around 3 seconds long and contains an average of 78 PPS (packets per second), with an average traffic rate of 0.06 Mbps (considered low; the attack you are analyzing could be significantly higher).

Image 3 – HTTP flood stats

Analysis of an HTTP PATCH Flood in Wireshark – Filters

"http" – shows all HTTP-related packets.
"http.request.method == PATCH" – shows HTTP PATCH requests.

It is also important to review the user agent and other HTTP header structures, as well as the timing of each request, to understand the attack underway.

Download an example PCAP of an HTTP PATCH flood attack. Note: IPs have been randomized to ensure privacy.
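The display filters above work interactively in Wireshark; for bulk triage of a capture it can be handy to script the same check. Below is a minimal, hypothetical Python sketch (it assumes the third-party scapy library is installed and that the capture is a plaintext-HTTP file named http_patch_flood.pcap, neither of which comes from the original article) that counts PATCH requests per source IP:

```python
from collections import Counter
from scapy.all import IP, Raw, TCP, rdpcap   # pip install scapy

def patch_requests_per_source(pcap_path: str) -> Counter:
    """Count HTTP PATCH requests per source IP in a capture of plaintext HTTP traffic."""
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            # An HTTP request line starts with the method, e.g. b"PATCH /index.html HTTP/1.1"
            if payload.startswith(b"PATCH "):
                counts[pkt[IP].src] += 1
    return counts

if __name__ == "__main__":
    for src, n in patch_requests_per_source("http_patch_flood.pcap").most_common(10):
        print(f"{src:<16} {n} PATCH requests")
```

Because it matches the raw request line, this only works for unencrypted HTTP; for TLS traffic you would have to fall back on connection-level statistics.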
<urn:uuid:3809744e-0437-4947-a544-9e945672d930>
CC-MAIN-2022-40
https://kb.mazebolt.com/knowledgebase/http-patch-flood/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00090.warc.gz
en
0.896607
617
2.75
3
Iris Recognition Used to Secure Borders

Due to the disastrous events of 9/11 and the continued threat of global terrorism, governments worldwide continue to maintain heightened levels of security at national borders. Slowdowns at airports, however, can hinder global trade, so governments are working to balance the need for both streamlined passenger air travel and enhanced public security. To make the process of security more manageable, governments have been turning to iris recognition in order to secure borders and provide a more practical substitute for passports.

What is iris recognition?

Iris recognition has been defined as an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of an individual's eyes, whose complex random patterns are unique and can be seen from some distance. Digital templates encoded from these patterns by mathematical and statistical algorithms allow unambiguous positive identification of an individual. Many millions of people in several countries around the world have been enrolled in iris recognition systems for convenience purposes such as passport-free automated border crossings, and some national ID systems based on this technology are being deployed.

The key advantages of iris recognition are its speed of verification and its extreme resistance to false matches, along with the absolute uniqueness of the iris as a personal identification marker. In the UK, border control points at Heathrow and Gatwick airports use iris recognition and will accelerate its use for the 2012 London Olympics. In Canada and the United States, iris recognition is also used for NEXUS, a bi-national program for pre-approved, low-risk travelers entering Canada or the United States at designated air, land, and marine entry ports.

Iris recognition is arguably the most dependable way to secure border crossings. Iris scanning is an ideal security measure since the iris does not change its form or shape over one's lifetime and since recognition is not defeated by glasses or contact lenses. And because the technology is non-invasive, the iris can be scanned from mere centimeters to a few meters away.
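The "digital templates" mentioned above are, in one widely used approach (an illustrative assumption here, since the article does not describe a specific algorithm), binary iris codes plus validity masks that are compared by their normalized Hamming distance, i.e. the fraction of usable bits that differ. A minimal NumPy sketch with synthetic data:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of mutually usable bits on which two binary iris templates disagree."""
    usable = mask_a & mask_b                 # bits not hidden by eyelids, reflections, etc.
    differing = (code_a ^ code_b) & usable
    return differing.sum() / usable.sum()

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)        # stored template
noise = (rng.random(2048) < 0.05).astype(np.uint8)          # a little acquisition noise
same_eye = enrolled ^ noise                                  # fresh capture of the same iris
other_eye = rng.integers(0, 2, 2048, dtype=np.uint8)        # template from an unrelated iris
mask = np.ones(2048, dtype=np.uint8)                         # pretend every bit is usable

print(hamming_distance(enrolled, same_eye, mask, mask))      # small: same iris
print(hamming_distance(enrolled, other_eye, mask, mask))     # near 0.5: unrelated patterns
```

Distances between unrelated irises cluster tightly around 0.5, which is what gives the method its extreme resistance to false matches.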
<urn:uuid:f1a1fab9-92b1-4861-9845-209fac89a906>
CC-MAIN-2022-40
https://www.biometricupdate.com/201205/iris-recognition-used-to-secure-borders
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00090.warc.gz
en
0.930958
415
3.3125
3
The Supreme Court might have stirred up a bigger problem than it settled when it ruled last June that file-sharing networks such as Grokster could be sued if their members pirated copyrighted digital music and video.

Since then, some programmers have announced they would pursue so-called darknets. These private, invitation-only networks can be invisible to even state-of-the-art sleuthing. And although they're attractive as a way to get around the entertainment industry's zeal in prosecuting digital piracy, they could also create a new channel for corporate espionage, says Eric Cole, chief scientist for Lockheed Martin Information Technology.

Cole defines a darknet as a group of individuals who have a covert, dispersed communication channel. While file-sharing networks such as Grokster and even VPNs use public networks to exchange information, with a darknet, he says, "you don't know it's there in the first place." All an employee has to do to set one up is install file-sharing software written for darknets and invite someone on the outside to join, thus creating a private connection that's unlikely to be detected. "The Internet is so vast, porous and complex, it's easy to set up underground networks that are almost impossible to find and take down," says Cole.

He advises that the best—and perhaps only—defense against darknets is a combination of network security best practices (such as firewalls, intrusion detection systems and intrusion prevention systems) and keeping intellectual property under lock and key. In addition, he says, companies should enact a security policy called "least privilege," which means users are given the least amount of access they need to do their jobs. "Usually if a darknet is set up it's because an individual has too much access," Cole says.
<urn:uuid:00c9ed8d-216b-4c73-be20-c036d2c636e5>
CC-MAIN-2022-40
https://www.cio.com/article/252375/compliance-file-sharing-spies-in-the-server-closet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00090.warc.gz
en
0.956019
392
2.53125
3
Originally, spam emails got their name from a canned meat product, SPAM, as it was referenced in Monty Python, the iconic British comedy show. Despite its comedic origins, spam is no joke.

Spam, also known as junk mail, can take many forms. However, most often spam refers to unsolicited bulk email. In addition to spam emails, similar messages on social media and instant message services are a form of spamming. Spam emails are nowadays so common that most, if not all, internet users have encountered them in one form or another. Because cyber criminals can send out spam messages in bulk, it is enough if only a fraction of recipients get fooled, even when most people recognize the messages as spam.

As people get better at recognizing messages as spam, criminals also adapt and develop more nuanced methods for deceiving their victims. Not all spam messages are as obviously fake as the so-called Nigerian prince scams, so do not think that only less informed internet users can fall for them. As spam emails are the most common type of junk mail, let's focus on the different ways criminals use email communication to prey on their victims. Knowing the different methods of spam allows you to better recognize junk mail and avoid getting fooled by it.

The goal of phishing emails is to get you to download an attachment file, click a link that takes you to a harmful website or reveal sensitive information to the sender. Phishing messages can be disguised as something that the victim might find interesting, and in many cases, the email looks like it's coming from a reliable sender. Smishing messages, on the other hand, have much of the same goal but use text and instant messages instead of email.

Malware spam or malspam

As the name suggests, malware spam is used to infect the recipient's device with malware, such as trojans, spyware or ransomware. The viruses hidden in malware spam can be disguised as attachments, including PDFs, text documents and presentations.

In order to trick you, the senders of spam try to make it seem like you have received a message from a trustworthy and legitimate source. By masking the real sender's identity, cyber criminals can make spam messages look less suspicious to the recipient. Even if it looks like you have received an email from a well-known authority, social media service, the post office or a bank, always make sure that the message is not fake.

IT support scams

Speaking of disguising the message's sender, a common method is to make it look like the email is coming from IT support. This type of junk mail can be sent in the name of large companies, such as Apple and Microsoft, claiming that there is some technical issue or that your account has been compromised. Your account may be locked if you don't click a link in the message, or so the message claims. This way cyber criminals can combine a legitimate source with a sense of urgency in order to deceive you. The alarm bells should be ringing especially if IT support asks you for sensitive information, such as passwords and online banking credentials.

Spam emails can take the form of ads as well. Many of us have subscribed to different online newsletters and receive legitimate ads via email daily. If the ad's offer seems too good to be true, it's most likely a scam. Subscription traps are a way to mislead consumers and trick them into making long-term subscriptions. The victim of a subscription trap may not even know they subscribed to something until they receive a bill for their supposed order.
The terms of such subscriptions are vague or non-existent, and canceling the subscription is made to be as difficult as possible.

Spam is a common issue for both big and small targets, organizations and individuals. No matter how small someone may look as a potential victim, they can receive spam. By using bulk email, cyber criminals can target large quantities of victims with little effort. On top of that, spamming is cheap, considering the potential returns for the perpetrators. So don't think that you are not interesting enough as a target of online spamming. Here are some reasons why both individuals and larger targets should take spam and junk email seriously.

Many internet service providers (ISPs) and email services have ways to filter and block spam. However, this may not be enough, and other measures are needed as well. Luckily there are some things you can do to direct junk mail into your spam folder instead and recognize spam that has ended up in your inbox.

Spam is far from the only threat online, and you need to be prepared for everything cyber criminals may come up with, whether on mobile or desktop. F‑Secure TOTAL offers all you need to stay safe online. Do your online shopping, browse in private, manage all of your passwords and stream content without worry. Try F‑Secure TOTAL for free today!
<urn:uuid:ae0c953d-9eb9-483f-bca8-48b3474a28da>
CC-MAIN-2022-40
https://www.f-secure.com/en/home/articles/spam
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00090.warc.gz
en
0.957442
1,021
3.21875
3
Army Research Laboratory (ARL) has developed a large set of thermal face data to support facial recognition in dark environments, FedScoop reported Monday. ARL collaborated with Booz Allen Hamilton, West Virginia University, the University of Nebraska-Lincoln and Johns Hopkins University to build the ARL Visible-Thermal Face Dataset. The team used a long wave infrared camera to capture over 500,000 thermal images of 395 faces. Thermal data on a subject's head pose, expressions and eyeglasses serve as references to reproduce physical conditions. The Army expects the dataset to allow for facial recognition in scenarios with no sunlight, through the help of artificial intelligence. The service branch seeks to further develop thermal identification despite the lower quality of thermal images compared to traditional ones.
<urn:uuid:d3db8dc8-811f-493f-9a78-ef21b608b937>
CC-MAIN-2022-40
https://executivegov.com/2021/01/army-led-multisector-team-creates-dataset-for-thermal-facial-recognition/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00290.warc.gz
en
0.866514
158
2.671875
3
Whether in times of prosperity or instability, it is the responsibility of those working in the public sector to communicate transparently with constituents and provide updates that reinforce community safety, support and education—and it is in the best interest of these institutions to find ways to involve the public and make them feel heard. As local and state governments look to create an open dialogue of communication surrounding today's biggest social, environmental and political issues, updating the public in real time has become a way for leaders to steer the conversation forward.

There is a new surge of participation in local, state and federal conversations when it comes to public policy. The public is feeling empowered to be heard and taking different avenues to get their voices and ideas across to our community leaders. As we've seen over the past month, protestors are marching to amplify their voices on social issues. What's happening behind the scenes, powering many of these marches, is the use of video conferencing tools, like BlueJeans, to both plan and follow up with city councils, police commissioners' offices and local leaders—and these institutions are responding in kind. The City of Santa Monica, for example, held a virtual celebration of Juneteenth over BlueJeans Events called "Moving Forward Together" to have a panel discussion on the inequities Black Americans and Communities of Color have faced.

Prior to COVID-19, such public meetings would generally have taken place in person or via a telephone conference line, making it difficult for the public to sense what is and isn't getting through to elected officials. But with COVID-19 abruptly bringing video conferencing to the forefront as a safer way to communicate with family, peers and constituents alike, many of these types of forums and hearings have moved over to the virtual world. Video conferencing in the public sector is changing the game when it comes to making social change for the following reasons:

1. Easier to Participate

a. Video conferencing gives people the ability to participate in something they'd otherwise not be able to. Being able to join a public hearing virtually from the comfort of your home gives professionals, parents and youth a more accessible way to participate wherever they happen to be. Instead of taking off work and driving to a public hearing, anyone can join online and become informed. Tools like BlueJeans Events can simply be accessed by common web browsers. Once an event is scheduled, organizers simply share URL join links with moderators, presenters and attendees for one-click entry.

b. More people may feel inclined to voice their opinion who otherwise wouldn't have. Standing up in public and speaking to your peers can be frightening and counter-productive to social change. With video conferencing, individuals can join and speak freely in a comfortable setting, increasing overall engagement.

c. Video conferencing can provide organization to mass conversations. While community participation opens a two-way dialogue between speakers and audience members, additional control is needed to keep the public message free from rogue commentary. BlueJeans Events comes with standards-based administrative safeguards that prevent damaging activity to any live event, as well as new features to maximize control and management of public streaming events.

2. Video Recording Equals Longer Impact

a. Visuals play a powerful role in our modern digital world.
By having public hearings recorded, there can be a longer-lasting impact and accountability for what is discussed. Video clips can be shared online, picked up by other media platforms and shared with the public. This ability to capture and circulate exactly what happened gives more power to public opinion. The more voices that are heard by more members of the community, the more accountability there is for our public officials.

3. Quicker Actions

a. Video conferencing allows for quicker actions to be taken. Especially in the current environment, where in-person meetings should be limited, it's possible for decisions to be pushed until a "live meeting" can happen. With video conferencing, all impacted parties can join and discuss, like in person, the appropriate next steps. Things like COVID-19 won't be slowing down the process for change.

Learn more and download your free 30-day BlueJeans Events trial today.
<urn:uuid:efb663ee-b2d0-43d0-a506-4f072af5e162>
CC-MAIN-2022-40
https://www.bluejeans.com/blog/video-conferencing-tool-social-change
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00290.warc.gz
en
0.93846
875
2.578125
3
Phishing attacks are growing more sophisticated, and spam filters aren't effective against every kind of attack. Old-style spammers sent the same message to multiple people, hoping some of them would take the bait. The newer, more dangerous approach targets individuals. It's called spearphishing, whaling, or high-stakes phishing.

A spearphishing message is built on knowledge of the person it's aimed at. It includes personal details, and it pretends to be from someone the victim knows. It's typically aimed at high-ranking executives or other people who control valuable assets. The messages are cleverly tailored, and they can fool people who would laugh at a traditional attempt to trick them.

The aim may be to get account passwords by tricking the target into using a fake login page. The attack could also be designed to place ransomware or spyware on the victim's computer. Some messages urge the target to make a monetary transfer, which goes to the perpetrator's bank account and can't be recovered.

Filters and anti-malware software will reduce the risk, but they don't eliminate the problem. This is why it's so important for employees, including top executives, to get training in spotting those messages and avoiding their traps.

The Importance of Training

The leading cause of computer security incidents is human error. Breaking into a network by exploiting software weaknesses is hard. Tricking people into unlocking the door is a lot easier. Criminals focus on that, more than any other method, for taking control of systems and getting at private data. They keep making their deceptions more elaborate and subtle.

When people fall for these tricks, expensive breaches may result. For example, a company could immediately lose confidential information and monetary assets. Furthermore, fixing the problems takes time away from productive work, as systems could be unusable for a while.

Ransomware attacks are among the worst. If a user receives a fraudulent email message and opens its attachment, the malware in it could encrypt important files on the machine. The company then has to pay to get the files restored (which often doesn't get anything back) or spend time recovering them.

As spearphishing messages become more clever, more people fall prey to them. If they don't have training in recognizing the latest tricks, they may mistake a fraudulent message as one from a manager or customer and throw caution away.

Senior management needs to receive training along with everyone else. They're the most lucrative targets. A message to them might seem to come from a person they regularly work with. "Some unexpected expenses came up. Please wire $5,000 to bank account 12345678. I need this right away if possible." To a busy executive who trusts the employee, this could seem like a simple matter to be squared away later. But, unfortunately, the bank account belongs to a criminal, and any money sent to it will disappear for good.

Training Promotes a Culture of Security

There is good training, and there's bad training. The creators of effective training packages understand people and how they learn. Explaining mistakes leads to improvement; making people feel bad leads only to frustration. Employees need to understand that phishing is a serious matter, but they should feel encouraged to improve, not lectured for their stupidity.

Promoting a culture of security gives the best results. Employees should consider themselves partners in keeping their systems safe.
There should be a sense of satisfaction in deleting or reporting a deceptive message.

How managers use email is part of the picture. If they write sloppy messages, employees will get used to them and not question badly written phishing mail. If they claim everything they write about is urgent, they'll desensitize employees to fraudulent messages insisting on an immediate response. If they complain when employees ask for verification of unusual requests, employees will learn not to be skeptical. People who get email need to be encouraged to think about every message they get and to ask questions if something doesn't seem right.

The Latest Phishing Tricks

Today's phishing email uses tricks that were rare or unknown a couple of years ago. Training courses are essential in keeping employees up to date as these tricks continue to evolve.

Secure websites: A few years ago, a link to a secure (HTTPS) website was an indicator that the message came from a trustworthy source. The situation has vastly changed. Anyone can get a TLS certificate for free and set up a secure site without much effort. All that an HTTPS link proves is that what you're seeing is what the server sent you and that no one can intercept the communication. It says nothing about the safety of the content.

Social media and other channels: An increasing amount of phishing goes through channels other than email. It can come through SMS messages, messaging applications, and social media communications. People aren't as used to getting fraudulent requests from those channels, and they aren't always on guard about what they get.

URL impersonation: This is a very old trick, but some variations of it have become more common lately. Lookalike characters are one method. The Greek letter omicron (ο) is hard to distinguish from the letter o, but some registrars allow it in a domain name. Using a subdomain that looks like a well-known domain, such as microsoft.com.crookedsite.com, is another way to fool people. On a cell phone screen, users may only see the start of the domain and not realize it's a trick.

Redirection in attachments: Attachments in an untrusted email are dangerous in several ways. One of the latest tactics is to put a meta redirection header into one. When the victim opens it in the browser, it will take the victim to a malicious website. Since the message doesn't link directly to the rogue site, filters that look for dubious links won't catch it.

Security Software Helps But Isn't Enough

Email filters will keep many phishing messages out. Anti-malware software reduces the chances that opening one can cause harm. Authentication with SPF, DKIM, and DMARC catches impersonation attempts. However, these protections won't stop all spearphishing messages. Content that is clever enough to fool people will fool software as well. Dangerous links can be disguised. Not everyone uses authentication. Lookalikes won't look alike at the bit level, but obfuscation can make it hard for code to catch them. Anti-malware software won't always catch zero-day exploits.

Human judgment is needed as well. If something doesn't look right about a message, the recipient should know what to do. A request by a different channel for verification can clear up the situation, and usually, it isn't necessary to act instantly. Training will help employees, including executives, to spot messages that don't look right. Test messages will help keep their skills sharp and identify types of phony messages they weren't previously aware of.
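To make the URL-impersonation tricks above concrete, here is a minimal, hypothetical Python sketch (the helper name, the brand list and the example domains are invented for illustration; a production check would consult the Public Suffix List and the Unicode confusables data rather than this simplified logic) that flags lookalike characters and brand-as-subdomain patterns:

```python
import unicodedata

TRUSTED_BRANDS = {"microsoft", "apple", "google"}   # example list; adjust to the brands you care about

def audit_domain(domain: str) -> list:
    """Return human-readable warnings for common URL-impersonation tricks."""
    warnings = []
    labels = domain.lower().rstrip(".").split(".")

    # 1. Non-ASCII lookalike characters, e.g. a Greek omicron standing in for 'o'.
    for ch in domain:
        if ord(ch) > 127:
            warnings.append(f"non-ASCII character {ch!r} ({unicodedata.name(ch, 'UNKNOWN')})")

    # 2. A well-known brand used only as a subdomain of some other registrable domain,
    #    e.g. microsoft.com.crookedsite.com.
    registrable = ".".join(labels[-2:])   # naive guess; real code would use the Public Suffix List
    for brand in TRUSTED_BRANDS:
        if brand in labels[:-2] and brand not in registrable:
            warnings.append(f"brand '{brand}' appears only as a subdomain of {registrable}")

    return warnings

for d in ["microsoft.com", "micr\u03bfsoft.com", "microsoft.com.crookedsite.com"]:
    print(d, "->", audit_domain(d) or "no obvious red flags")
```

Run against microsoft.com, micrοsoft.com (with a Greek omicron) and microsoft.com.crookedsite.com, only the first comes back clean.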
When everyone adopts good email safety practices, a business has fewer security incidents and operates more confidently.

Cyber Security Awareness Training

At Accent, we understand that the best defense includes both technical and non-technical layers. Contact us to learn about the training options that are available to your business.
<urn:uuid:019d8907-6f10-4d17-a7ba-83aea94b6587>
CC-MAIN-2022-40
https://www.accentonit.com/blog/train-team-spot-high-stakes-phishing?__hstc=45788219.b80440fb305ee1b85b951b599bcb8034.1657655156277.1657655156277.1657655156277.1&__hssc=45788219.1.1657655156277&__hsfp=1544216235
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00290.warc.gz
en
0.956615
1,490
3.203125
3
The Department of Energy has selected 30 projects to receive funding worth a total of $57.9 million from the Advanced Manufacturing Office to decarbonize the U.S. industrial sector and advance clean energy manufacturing practices across the industry. DOE said Thursday the research projects, led by industry partners, universities and the national laboratories, will result in the development of technological innovations designed to support the Biden administration’s goal of reaching net-zero emissions by 2050. “Decarbonizing American industry while expanding our capacity to manufacture clean energy technologies is the surest way to meet the nation’s climate and economic goals,” said Energy Secretary Jennifer Granholm. The research projects will focus on three main topics: manufacturing process innovation, advanced materials manufacturing and energy systems. In a separate announcement, DOE said 10 projects were awarded a total of $25.8 million to develop next-generation manufacturing processes for reducing carbon emissions of energy-intensive industries. Fourteen projects were selected for a $24.6 million funding grant to develop and produce materials designed to reduce the operating costs of wind turbines and extend the life of hydrogen components. The remaining six research projects will create “innovative manufacturing processes for lithium-ion batteries to enhance safety and reduce cost and time-to-market” under a $7.5 million funding grant.
<urn:uuid:02350f2c-52de-4070-8c89-053e15fa69c8>
CC-MAIN-2022-40
https://executivegov.com/2022/06/doe-awards-58m-in-funding-to-advance-clean-energy-technology-development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00290.warc.gz
en
0.927388
276
2.59375
3