IoT is the means by which physical objects are connected to the cloud. The range of applications is broad: individual sensors, devices, vehicles, machines, buildings, factories, and entire cities. These connections unlock great potential, as the data collected from physical objects can be used to increase knowledge, enable new business models, and drive the profound digital transformation we are currently witnessing around the world.

Discussions of IoT often concentrate on the process of data collection, focusing on what is gained by transferring data from assets to the cloud. However, important as that is, the opposite data flow (from the cloud to the assets) offers as much value, and in some cases even more. By sending instructions, configurations, and entire firmware packages to the assets, organizations can remotely, and if desired automatically, control each and every asset. These two key functional data flows of IoT (collection/transfer and control) underpin today's sustainability efforts. In fact, scaling the concrete sustainable solutions that will change the 21st-century panorama will not be possible without IoT. Sustainability needs IoT, and IoT needs sustainability: sustainable solutions drive the development of IoT, which in turn needs to be (or become) sustainable to operate.

Sustainable Energy Production

In Europe, we too often see news headlines such as "Cities announce a halt to approvals of new construction projects due to limitations in electricity grid capacity" or "The shutdown of a large gas-fired power station has been postponed due to rising electricity demand and insufficient resources." Overloaded electricity grids and the need to maintain large fossil fuel-based electricity production are just two of the many ways in which the energy transition is constantly challenged. As straightforward as it may sound, IoT can alleviate many of these problems, or even solve them definitively.

A closer look reveals that grid limitations are caused by redundancy requirements. In other words, grid operators prefer to maintain a certain spare capacity in order to meet agreed reliability standards. If, for example, a substation goes down and part of the grid is out of service, the spare capacity of the remainder of the grid should absorb the extra load so that end users do not experience any outages. Statistical analysis of potential equipment outages is part of the model engineers use to determine spare capacity requirements, and it can lead to diverse outcomes. By simply adding IoT to the transformers and switchgear in substations, the state of the equipment can be monitored constantly, delivering real-time data. With smart processing of the collected data, some outages can be predicted and avoided, or handled as scheduled rather than unscheduled outages.

A similar reasoning applies to the difficulty of keeping to pre-established schedules for closing fossil-fuel power plants. Fossil-fuel plants provide a stable source of energy, while renewable power plants can be less predictable because they rely on environmental conditions (sun, wind, and so on). Here too, IoT and data-driven projections can greatly improve the renewable energy picture, leading to a more stable, or at least more predictable, energy scenario.

Every year, buildings consume about 40% of the energy produced worldwide.
That being so, the energy transition towards a sustainable future is not feasible without addressing the high energy consumption of buildings. There are many initiatives to improve this, one of which is the introduction of Near Zero Energy Buildings, or NZEBs. These buildings produce electricity through PV installations and other technologies. At times this production is sufficient to satisfy the building's electricity needs, or even exceeds them, with the surplus delivered to the electricity grid. Since the building relies on generators that depend on weather conditions (such as solar panels), there are also times when it needs to draw electricity from the grid. If, over the course of a year, the surplus energy fed into the grid balances the energy drawn from it, the building is considered "Zero Energy". Conceptually, this may seem straightforward; in practice, it is not. Building construction is often cost-driven and involves many contractors, subcontractors, equipment suppliers, and others. In addition, strict regulations and vested interests make it difficult to innovate in construction processes. With IoT, the steps towards implementing the solutions needed to create NZEBs become easier to take. IoT solutions provide more information about the real-time state of a building and its equipment, enabling a higher degree of control. Moreover, IoT solutions can be added to a building even after its completion. As one of many examples, a zero-energy wireless sensor network can be added to an existing building to enable demand-driven heating, cooling, and ventilation.
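To make the annual balance concrete, here is a minimal Python sketch of the "Zero Energy" check described above. The monthly figures are hypothetical; a real assessment would use metered import/export data.

```python
# Minimal sketch of the annual NZEB balance described above.
# Each tuple is (kWh drawn from the grid, kWh fed into the grid) for one month.
monthly_kwh = [
    (820, 150), (700, 210), (560, 380), (410, 520),
    (300, 640), (250, 700), (260, 690), (280, 650),
    (390, 500), (550, 350), (720, 200), (830, 130),
]

imported = sum(drawn for drawn, _ in monthly_kwh)
exported = sum(fed for _, fed in monthly_kwh)
net = imported - exported  # <= 0 means the annual balance is met

print(f"Imported: {imported} kWh, exported: {exported} kWh, net: {net} kWh")
print("Zero Energy balance met" if net <= 0 else "Zero Energy balance missed")
```

In this hypothetical year the building imports 950 kWh more than it exports, so it would miss the target and the shortfall would point to where demand-driven control could help.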
IoT can also boost agriculture's efficiency and bring agriculture closer to people and markets. By measuring soil conditions, collecting real-time data on specific areas, and taking advantage of accurate weather predictions, irrigation processes can be properly optimized. For example, just by switching on the irrigation system only when needed, and for only as long as needed, we can achieve a significant improvement in crop production and a positive environmental impact through reduced water consumption. The key is to understand when to act, rather than standardizing pre-defined actions.

Agricultural products tend to be transported over long distances to reach distribution points. It is essential to re-evaluate this pattern, especially when trying to reduce a crop's carbon footprint and work towards a more sustainable future. One of many solutions to this issue is making farming more viable near urban areas. This trend is already taking hold through urban farming and the increasing popularity of local farmers' markets. These are usually small producers that inevitably compete against mega farming industries, and that therefore need concepts like precision agriculture to help develop their crops. By using advanced sensing technologies, real-time data collection, and advanced data processing, production processes in these smaller farming environments can be optimized, allowing a more competitive cost-to-product ratio.

Although the energy use of IoT devices pales against the energy savings they can deliver, it is essential to ensure that IoT devices do not become the sustainability problem of the future. Encouragingly, significant advances have been made, and concepts like energy harvesting are becoming more popular.

In the last few years, several IoT sensors and devices have been adapted to be powered entirely through the conversion of light, temperature differences, or movement, or to be energized through radio signals. These solutions show that optimal configurations can be found through efficient energy transformation, smart power conditioning, and optimization of use cycles. The use of these and other energy-neutral IoT solutions will accelerate in the coming years as the technologies advance and are further optimized.

Another impact IoT has on the environment relates to the large amount of electronics needed for data transmission and hardware systems. Almost all IoT sensors and devices use electronic components that inevitably create environmental pressure at the end of their lifecycle. Especially in a rapidly changing field like IoT, economic lifecycles tend to be short as new solutions replace older ones. Biodegradable electronics could provide significant benefits in this regard. They are still limited to laboratories and small-scale testing, but as with other recent tech innovations, they are expected to find their way into commercial products in the near future. In agriculture, for example, biodegradable sensors and IoT devices could be distributed throughout fields without having to be recovered and without leaving a negative impact on the environment.

Reducing Energy Consumption

You do not have to produce energy that you do not consume. In the end, the largest gains can still be made where energy is wasted. In many cases, lights, ventilation systems, Wi-Fi routers, cooling systems, and many other energy consumers are operated with simple control systems that limit demand-driven use. Similarly, maintenance plans with site visits are often scheduled at regular intervals without considering the actual needs of an asset. By adding IoT communication solutions to these assets, and by bringing the information that is "hidden" within them to online dashboards and portals, we can substantially improve the experience and increase sustainability efforts while simultaneously reducing costs.
As we all know, the COVID-19 pandemic has affected every business in one way or another and has created many new risks. Organizations have been forced to shut down their operations, and employees are working from home. While industries are busy handling the COVID-19 crisis, cyber criminals have been emboldened to increase attacks on vulnerable organizations. Employees, working remotely on their laptops, are the first line of defense, and this has exposed them to hackers using social engineering techniques to steal corporate credentials. As per a recent NASA report, phishing email scams have doubled. This article examines the different types of attack vectors cyber criminals are using in the COVID-19 era. We will also learn how we can protect our organizations from these risks.

When we think of cybersecurity, we think of its components: people, processes, and technology. In the current scenario, all three components are vulnerable and are being compromised by cyber criminals in some way. If we analyze recent attacks, almost all attackers are using coronavirus themes, including business email compromise (BEC), credential phishing, malware, and spam email campaigns. The most popular and effective attack is credential phishing. Here is a list of emerging cybersecurity risks and attack vectors based on recent cybersecurity attacks and related activities during COVID-19.

Phishing has always been the most basic and widely used attack vector, but in the current pandemic, mass attacks on employees have become even more popular, as remote workers are perceived as low-hanging fruit for hackers. These fraudulent emails contain logos and other images associated with the Centers for Disease Control (CDC) and the World Health Organization (WHO). To lure their targets, the emails include links to items of interest, such as "updated cases of the coronavirus near you." These links redirect users to landing pages that look legitimate, but the sites are often malicious and may be designed to steal email credentials. Example of credential phishing: "COVID-19 Infected Our Staff".

Industry-specific targeting (targeting affected industries): malicious emails direct recipients to educational and health-related websites laced with malware, infecting their systems. Example of hidden malware: "Your Neighbors Tested Positive".

False advice, cures, and donation scams: in this difficult time, many people have come forward to help the needy and poor. Hackers are using this as an opportunity to send phishing emails asking for donations, disguised as a recognized body or NGO.

Spoofing has been another popular attack vector in recent times. There have been cases of hackers spoofing emails from trusted sources, such as government bodies and health agencies, pretending to offer coronavirus tips and advice, and luring victims into clicking on an embedded tip sheet, which infects their systems with malware or, in some instances, encrypts them with ransomware. Tip: grammatical mistakes are the most glaring clues hinting at malicious intent and can commonly be seen in email cyber-attacks impersonating a reputable source or organization. In these hard times, people are worried about their health and are looking for information from many sources to stay safe.
Hackers are using techniques such as social engineering and spear-phishing scams, both well-known attack vectors for achieving business email compromise. These attacks are attempts made through email (phishing), voice calls (vishing), or SMS (smishing) to fool people into revealing sensitive information. Social engineering takes other forms as well. Many cases of malicious fake COVID-19-related Android applications have been reported; installing these apps gives attackers access to smartphone data or a window to encrypt devices for ransom. More than 100,000 new COVID-19-related web domains have been registered, and they should be treated with suspicion even though not all of them are malicious.

To work remotely, employees are using many different tools, and with all these tools they are expanding the digital attack surface. CISA (the Cybersecurity and Infrastructure Security Agency) has issued an alert regarding vulnerabilities caused by remote access to organizations' computer systems. A proliferation of cloud-based apps makes it easier for bad actors to exploit holes in networks.

Example: Zoom security vulnerabilities. As the coronavirus pandemic forced millions of people to stay home, Zoom suddenly became the video meeting service of choice: daily meeting participants on the platform surged from 10 million in December to 200 million in March. This surge in users has also caught cyber criminals' attention. Recently (April 16), two massive new Zoom hacks were uncovered. In one incident, a security researcher found a way to access -- and download -- a company's videos previously recorded in the cloud through an unsecured link. The researcher also discovered that previously recorded user videos may live on in the cloud for hours, even after being deleted by the user. Even the login credentials of Zoom users are being sold on the dark web. And it is not only Zoom: many other apps being used by the remote workforce are putting organizations at risk.

It is important for businesses and employees to know and follow basic cybersecurity hygiene. All organizations should practice good cyber hygiene and ensure that their governance and enterprise risk management (policies, procedures, and controls) are effective and enforced appropriately for the remote workforce. Below is a list of tips to help organizations strengthen their security hygiene and prepare for the challenges and risks of COVID-19 cyber-attacks. Organizations can also share with employees working from home the work-from-home checklist issued by INTERPOL. We have divided our cybersecurity tips into three sections: Precaution, Identification, and Action.

1. Precaution: As rightly said, prevention is better than cure. This is true for cybersecurity as well, especially in the current scenario. Employees should know the do's and don'ts while working remotely.

2. Identification: There are ways employees can identify social engineering attacks (whether an email, link, or attachment is malicious or not). Organizations should communicate with and train their employees so that they can differentiate between malicious and authorized emails, links, and attachments. This will help organizations protect their employees from most of the traps used by cyber criminals.

Spoofed email signals: these addresses look like the original email addresses of an authorized entity, with slight character changes that make them appear visually accurate — a .com domain where it should be .gov, for example. To handle this, before opening or clicking on any link in an email, look for slight changes in the addresses in the "From:" and "To:" fields.
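As a minimal illustration of this kind of check, the Python sketch below flags sender domains that closely resemble, but do not exactly match, a list of trusted domains. The domain list and similarity threshold are assumptions for the example; this is a teaching aid, not a substitute for a real email security gateway.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of domains this organization trusts.
TRUSTED_DOMAINS = {"cdc.gov", "who.int", "metricstream.com"}

def is_suspicious(sender: str, threshold: float = 0.7) -> bool:
    """Flag a sender whose domain is similar to, but not exactly, a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match is not flagged by this check
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious("alerts@cdc.com"))    # True: lookalike of cdc.gov
print(is_suspicious("updates@who.int"))   # False: exact trusted domain
```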
3. Action: Once users are skeptical and suspect a malicious attack vector, they should responsibly escalate to the concerned team so that the team can communicate the threat to other employees and keep them out of the cyber criminals' trap. This step is very important because not everybody in an organization is equally aware of cybersecurity, and the weakest link can help cyber criminals breach an organization. Security teams should also deploy tested, robust tools and technologies to make sure employees are protected from most attack vectors: implement multi-factor authentication for VPN access, IP address whitelisting, limits on remote desktop protocol (RDP) access, and added scrutiny of remote network connections, and keep patching all access management software in a timely manner.

In conclusion, just following this checklist is not enough. Organizations should make sure that cybersecurity best practices are built into their culture; it is the entire organization's responsibility to fight cyber criminals. During this crisis, besides cybersecurity, organizations are grappling with many other kinds of risk: workforce effectiveness, other operational risks, third-party and vendor risks, supply chain risks, and so on. Organizations must monitor all of these risks and make sure they are identified and managed without impacting business performance or corporate reputation. At MetricStream we understand the challenges organizations are facing in this pandemic and have launched a COVID-19 solution to help organizations stay resilient through this crisis. With this solution, organizations can manage information, processes, and responses, and make better real-time decisions that affect employees, business leaders, customers, vendors, and partners.
NSF Connects Scientists in Antarctica

NSF faces multiple challenges as it provides communications and IT services to three research centers, two research vessels, and field camps in Antarctica, where scientists from educational institutions and multiple federal agencies conduct research on biology, climate change, geology, and astrophysics. On a basic level, many consumer electronics are not designed to work below freezing. On Earth's coldest continent, winter temperatures can drop to 80 below or worse. If devices are left out too long, circuit boards can shrink and components can freeze and fail; the dry-as-a-desert air also makes static electricity an ongoing problem.

Limited bandwidth is an equally serious issue. Because relatively few people need the service, companies don't find it cost-effective to offer much satellite service, and running cable would cost hundreds of millions of dollars. But in 2010, NSF and the National Oceanic and Atmospheric Administration convinced an Australian satellite operator to provide internet service. NSF's share of the bandwidth was 18 Mbps for downloads and 10 Mbps for uploads. That's not blazing fast for McMurdo Station, the largest of the three U.S. research centers, which houses on average 900 to 1,000 bandwidth-hungry researchers and staff from NSF, NOAA, and NASA during the peak of the austral summer operating season. "We have a network link that is essentially the same speed that a family would get in rural America," says Smith of NSF's Office of Polar Programs. "We do all the things we are supposed to do: filter content and block streaming video," he says.

How NSF Prioritizes Network Traffic in Antarctica

Internal IT traffic gets top priority at McMurdo. NSF has hired a contractor to manage day-to-day IT operations, including overseeing a large data center with Hewlett Packard Enterprise DL380 servers and a network made up of Cisco routers and switches. When onsite IT staff can't fix a problem, the contractor's IT staff at its Colorado data center remotely connects to networking equipment and servers to troubleshoot. Real-time traffic, which includes Cisco Voice over IP and Skype videoconferencing, gets second priority. While NSF has locked down the Skype protocol to protect bandwidth, researchers share one Skype session for science operations and educational outreach with K–12 schools and colleges.
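The tiering described here is a classic quality-of-service pattern. As a rough, purely illustrative Python sketch (the traffic classes beyond the two named in the article, and their exact ordering, are assumptions, not NSF's actual configuration), a strict-priority scheduler drains higher tiers first:

```python
import heapq

# Strict-priority tiers: lower number = higher priority.
# "internal_it" and "real_time" mirror the article; the rest are assumed.
PRIORITY = {"internal_it": 0, "real_time": 1, "science_data": 2, "general_web": 3}

def schedule(packets):
    """Yield packets highest-priority-first; FIFO within a tier."""
    queue = []
    for seq, (traffic_class, payload) in enumerate(packets):
        heapq.heappush(queue, (PRIORITY[traffic_class], seq, payload))
    while queue:
        _, _, payload = heapq.heappop(queue)
        yield payload

pkts = [("general_web", "blog page"), ("real_time", "VoIP frame"),
        ("internal_it", "server telemetry"), ("real_time", "Skype video")]
print(list(schedule(pkts)))
# ['server telemetry', 'VoIP frame', 'Skype video', 'blog page']
```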
Covering a large area with WiFi will often require multiple access points. Unfortunately, it's not as simple as just placing a wireless router on every floor or in areas where you want a stronger signal. There are best practices that need to be followed when deploying multiple wireless access points on the same network, to make sure you are creating a solution to your problems rather than an even bigger mess. Overlapping WiFi access points can create issues on your network that are just as bad as not having enough access points. It is important to understand and follow the recommended best practices to get it right the first time. Almost all technology, including WiFi, is fairly black and white, leaving little to no room for interpretation: there are right ways to do things and wrong ways, with almost nothing in between.

WiFi uses radio signals on the 2.4 GHz and 5 GHz bands to extend network connectivity to end-user devices. Both of these frequencies have a limited range before the signal dissipates, resulting in a degraded internet experience for end users. Structures such as walls, staircases, elevators, metal ducts, and even our own bodies weaken WiFi signals at astonishing rates. (We recommend the Ubiquiti Dual-Radio PRO Access Point for most home WiFi networks.) Have you ever wondered why your WiFi quality decreases drastically when you move from room to room, or outside of your office or house? It is because the building materials between you and your access point are wreaking havoc on your wireless signal. If it's confusing, below is a quick primer on how to measure WiFi signal and how certain building materials will impact it.

Understand WiFi Signal Strength & the Impact of Building Materials

There is a free app on the Apple and Android app stores called WiFi Analyzer. Download this app to measure the strength of your WiFi signal in your office or home. WiFi signal strength is measured in milliwatts, but since those measurements involve tiny, unwieldy numbers, we instead use dBm (decibels relative to a milliwatt). A high value is -30 dBm, which translates to excellent signal strength, while a low value is -120 dBm, which is essentially a dead zone. You want your WiFi signal to be between -60 dBm and -40 dBm; in many commercial settings, the standard acceptable signal strength is -50 dBm and above. Let's say you're standing right by your access point with a signal strength of -40 dBm and you start moving past various structures in your office building. You'll notice that your WiFi signal gradually worsens, sometimes dramatically, when you pass certain walls or structures.

Your WiFi Network Can Be Slightly Impacted by Glass

Many offices choose to set up glass walls to separate conference rooms or individual offices. This modern aesthetic looks awesome but has a negative impact on wireless signal propagation. A standard clear window will reduce signal strength by about -4 dB. This number increases if the windows are double- or triple-paned or are treated to deflect light or insulate. If you conduct video conferences over WiFi in your glass-walled conference room and are already dealing with weak WiFi signals, the -4 dB reduction could cause huge issues.

Sheetrock & Insulation Will Further Reduce Your WiFi Signal Strength

Most walls are made of sheetrock, which in and of itself does not have a massive effect on WiFi signals, only about -2 dB.
You're not out of the woods yet, though, as materials between the sheetrock, such as foam insulation, metal wiring, or other structures, can reduce signal strength even further.

Wood & WiFi Do Not Get Along

Wood makes up the majority of the framework in homes and older office buildings. Besides being in the walls, wood is often used for flooring, doors, and furniture. Wood can reduce WiFi signal strength by -6 dB, and that number only gets worse depending on the thickness of the wood or its water content. Thicker wood can reduce WiFi signal strength by -20 dB or more.

WiFi Will Not Be Stable With Just One Brick Wall

Brick is typically dense and thick, and WiFi signals usually become unusable with just one brick wall between an access point and the end-user device. Brick reduces WiFi signal strength by an astonishing -28 dB! The mortar that holds the brick together doesn't help either.

Metal Obstructs WiFi Signals & Creates Serious Dead Zones

Most modern commercial buildings are built with metal materials. Metal does not get along with cellular signals, so naturally it is an enemy of WiFi as well. Metal can reduce WiFi signals by almost -50 dB! It is one of the most important elements to plan around when you want to set up multiple access points on the same wireless network.
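To see how quickly these losses add up, here is a small Python sketch using the approximate per-material figures quoted above; the starting signal strength and the list of obstacles are hypothetical:

```python
# Approximate attenuation figures quoted in this article, in dB.
ATTENUATION_DB = {
    "glass": 4, "sheetrock": 2, "wood": 6,
    "thick_wood": 20, "brick": 28, "metal": 50,
}

def received_dbm(start_dbm, obstacles):
    """Subtract each obstacle's loss from the starting signal strength."""
    return start_dbm - sum(ATTENUATION_DB[o] for o in obstacles)

# Hypothetical path: AP measured at -40 dBm up close; the signal then crosses
# two sheetrock walls and one brick wall before reaching the client.
signal = received_dbm(-40, ["sheetrock", "sheetrock", "brick"])
print(f"Estimated signal at client: {signal} dBm")  # -72 dBm
if signal < -60:
    print("Below the -60 dBm comfort threshold: consider another access point.")
```

A single brick wall alone eats more margin than the entire -60 to -40 dBm comfort band, which is why placement planning matters so much.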
Even in a perfect wide-open environment with none of the structures mentioned above, there can still be elements that impact your wireless performance and require you to set up multiple APs on your wireless network.

Outside Interference From Nearby WiFi Networks Negatively Impacts Yours

The signal from nearby wireless networks and access points can impact performance on your network. Access points on the same channel can affect your network performance and cause dropped connections or lost packets while using the internet.

Not All WiFi Access Points Are Made to Support Densely Populated Areas

Human bodies are the worst when it comes to WiFi signals. Our bodies are made mostly of water, and radio signals hate water. With a lot of bodies in one area, as in an open office setting, multiple access points are a must to make sure signals can propagate properly and the required capacity can be supported.

Multiple Wireless Access Points Best Practices

Clearly, there can be many reasons to set up multiple wireless access points on the same network. The best step you can take is to hire a WiFi service provider like Made By WiFi to do the job for you, but if you elect to take on the project yourself, here are some best practices to follow.

1. Conduct a wireless site survey before setting up your WiFi

A best practice when setting up multiple wireless access points in any situation is to conduct a wireless site survey. The site survey takes the guesswork out of your WiFi setup and gives you a clear plan of attack for where to mount your access points. It also helps you figure out how to configure your APs for optimal performance. Without a site survey, you are going into your wireless network installation blind and may create issues with overlapping access points and misconfigurations.

2. Use a controller to manage all of your access points

Wireless access point controllers come in many forms. Some are on-site controllers that live wherever your access points are deployed. Others are cloud-based and can manage multiple access points across separate locations. Controller software can also live right on the access point itself. Regardless of the type, the benefits are clear: the controller lets you manage all of your grouped access points through one interface. You can assign a single SSID and password to all of your access points instead of having to join a different network every time you move between rooms or floors. The controller also keeps order within your network: from automatic channel management to seamless roaming, it will make all the difference when you need to set up multiple access points on the same network. (Our recommendation is the Cisco Air 2504 Wireless LAN Controller.)

3. Choose the correct access point placement

The wireless site survey should identify the ideal locations for your access points, but perhaps you skipped that step and are winging it. The old method is to position an access point in the center of every room where WiFi is needed. This approach can work but is not the smartest method, especially if your business depends heavily on WiFi for day-to-day operations. The smarter approach is to place access points where WiFi will be used the most: address high-density areas that require strong wireless signals first, then the remaining areas where coverage matters less. This approach favors capacity over coverage and is the direction wireless network installations are moving in. Check out some tips for professional wireless access point installation in our prior blog post.

4. Keep ethernet cable runs under 328 feet when connecting access points

Now that you've conducted your wireless survey and figured out where to mount your access points, you will need to run Cat5 or Cat6 ethernet cable to them. It's very important to keep these cable runs under 328 feet, or you risk dropped packets that will negatively impact wireless performance. In fact, most wireless networking professionals limit cable runs to about 300 feet to accommodate the extra couple of dozen feet that may be needed for patching. If you do need to run a cable past 328 feet, you're not completely out of luck: you simply need an active component, like a small, inexpensive switch, placed somewhere before the 300-foot mark, which allows you to extend the run by another 328 feet. For even longer distances, you may elect to use fiber optic cable, which in some instances can be run for several miles. Review the costs associated with these cable runs, as they can get very pricey as distances increase.

5. Use the correct access points for indoor and outdoor use

Your WiFi network may require some outdoor coverage, and in that case you should use outdoor access points. The signal from your indoor access points may reach your outdoor areas and could be sufficient for your needs, but if not, additional access points may be required. Outdoor access points shrug off the elements and are resilient to most weather conditions and temperatures. We recommend the Ruckus Wireless ZoneFlex T300 for outdoor use.
There may even be instances where you would use outdoor access points indoors, such as refrigerated warehouses where temperatures are regularly kept below freezing. Another benefit of outdoor access points is that they are condensation-resistant and have internal heaters that allow them to operate where indoor access points would fail.

6. Pick the correct channels for your access points

Proper channel selection is essential for excellent wireless coverage. In most cases, you'll want the access point controller to pick the channels for you; after all, that is what the controller is there for. When deploying multiple access points on the same network, the coverage areas of neighboring access points may overlap. There is nothing wrong with this as long as the access points use non-overlapping channels. If the channels do overlap, the access points can interfere with each other, causing packet loss during browsing and a negative internet experience for those using your WiFi network. If your access points broadcast on the 2.4 GHz band, you have 11 usable channels, which provide only three non-overlapping channels: 1, 6, and 11. Because of this, the 2.4 GHz band is rarely used for high-density WiFi deployments. The 5 GHz band offers far more non-overlapping channels and is the de facto favorite for high-density wireless deployments using multiple access points on the same WiFi network.

7. Pick the correct power settings for access points

Access point power settings help dictate how large a coverage area each access point is responsible for. If coverage cells are too large and overlap heavily, devices can stay stuck to distant access points even when a nearer access point offers a stronger signal. With a controller, the power levels of the access points will likely be managed automatically, but in certain high-density deployments some human intervention may be required for manual power selection. This is where the findings from your site survey come in handy, giving you the information you need to fine-tune the access points to your unique network environment.

Improve Your Multiple Access Point WiFi Network

There are many reasons why you might need multiple access points on your wireless network. Whether you're trying to enhance coverage within your office space or support a larger number of devices, the seven best practices in this post will help you figure out the smartest approach. If you still need help, contact the wireless specialists at Made By WiFi for a professional wireless network consultation.
A financial analyst is an individual who undertakes financial analysis as a primary part of his or her employment. Some analysts conduct business and financial functions for corporations, while others focus on advising private-sector and government agencies. They are responsible for analyzing business finances and making recommendations based on their own expertise and research. Financial analysts may work independently, but they may also work as employees of a bank, an investment or insurance firm, a public corporation, or another organization.

Financial analysts use mathematical and analytical methods to determine the probability of various scenarios or the potential revenue for a company, drawing on both mathematical and non-mathematical procedures to arrive at their conclusions. A typical analyst will recommend how best to apply these financial techniques to increase a company's revenue and profitability. Analysis is critical to determining the value of a stock or financial instrument, and the analyst's recommendation may be the difference between success and failure.

There are many types of analysis an analyst can perform. The most popular is financial statement analysis, which analysts use to determine how well a company's accounting records reflect its true financial health. Financial statements help investors, banks, lenders, and other financial institutions make accurate judgments about a company's ability to pay off its debt.

Financial analysts also use statistical methods to determine the value of different financial instruments. They use historical data to show how companies have performed and to judge which would likely be profitable based on past performance. In addition, they use several financial techniques, such as leverage analysis and price-and-return analysis. Using these techniques, they can estimate how much money a company is likely to earn in the future and what it will be worth at that point in time, which helps them arrive at predictions of what the value of a stock or a bond might be.

Stock analysts study the market and analyze the trends of a company. They analyze the characteristics of a company and its past and future performance, which helps them judge whether the company has a good chance of success. Financial analysts also study the business and financial history of a company to find out what it has done in the past and how it is doing now, which helps them make projections about future performance. Analysts look at several different aspects of a company, including its past and present situation, its current assets, its liabilities, and its future assets. They compare these numbers to the company's revenues and expenses to determine its value, and they also check whether the company has any tax liens against it. All of these elements are necessary to arrive at an accurate value estimate of a company and to give sound recommendations about that value.
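One standard way to turn projected future earnings into a present-day value, as described above, is a discounted-cash-flow calculation. The Python sketch below is illustrative only; the projected cash flows and the discount rate are hypothetical assumptions, not a recommendation.

```python
def discounted_value(cash_flows, rate):
    """Present value of future cash flows at a given annual discount rate."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Hypothetical: five years of projected free cash flow (in $ millions),
# discounted at a 10% required rate of return.
projected = [12.0, 13.5, 15.0, 16.2, 17.5]
print(f"Estimated present value: ${discounted_value(projected, 0.10):.1f}M")
```

The discount rate encodes the risk judgment the analyst makes: the riskier the company, the higher the rate, and the less those future earnings are worth today.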
Financial analysts can also help with financial planning for a company. They can advise on ways the company can improve its efficiency and effectiveness, and on ways it can reduce the amount of money it needs to borrow and invest in its future growth. In the long run, their advice can help the company reduce its risk and increase its profit, making it more efficient and profitable.
The rise of big data has become a significant turning point in biomedical research. Twenty years ago, it cost $100 million to sequence a single human genome. With the advent of high-throughput sequencing technologies, sequencing the human genome has become faster and far more cost-effective — average costs today are closer to $1,000 — and this has opened new doors for targeted diagnostics and therapies. The mass of big data flowing in from all sources — from whole-genome and whole-exome sequencing and analysis to single-cell RNA sequencing and RNA-seq analysis — has changed the way academic researchers and biotechnology companies approach the study of cancer and rare genetic diseases, for example. Coupled with advances in artificial intelligence and machine learning, it has made the analysis of highly diverse biomedical datasets possible.

Big data and AI in the identification of driver mutations

AI has not only sped up the processing and analysis of biomedical data; it also makes possible the analysis of highly complex datasets on a level that could not be achieved with manual processing. One of the biggest challenges of integrating big data analytics into genomic analysis is the elimination of unnecessary noise. For example, next-generation sequencing (NGS) technologies like whole-genome sequencing (WGS) and whole-exome sequencing (WES) often generate an extensive list of thousands of variants. DNA-seq and RNA-seq analysis generally reveal that a majority of these variants are benign, but any rare mutation needs to be treated as potentially pathogenic. Available academic tools can winnow out benign variants on the basis of minor allele frequency, segregation, text-mining, genotype quality, dbSNP data, and predicted pathogenicity. However, none of these tools can define the causative mutations of a patient's phenotype by the method of elimination alone. The identification of driver mutations always demands additional investigation, including the use of external databases and the determination of common rare variants among patients with similar diseases.
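As a toy illustration of this winnowing step, the Python sketch below filters a variant list on three of the criteria mentioned (minor allele frequency, genotype quality, and predicted pathogenicity). The thresholds and variant records are hypothetical; real pipelines rely on dedicated tools and annotation databases.

```python
# Each variant is annotated with a minor allele frequency (MAF),
# a genotype quality score, and a predicted-pathogenicity score.
variants = [
    {"id": "chr7:117559590 G>A",  "maf": 0.0001,  "gq": 99, "pathogenicity": 0.92},
    {"id": "chr1:55516888 C>T",   "maf": 0.21,    "gq": 98, "pathogenicity": 0.10},
    {"id": "chr17:41276045 T>C",  "maf": 0.00005, "gq": 35, "pathogenicity": 0.88},
]

def keep(v, max_maf=0.01, min_gq=50, min_path=0.5):
    """Retain rare, well-genotyped variants predicted to be damaging."""
    return (v["maf"] <= max_maf
            and v["gq"] >= min_gq
            and v["pathogenicity"] >= min_path)

candidates = [v["id"] for v in variants if keep(v)]
print(candidates)  # only the first variant survives all three filters
```

Even after such filtering, the survivors are only candidates; confirming a true driver still requires the cross-database investigation the paragraph above describes.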
Big data in characterizing and categorizing rare and deadly diseases

Big data has influenced the way researchers categorize cancers. There was a time when "lung cancer" or "kidney cancer" were perfectly acceptable diagnoses. Today, scientists and oncologists understand that lung cancer or kidney cancer can refer to several different diseases, each of which arises from distinct mutations. Identifying mutations in tumor cells is no longer difficult, and running a complete DNA or RNA-seq analysis on the isolated nuclear material is not challenging either. However, telling the disease-causing mutations apart from the non-driver mutations found in tumor cells can be a challenge. Comparative analysis using biomedical datasets on de novo mutations, SNPs at the disease site, and CNV data promises new techniques for determining the driver mutations of different types of cancer. Distinguishing one kind of cancer from another based on its driving mutation or differences in molecular mechanism is opening new windows for personalized treatment, precision pharmacology, and targeted therapy.

Cancer classification and increased survival rates

In the last couple of years, the big data approach to cancer research has changed how doctors describe non-small-cell lung carcinoma (NSCLC). It is now categorized by the predominant mutation found in NSCLC cells and not by the organ or tissue affected by the disease. This approach of using DNA and RNA-seq analysis to categorize cancer by its driver mutations rather than the organs or tissues it affects has enhanced the chances of survival and improved the prognosis for hundreds of people suffering from cancers caused by rare genetic mutations. Treatments that target specific mutations in a single gene can reduce the chances of treatment failure and the side effects people associate with chemotherapy.

Big data analytics and the discovery of driver mutations for other genetic diseases

An information-rich approach is not only helping in the diagnosis and treatment of cancer; it is also helping the scientific community unravel the mystery of autism genetics. In a large study on autism, biomedical data from over 600 families were studied using RNA-seq analysis. Participants were children diagnosed with autism with unaffected parents and siblings. The study showed that there are hundreds of genes at play, but six significant candidates became the focus of several research groups working on the genetics of autism. In 2014, another similar study resulted in the discovery of 27 genes with rare de novo mutations in those diagnosed with autism. The year 2016 saw a breakthrough in the study of autism genetics when collaborative research combined data on de novo mutations with data on inherited mutations and CNV data. The Autism Genome Project played a significant role in the subsequent discovery of the 65 genes now linked with autism and the 6 CNVs now considered driving mutations. The study further went on to confidently identify 28 "autism genes" that will undoubtedly make diagnosis of anyone carrying the charted mutations easier and faster in the near future.

What does big data analytics hold for the future of diagnostics and treatment of genetic diseases?

Whether it is autism genetics or cancer genetics, scientists are finally acquiring the sequencing tools, analytics algorithms, expansive datasets, and robust models necessary for searching beyond the exome. To date, most studies have focused on SNPs and mutations that occur within the exome, leaving around 98% of the genome unexplored. With big data analytics and AI, the scientific community has new, powerful tools at its disposal that can aid in the identification, study, and targeting of disease-causing mutations.
Much has been said over the last few years about the loss of American jobs to other countries. The fact is that China and India, as well as other cheap-labor countries, have taken a good number of jobs, and many of these jobs will likely never return. I wrote a 2006 article for the E-Commerce Times titled "Why Money Chases Cheap Labor — The Outsourcing Phenomenon." The article attempted to explain why industry will probably always follow the path of least resistance when trying to lower the costs of manufacturing. The reason is that industry has to be competitive in order to remain viable. Is there a way, however, by which industry can stay competitive — especially with its labor costs — without outsourcing jobs? There is, and the answer is robotics.

Robots have come a long way. There was a time when robots were mainly used in the auto industry, doing the heavy and dangerous lifting. Today, we have smart robots — robots that can be quickly trained to do many tasks. Some of these tasks require mind-numbing, repetitive actions; others might require intricate and delicate operations. A leader in the field of modern-day robotics is a Massachusetts company called Rethink Robotics, funded by several venture capital companies, including Bezos Expeditions — the personal investment company of Jeff Bezos. Rethink Robotics has succeeded in giving robots a "brain": line personnel can now quickly train these robots to perform a wide variety of tasks. Why is this so important? Factory personnel previously performed these tasks, and if the tasks were at all repetitive or simple, there was a great risk that those jobs could be outsourced to a country with inexpensive labor. Now, because of smart robots, factory personnel can focus on training the robots and supervising what they are doing.

Why Won't Robots Spur Greater US Job Losses?

As I see it, if the robots are manufactured in America, as Rethink Robotics is doing, the country will have created a relatively new industry: the manufacture of smart robots. This production helps prevent the outsourcing of jobs by creating new jobs and by recapturing jobs that were lost to countries like China and India. Rethink Robotics and other companies can successfully bring manufacturing back to the U.S. as long as they can make quality robots that assemble products efficiently, consistently, and cost-effectively. If you are an American manufacturer, why would you outsource the assembly of your product to capture the low labor costs of a foreign country when you can purchase robots that do the same job as well as or better than foreign workers, at a similar or lower price? It just wouldn't make sense. Besides the attraction of lower manufacturing costs, having American-made robots build a product in the U.S. saves on shipping the product from a country that might be more than 5,000 miles away.

Repetitive Jobs Can Ideally Be Handled by Robots

There are many jobs being done throughout the world that are mind-boggling in their repetitiveness, not to mention the danger involved in some of them. Smart robots manufactured in the U.S., however, can perform a wide variety of repetitive and dangerous jobs safely and inexpensively. The range of jobs a robot can perform is quite extensive.
There is packing and unpacking, sorting, inspecting, transferring products between assembly lines, rejecting defective products, weighing products, alerting management to recurring product defects … the list is endless. Robots can do many jobs now performed by people with minimal training and experience. So long as these jobs can be performed efficiently and economically, it is inevitable that robots will, in the not-too-distant future, take these jobs away from workers. In many cases this will mean that workers are freed to perform more complicated tasks; unfortunately, in other cases those workers will be displaced.

Why Robotics Will Be a Net Gain for the US

Some will certainly worry that robots will take jobs away from American workers. One has to reflect, however, on how many jobs have already gone to foreign countries because labor is cheaper there. Those jobs will never return — except through robotic manufacturing in the U.S. Robotic manufacturing, when it brings jobs back to America, will create a net gain: the robots first have to be manufactured, presumably in the U.S.; then plants will have to be built to accommodate the manufacture of the repatriated products; then personnel will have to be hired to supervise and manage the plants creating these products.

Times inevitably change. Who would have thought that we'd be talking about smart robots? It was something we'd only seen in science fiction. Well, the fiction has become today's reality, and we are faced with the dilemmas and the opportunities that the manufacture and deployment of smart robots bring with them. America will maintain its world economic dominance because it is consistently on the leading edge of technology. With companies like Rethink Robotics creating a whole new concept of what is possible, the U.S. can't help but maintain and advance its economic leadership in the world. Who knows? Someday we might even have our own personal robot at home that can perform a plethora of boring tasks, like cleaning the house, gathering and laundering our clothes, and monitoring the safety of our home. We'll just have to snap our fingers and say, "Hey R2-D2, fetch me …" Something I'd never say to my wife!
What is scalability?

Scalability refers to a system's ability to handle an increased load on an application or on any part of the infrastructure. For example, assume that a prominent web page like Product Hunt features your online application and thousands of visitors suddenly start using your app. Can your infrastructure manage the load? A scalable web application can grow to accommodate the traffic rather than crash. Crashing (or sluggish) websites leave your consumers unsatisfied and harm your software's reputation.

Systems have four main domains that scalability may cover:

- CPU
- Memory
- Disk I/O
- Network I/O

When you speak of scalability in cloud computing, two primary modes of scaling – horizontal and vertical – are most commonly discussed. Let's go into those terms in greater depth.

Why is the cloud scalable?

Virtualization is what allows a scalable cloud architecture. In contrast to physical computers, whose resources and performance are relatively fixed, VMs are very flexible and can easily be scaled up or down. They can be relocated to another server or hosted on several servers at once, and workloads and programs can be transferred to bigger VMs if necessary. Third-party cloud providers also keep a wide range of hardware and software resources in place so that particular businesses can scale quickly and cost-effectively.

Three Different Types of Scalability in Cloud Computing

There are three different types of cloud scalability – vertical, horizontal, and diagonal. Simple enough to remember, right? Now let's examine what each of these means:

Horizontal scaling. Also called scaling out/in, horizontal scaling works by adding regular infrastructure nodes. Adding nodes absorbs the growing volume of work and reduces latency, improving storage and administration capacity.

Diagonal scaling. This mode may be regarded as a combination of vertical and horizontal scaling. With diagonal scaling, you can provision extra resources for particular periods according to your demands: when traffic increases, the extra needs are met; once traffic drops, the configuration returns to normal.

Vertical scaling. Also called scaling up/down. In this mode you add resources to existing machines to manage an increased burden, typically by switching to a larger VM or adding expansion units rather than changing the code. One notable caveat is that performance does not always grow in proportion to the added capacity, so returns can diminish.

Benefits of Cloud Scaling

Cloud architecture offers a wide variety of advantages for scalability. Examples include:

1. Procurement, delivery, configuration, and testing no longer demand time, attention, and scarce resources.
2. Costs are managed more efficiently, because you pay only for the resources you actually use, for as long as you use them.
3. You can spin up additional applications and environments when you need them (referred to as side-by-side scaling). For example, you can provision new DEV, TEST, and QA instances simply by replicating the current instance.
4. Advanced deployment strategies such as blue-green deployments, canary releases, and even separate production environments with live audiences can be used for market testing.

A practical example: containers

Containers are a fantastic example of highly scalable cloud infrastructure: they enable both horizontal and vertical scaling and are presently among the most innovative and effective cloud solutions. When computing demand increases, containerized systems immediately detect the requirement and request more capacity; when demand declines, capacity is released, so the system runs as efficiently as possible without waste.
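To make the horizontal-scaling decision concrete, here is a minimal Python sketch of the kind of rule an auto-scaler applies. The per-node capacity, node limits, and headroom are hypothetical; in practice, cloud providers offer this logic as a managed feature.

```python
import math

# Hypothetical capacity model: each node comfortably serves 500 req/s.
NODE_CAPACITY = 500
MIN_NODES, MAX_NODES = 2, 20

def desired_nodes(current_load_rps, headroom=0.25):
    """Horizontal-scaling rule: enough nodes for the load plus spare headroom."""
    needed = math.ceil(current_load_rps * (1 + headroom) / NODE_CAPACITY)
    return max(MIN_NODES, min(MAX_NODES, needed))

for load in (300, 2_000, 9_500):
    print(f"{load:>6} req/s -> {desired_nodes(load)} nodes")
# 300 req/s -> 2 nodes; 2000 -> 5 nodes; 9500 -> 24, capped at 20
```

The floor and ceiling on node count mirror the redundancy and cost constraints discussed above; the headroom factor is what keeps the system responsive while new nodes spin up.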
Best practices for using scalability in cloud computing

Architect for scalability. Not every program can operate as intended when it is scaled. Scaling requires clearly defined architectural design patterns, such as distributed queues, statelessness, scalable storage, pub-sub messaging, and so on.

Have a robust testing strategy. Make sure you can test your applications' scalability and configuration. Real commercial transactions that contribute to revenue are not the place to run these tests.

Enhance monitoring of self-scaling ability. Auto-scaling solutions are available from most cloud providers, and they make it possible to provision the necessary resources automatically, as required. The caveat is supervision: appropriate monitoring is needed to confirm that the demand for resources is genuine. When you have a traffic-heavy application, auto-scaling can provide the needed resources automatically, but how do we know a spike is not a form of denial-of-service attack? If you know demand will rise at certain periods of the day, week, or month, you can govern your auto-scaling with schedule-based policies.

Use load balancers. It is vital to have load balancers at the front that collect incoming traffic and distribute the load across all your servers.

Cloud-based companies do not remain the same forever; you use the cloud to expand your business, and scalability is one of the key ideas architects need to grasp in order to be most effective. That's a wrap! This post has covered what scalability is, how it affects systems and applications, its benefits and the precautions it requires, and the practices that help you put it to work.
Source: https://www.finsliqblog.com/cloud-computing/what-is-scalability-in-cloud-computing/
Despite a positive (and significant) decrease from over 4 million unfilled cybersecurity jobs in 2019, there is still a staggering global shortage of 3.12 million workers with cybersecurity skills. You may find this somewhat inevitable, given that IT innovation changes things so quickly and business will always, as a result, be playing catch-up. However, I argue that we have the tools to tackle the gap and might have done so already were it not for our grave misunderstanding of the challenge.

Many thought leaders have approached the skills shortage from a cumulative perspective. They ask, "How on Earth can companies afford to keep re-training their teams for the latest cyber-threats?" The challenge, to them, emanates from the impracticality of entry-level training becoming obsolete as new challenges emerge. Of course, the question of ongoing training is very important, but I believe it has misled us in our evaluation of the growing disparity between the supply and demand of cyber-professionals. What we should be asking is, "How can we create a generation of cyber-professionals with improved digital skills and resilience to tackle an enemy that continually mutates?"

Defining the relationship between people and tech is of the utmost importance here. Cybersecurity is not merely a technical problem; it's a human problem. This is a critical intersection. People are not the weakest link in an effective cybersecurity defense strategy, but the most crucial one. Technology, meanwhile, is the apparatus that can properly arm us with the skills to defend against attacks.

The silver bullet

The only thing we can be certain of is that cyberattacks are taking place right now and will continue to take place for the foreseeable future. As a result, cybersecurity will remain one of the most critical elements for maintaining operations in any organization.

There is a growing appetite for reform in cybersecurity training, particularly among higher education institutions (e.g., the UK's top universities now offer National Cyber Security Centre (NCSC) certified Bachelor's and Master's programs). It is in the interest of the British government that this appetite continues to grow, as the Department for Culture, Media & Sport reported nearly 400,000 cybersecurity-related job postings from 2017–2020. In addition, COVID-19 has been a significant catalyst for the uptake of and emphasis on cyber skills, since the steep rise in the use of digital platforms in both our work and personal lives has expanded the attack surface and created more vulnerabilities.

Overall, though, young people remain our best hope for tackling the global cyber skills gap, and only by presenting cybersecurity to them as a viable career option can we start to address it. This is the critical starting point. Once we do this, the next important step is to give universities and schools the facilities to offer sophisticated cyber training.

Empowering the next generation

If we're being honest, professors and CTOs are often concerned with providing their students and employees with a theoretical understanding of cybersecurity: what the motives behind attacks might be, the means attackers use, and the potential losses involved. While this provides a great theoretical background for cyber-training and may encourage vigilance, it is not always helpful in practical terms.
By encouraging young people to take up courses in computer science or cybersecurity, while also supporting their learning via military- and enterprise-grade platforms, the next generation of professionals will be well equipped to enter the workforce. Giving young people access to the best resources in the field is the only way to ensure they play an active part in closing the skills gap. The standardization of cyber training practices, from teens right through to experienced consultants, will empower workers of all calibers to take an active role in reformulating their own organizations' training strategy, strengthening it and enabling seamless integration between teams.

Cyber range technology has emerged as the frontrunner when it comes to fostering this kind of bottom-up resilience in cybersecurity. Cyber range technology enables the user – be that a university, business, or government – to generate a realistic, capable, and credible virtual environment that requires trainees to respond to cyber-attack simulations in real time. Within the simulated network, users learn to cope under high levels of stress, locating and exploiting vulnerabilities on various network systems. This helps them develop the skills to identify, monitor, and resist cyber-attacks.

Cyber ranges can mimic your IT systems and provide sophisticated training in the form of task-driven Capture-The-Flag exercises (CTFs), live-fire exercises, or a combination of both (threat hunting). They are available as open source and can be deployed quickly through the cloud, making roll-out anywhere in the world a smooth process. This technology is already the gold standard for governments, but its real disruptive capability lies in its deployment to higher-education institutions and even high schools. Here, students can hone their skills and prepare for tackling real cyber-attacks.

Cyber skills gap: Simplifying the problem

The key to solving the cyber skills gap lies in mobilizing the next generation of (already) tech-savvy young people, and simply shifting our focus towards helping them develop cyber-skills before they enter the workplace. By taking a two-pronged approach – a change in focus, supported by the newest and most sophisticated technology on the market – we can start to implement a real, viable strategy for tackling this immense challenge, before it's too late.
Source: https://www.helpnetsecurity.com/2021/06/08/reformulating-cyber-skills-gap/
Not All Clouds Are Created Equal

Remember struggling to stay awake back in Earth Science class, learning about the different types of clouds floating above the Earth? Few of us can recall the distinguishing characteristics of cirrus, cumulus, and stratus clouds, but our teachers tried their best to make us aware of them. Something similar is occurring among small and medium-sized business owners when it comes to awareness of cloud computing and what it means as a concept and, more importantly, to their businesses. In short, there is some understanding of what the cloud is, but additional information would be welcome.

As a leading provider of IT support in the DC metro area, we frequently receive questions about what the cloud is and how best to use it. To sum it up succinctly, cloud computing means using offsite servers to house any or all of your company's email, file storage, online backup, accounting, and business applications. In the cloud, these functions are accessible to users at any time through an internet connection, as opposed to being stored on a server and/or desktop at their office location.

A variety of factors (cost, business size, type of business requirements) determine whether using the cloud, partly or for all your IT needs, is right for your business. We will discuss these thoroughly in future posts. In this post, we will stick with introducing the three main types of clouds a business can use.

To better understand the different types of clouds, it helps to first understand what an offsite data center is. Data centers are normally warehouse-type buildings housing a large number of racks that hold many servers and other hardware and networking equipment. The companies that own these data centers provide physical security, electric power, internet connections, a controlled temperature and moisture environment, and fire suppression systems to operate and protect this equipment. Data centers also have redundant systems for electric power and internet connections. Depending upon the company that owns the data center, it will either rent out space on its servers or provide physical space (along with the other attributes) for companies to house their own servers. Some data centers lease out dedicated servers and also provide server maintenance for a fee.

The Public Cloud

This is the most basic type of cloud computing, widely used by small and medium-sized businesses. Public cloud service providers store a company's desired applications, email, files, and/or any other requirements on a shared server at their offsite location. Some of the leading public cloud service providers include Amazon Web Services (AWS), Microsoft Azure, IBM/SoftLayer, Dropbox, Anchor, Egnyte, and Google Compute Engine. Your business and other companies share the use of the provider's servers and have 24-hour access to your data through the internet.

One analogy that helps explain the public cloud is life in an apartment complex. The apartment building is the offsite data center containing many shared servers, each represented by a separate apartment; your shared server is the apartment you rent. In your rented apartment you have your own private space for your things, but you also have roommates who share your kitchen and other areas of the apartment when you are not using them.
The difference is that, unlike sharing a real apartment with a roommate, your company will not be aware of the other companies sharing the server where your data is housed. In an apartment complex, the management company or landlord provides all the utilities (electricity, water, etc.), maintenance of the building and grounds, and security. Similarly, the data center provides the electricity and internet connection – with redundancies – to operate the servers that make your company's and its other clients' computing possible. It also provides physical security and firewalls to prevent security breaches and viruses from impacting your data.

In the public cloud model, companies generally pay a subscription fee per user and save money and time by not having to own and maintain a dedicated server at their location. Larger companies, and companies with significant security and compliance requirements, are generally not a good fit for this type of cloud.

The Private Cloud

The private cloud (also known as a corporate cloud or internal cloud) is similar to the public cloud in that it refers to a company's use of servers at an offsite location to store and provide access to all or a portion of the company's applications through the internet. However, in the private cloud, the company has its own dedicated server (either privately owned or leased) at a data center or at its own offsite location. This contrasts with the public cloud, where the data of many companies is shared on the same server at the data center. Using the apartment building analogy, you own your private apartment in the building and do not share anything in it.

The private cloud option is generally used by companies that want the benefits of the cloud while maintaining control of their data through their own private server. Companies that use this more expensive model are concerned with security and compliance and with keeping their assets protected behind their own firewall. The company does not have to worry about viruses or security breaches originating from other companies on a shared server, as its information is physically kept apart on its own dedicated server. The downside to this approach is the cost of owning or leasing a server, along with any maintenance expenses. Using the private cloud gives companies greater control of their data and more peace of mind.

The Hybrid Cloud

As the name suggests, some companies combine the public and private cloud models described above into a hybrid cloud model. The most common example is a company using the public cloud exclusively for email while using its own dedicated server at the office (on-premise), or the private cloud as described above, for file storage. In this example, the company keeps certain sensitive applications private, while using the public cloud for applications that don't have the same security or compliance requirements. Companies can also use the hybrid cloud model to stay resilient, migrating applications to the public cloud during natural disasters or scheduled server maintenance at their private cloud locations.
For more specific posts about using the cloud for email, file storage, and business applications, see the related articles on our blog. Now that you know about the types of clouds out there, you can explore our related blogs at networkdepot.com to learn more about the latest developments in cloud computing and the potential benefits for your company. Feel free to contact us at any time to discuss cloud computing options for your company. And don't forget to take a moment to review the three main types of clouds floating above your head. Think of it as a useful conversation starter with family, friends, or a stranger.
Source: https://www.networkdepot.com/save-thousands/
The situation in Ukraine has the entire world on edge. For all of the horrors shown on the nightly news, there are unseen battles happening behind the scenes: state-sponsored hackers and cybersecurity personnel are locked in their own conflict. One primary attack vector? Malware.

From financial assets to private personal information to applications and infrastructure, protecting data has never been more important. The public cloud's ability to react quickly against one of the leading state-sponsored hacking entities in the world is remarkable. Cloud leaders like AWS and Microsoft are moving quickly to help protect Ukraine, and both have announced initiatives to help bolster Ukraine's cybersecurity defenses. This piece will explore how Microsoft, AWS, and others are working to stop malware and protect Ukraine from bad actors.

Bad Actors, DDoS, Malware, and More

Russia's invasion of Ukraine has the potential to impact organizations within and beyond the Eastern European region, from malicious cyber activity against the U.S. homeland to phishing attempts, DDoS attacks, and malware. If any of those terms are unfamiliar, check out the helpful glossary below:

- A bad actor is a cybersecurity adversary interested in attacking information technology systems. Bad actors exploit flaws in the system, which is why it's important to update your operating system and applications – especially web browsers – regularly.
- A distributed denial of service (DDoS) attack floods a target using many compromised networks and systems. Attackers wait for an IT department to be distracted before they strike, and can cripple a website for days.
- Malware is malicious software that destroys, damages, and/or steals information from a computer system. Examples include Trojan horses, viruses, and worms.
- A phishing scheme is a link or webpage that looks legitimate but is actually a trick designed by bad actors to make you reveal personally identifiable information such as passwords, credit card numbers, or other sensitive data. This is why it's important to think before clicking links in an unrecognized email. More than 90% of successful cyber-attacks start with a phishing email.

State Department Initiatives

A number of efforts are underway to help Ukraine address its cybersecurity challenges. The U.S. State Department has been working with the Ukrainian government to improve its cyber capabilities and has provided $10 million in funding for projects such as the development of a National Cybersecurity Strategy and the creation of a Cyber Coordination Center. The European Union has also been working with Ukraine to improve its cybersecurity posture.

The public cloud is playing a critical role in the effort to stop the spread of malware and protect Ukraine from cyberattacks, with both Microsoft and AWS announcing initiatives to bolster the country's defenses.

Microsoft's efforts to thwart cyber-attacks

Microsoft has been working with the Ukrainian government to provide technology and training to help secure the country's networks and critical infrastructure. The company has also provided Windows 10 licenses to all government agencies in Ukraine. And, earlier this year, Microsoft unveiled a new cloud-based service called Azure Security Center for Ukraine. The service provides centralized threat management and enables customers in Ukraine to quickly identify and respond to threats.
Microsoft's agility and speed in addressing critical security issues, particularly in the public cloud, show what is possible for the rest of Azure. Not long ago, Microsoft was able to detect and shut down a widespread malware attack in just a few hours. Robust protective measures for a very specific use case – Ukrainian assets under attack from Russian cybercriminals – can be broadened and expanded to protect more of its public cloud users. If Microsoft can do that for something as hyper-specific as the situation in Ukraine, it can also add protections against malware to Azure immediately when necessary.

How AWS has joined the fight

AWS has also been working with the Ukrainian government to help improve its cybersecurity posture. In 2015, AWS launched an "AWS in the Ukraine" program to provide training and resources for developers, startups, and students.

In January, AWS announced that it is expanding the availability of its GuardDuty service to more regions. The service, a cloud-based security monitoring solution, helps customers detect and respond to threats in near real time. In Ukraine, the availability of GuardDuty means companies can detect and address a rising tide of malware and DDoS attacks. With the expansion, GuardDuty will be available in all AWS Regions, including AWS GovCloud (US), which is specifically designed for US government customers and compliance requirements. The expansion will also help Ukrainian organizations using AWS detect and respond to threats more quickly and effectively (a short sketch of enabling GuardDuty programmatically appears at the end of this post).

How Other Companies Are Helping

In addition to Microsoft and AWS, a number of other companies are working to help protect Ukraine from cyberattacks. Google Cloud expanded eligibility for Project Shield, which provides free, unlimited protection against distributed denial of service (DDoS) attacks. It further ensures that vital information about aid, shelter, and evacuation procedures remains available to people at risk, including on Ukrainian government websites, embassies worldwide, and the sites of other governments in close proximity to the conflict.

Cisco has donated $1 million worth of Cisco technology to the country, including routers, switches, and security appliances. IBM has also been working with the Ukrainian government to improve its cybersecurity posture, donating over $200,000 worth of software and services to help the country combat cybercrime.

Strengthen Your Security Posture

Companies like those above are making a big difference in fortifying Ukrainian cyberdefenses. Other organizations, like Connectria, also offer technology and capabilities to protect against malware, DDoS attacks, and other potential cyber intrusions. For example, Connectria can design a security solution around both public and private cloud environments for businesses of any size. Hardened web application firewalls (WAFs) and user authentication policies are but a small part of what our team can do.

Having a strong security posture in advance of a threat is the best protection. Absent that, being able to respond quickly and with the best possible solution can still be enough to protect your most important cyber-assets. The situation in Ukraine is a reminder of how important it is for organizations to have robust cybersecurity defenses in place. Contact Connectria with any questions about cloud security or to find the right cloud solution for your business.
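For readers curious what "enabling GuardDuty" looks like in practice, here is a minimal, hedged sketch using AWS's boto3 SDK. The region and publishing frequency are illustrative assumptions; in a real deployment you would route findings into your incident-response tooling rather than print them.

```python
# Minimal sketch: enabling Amazon GuardDuty in one region via boto3.
# The region and publishing frequency below are hypothetical choices.
import boto3

guardduty = boto3.client("guardduty", region_name="eu-central-1")

# Create (i.e., enable) a detector for this account and region.
detector = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
detector_id = detector["DetectorId"]

# List current findings and fetch their details. In practice you would
# paginate and forward these to your monitoring or SIEM pipeline.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )
    for finding in findings["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])
```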
Source: https://www.connectria.com/blog/how-public-cloud-providers-are-supporting-ukraine/
Imagine for a moment that there were no standards for how devices communicate over a network: no error checking, no security measures, no flow control, no standard data format. The result would be pure chaos. Networks would grind to a halt, and business would be unable to function. The network stack, or computer network layers, prevents this problem by defining a framework for interoperability between devices and software on a network.

What is the Network Stack and why is it important?

The network stack is a set of communications standards that computer and network hardware should follow so that they can communicate with other equipment attached to the network. This communication standard – the Open Systems Interconnection (OSI) model – splits communication into seven layers stacked upon each other:

- Physical layer
- Data link layer
- Network layer
- Transport layer
- Session layer
- Presentation layer
- Application layer

Each layer of the network stack plays a specific role in how data is transferred from one device to another. Data flows up and down the stack in a "pass it along" manner: one level receives data, performs its duty, and then passes it along to the next level. The general process is: an application on the sending device requests to send data to another node. The information starts at the application layer and flows down the stack until it reaches the physical layer, where it is then sent across the network. On the receiving end, the data starts at level one of the OSI stack and is passed up the stack until it reaches level 7. Although this is a simplified example, there is much more involved. The following outlines exactly what happens at each layer of the OSI model.

1. Physical layer

The physical layer is just what it sounds like. Common items found at this level include switches, Category 5, 6, and 7 cabling, repeaters, network adapters, and other equipment physically positioned to create the network. The role of the physical layer is to send raw, unstructured data across the physical connection. This unstructured data – bits of 1s and 0s – is assembled into packets, which are then sent across the cabling (or WiFi in a wireless network) to the receiving device. The receiving device disassembles the packets and passes them to the next level.

2. Data link layer

The data link layer is responsible for routing data between devices on the same network. Packets received at this level are further subdivided into frames and then forwarded to the destination computer based on the MAC (media access control) address information stored in the packet. Every device that connects to the Internet has a predefined or assigned MAC address, and network hardware communicates by MAC address at the data link layer. Error control also takes place at this level to detect any errors in transmission. Additionally, level 2 regulates the flow of data so that neither sender nor receiver is overwhelmed by too much data at once. Network interface cards (NICs) and device drivers are standard technologies found at this layer of the OSI model.

3. Network layer

Similar to level 2, the network layer is responsible for routing data between two devices. However, the network layer transmits data between devices located on different networks. Whereas level 2 uses the MAC address to route packets, level 3 uses the IP (Internet Protocol) address.
Routers and layer 3 switches operate at this level of the OSI model.

4. Transport layer

Packet delivery and error checking are the critical functions of the transport layer. At this level, packets are subdivided into smaller sizes and sequenced together in the correct order. This level also handles flow control between the devices. There is no physical device associated with the transport layer; instead, it uses TCP, a protocol implemented in the operating system.

5. Session layer

The session layer is responsible for establishing a connection between devices, authenticating each device, and providing security during data transmission. Protocols used at this layer include PPTP, RPC, RTCP, and SOCKS.

6. Presentation layer

Level six of the OSI stack prepares data to be used by the application layer. Specifically, this level handles processes such as encryption, decryption, and decompression.

7. Application layer

Despite its name, user interface (UI) applications are not included at this level. Instead, this level provides the protocols through which usable data is presented, so that UI applications (such as email clients or web browsers) can communicate. HTTP, HTTPS, File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP) are examples of protocols used at this level.

Why are the network stack and its layers important?

Network issues are a common occurrence. Sometimes the problem is minor; at other times it can be complex. Having a place to start troubleshooting helps you get to the bottom of things without wasting a lot of time – and what better place to start than the OSI model? Given that the OSI stack sets the standard for how devices communicate over a network, a clear understanding of each level is critical to figuring out where to begin.

Often the first place to look is the physical layer: physical components fail and set off a string of undesirable events, so checking the cabling and hubs to ensure that everything is plugged in and working is a good first step. Routing issues are another common occurrence; troubleshooting them happens at the network layer, where checking each device's IP address and the routing schemes is helpful. The troubleshooting process continues in this manner, evaluating the devices and services at each stack level to locate the problem.

Overall, the OSI model is critical to ensuring seamless communication between all devices on a network. Not only that, it provides a solid framework to help locate problems when things go wrong. (The short sketch below shows how a couple of these layers look from an application programmer's point of view.)
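As a small illustration of the layers in action, here is a hedged Python sketch of a plain TCP client. The host and port are placeholders; the point is that the program only writes application-layer data and lets TCP and the layers below it do the rest.

```python
# Minimal sketch: a TCP client, seen through the OSI layers.
# We produce application-layer data (an HTTP request, layer 7);
# SOCK_STREAM/TCP gives us transport-layer delivery and ordering
# (layer 4); everything below is handled by the OS and hardware.
import socket

HOST, PORT = "example.com", 80  # placeholder target

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # Application-layer payload: a bare-bones HTTP/1.1 request.
    request = (
        f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    ).encode("ascii")
    sock.sendall(request)

    # TCP reassembles the packets in order; we simply read the stream.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# First line of the response, e.g. "HTTP/1.1 200 OK".
print(response.split(b"\r\n", 1)[0].decode())
```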
Source: https://news.networktigers.com/featured/laymans-guide-network-stack/
In a discovery spanning millions of years, scientists have found that humans – and most likely other animals – share important genetic mechanisms with a prehistoric Great Barrier Reef sea sponge.

The University of Queensland's Professor Bernie Degnan said some elements of the human genome – an organism's complete set of DNA – functioned in the same way as the sponge's. "Incredibly, these elements have been preserved across 700 million years of evolution," Professor Degnan said. "This mechanism drives gene expression, which is key to species diversity across the animal kingdom. It's an important piece of a puzzle over many millions of years, and will feed into future research studies across the medical, technology and life sciences fields."

The significance of unravelling a mystery of this magnitude is not lost on former Degnan Lab researcher Dr Emily Wong, now with the Victor Chang Cardiac Research Institute and UNSW Sydney. "This is a fundamental discovery in evolution and the understanding of genetic diseases, which we never imagined was possible," Dr Wong said. "It was such a far-fetched idea to begin with, but we had nothing to lose so we went for it. We collected sea sponge samples from the Great Barrier Reef at UQ's Heron Island Research Station, before extracting DNA samples from the sea sponge and injecting them into a single cell from a zebrafish embryo. Without harming the zebrafish, we then repeated the process at the Victor Chang Cardiac Research Institute with hundreds of embryos, inserting small DNA samples from humans and mice as well. What we found is that despite a lack of similarity between sponge and human DNA, we identified a similar set of genomic instructions that controls gene expression in both organisms – we were blown away by the results."

Scientists say the sections of DNA responsible for controlling gene expression are notoriously difficult to find, study, and understand. Even though they make up a significant part of the human genome, researchers are only at the beginning of understanding this genetic "dark matter".

Looking for a light in the pitch black

"We are interested in an important class of these regions called 'enhancers'," Dr Wong said. "Trying to find these regions based on the genome sequence alone is like looking for a light switch in a pitch-black room. And that's why, up to this point, there has not been a single example of a DNA sequence enhancer that has been found to be conserved across the animal kingdom."

Dr Wong's husband and paper co-senior author, Associate Professor Mathias Francois of the Centenary Institute, said the work was incredibly exciting. "The team focused on an ancient gene that is important in our nervous system but which also gave rise to a gene critical in heart development, and the findings will also drive biomedical research and future healthcare benefits too," Dr Francois said. "The more we know about how our genes are wired, the better we are able to develop new treatments for diseases."

Source: The University of Queensland • This article originally appeared at technology.org
Source: https://news.networktigers.com/industry-news/sea-sponge-unravels-700-million-year-old-mystery/
Modularity as a Requirement for ICT Infrastructure Engineering Systems

What Makes it Different From Traditional Solutions?

All data center subsystems can be divided into three main categories: power supply, air-conditioning, and IT load. Let's include the telecommunication part with the latter to keep things simple. The modular approach means manufacturing these subsystems, or their components, as functionally complete products. A module can be a factory-fitted cabinet or even a container intended for outdoor installation. The modular element can be defined as any sufficiently large prefabricated unit; usually it is either a container solution with some engineering infrastructure already installed at the factory, or a solution assembled into rack-mounted arrays featuring a chassis with a mounted busbar trunking system, a cable tray system, and other subsystems.

This approach significantly simplifies the installation and commissioning of a data center. All the modules are tested at the production stage, which eliminates compatibility problems and the various malfunctions that can slow down commissioning. Ideally, almost all of the engineering and rack-mount components of the server rooms are assembled at the same factory, where a thorough check that simulates the load also takes place – special heaters can be installed in cabinets in place of servers. After inspection, the system is disassembled for transportation, packaged, and delivered to the customer for assembly onsite. Process parallelization speeds up commissioning even more: while the customer is preparing the site and server room, the data center is already being manufactured in the factory.

The Reason Why Containers Haven't Taken Over the World

The use of prefabricated modules allows a faster response to ever-changing needs. It is possible to increase capacity or, conversely, to dismantle resources that are no longer required. Containerized data centers can be moved from one place to another if necessary, but they are not a universal solution despite all their advantages. The reason is rather trivial: the maximum legal dimensions for loads transported on public highways are relatively small. If you make a server room out of one container, it is difficult to fit a row of racks with all their communications inside. The server cell turns out to be too cramped, leaving very little space for equipment maintenance. Dimensions can only be increased up to a point, because delivering oversized containers can cost far more than manufacturing them.

Such solutions are best used for small and mobile data centers: they can be quickly deployed on the spot and just as quickly taken apart when no longer needed. Sometimes containers are used to expand traditional data centers, serving, in effect, as an additional server room. Building large container data centers has remained an idealistic notion, and the development of modularity in modern data centers has taken a different path. For the most part, modern data centers are still built of concrete or prefabricated metal structures using sandwich panels. In this case, only the elements of engineering infrastructure, rack equipment, and even IT loads are modular. The main idea is to develop a standard solution once that can be replicated later, increasing its capacity and performance as needed.
Modularity in Power Supply

There are ready-made modular designs for the quick installation of rack-mounted equipment and all the necessary busbar trunking and cable tray systems, and convergent and hyperconvergent solutions have been developed to put the IT load on these modular rails. Modularity is most difficult to introduce for air-conditioning and uninterruptible power supply systems, but options exist. Modular UPS solutions are now available and gaining popularity. Their advantages are distinct: the system's capacity can be increased in relatively small increments by adding additional battery cabinets. Installation and maintenance are also simplified, and, in the event of a failure, only part of the UPS is affected, which the data center staff can replace on their own.

▲ Delta Modular UPS

CAPEX and OPEX

Using a modular approach does not reduce capital expenses, but it allows them to be spread over the entire life cycle of a data center, which averages ten years. At the same time, operating expenses are reduced and the facility's payback period is much shorter. A modular data center project can be divided into several phases, with capacity added as demand grows. The scalability of modular solutions is excellent, and each subsequent phase of modernization usually does not affect the previous ones. This means independence is built in at the design phase, and even deep modernization of the entire facility can be completed in phases.

Today, the global data center construction industry is witnessing a trend towards so-called hyperscale. Global technology companies are building big data centers that can handle dozens or even hundreds of megawatts of load. For the most part, these projects are implemented by search engines, cloud solution providers, and large social networks such as Microsoft, Amazon, Google, or Facebook; in Russia, as far as we know, Yandex is attempting this. Specific modular solutions are already actively used in the engineering infrastructure of such data centers. Analysts have different views on the growth prospects of the hyperscale segment; some forecast growth of up to 40% per year. Given the emerging trend of moving information systems to the cloud, even this assessment does not look overly optimistic.

Modular solutions are also increasingly used in smaller projects. Today, this is a significant trend in the data center design and construction industry, which drives leading equipment manufacturers to regularly expand their lines of smart manufacturing solutions.
Source: https://www.deltapowersolutions.com/en-in/mcis/technical-article-modularity-as-a-requirement-for-ict-infrastructure-engineering-systems.php
Next-generation 5G mobile communications technology could have a harmful impact on weather forecasting in the United States, according to expert testimony presented before a U.S. House committee during a hearing on the future of weather forecasting.

Interference from 5G wireless phones could reduce the accuracy of weather forecasts by 30 percent, said Neil Jacobs, Acting Under Secretary of Commerce for Oceans and Atmosphere at NOAA. Jacobs made the remarks to members of the Environment Subcommittee of the House Committee on Science, Space, and Technology. The effect would be to return forecasting accuracy to 1980s levels, Jacobs added.

Consumers and government agencies rely on accurate weather information, which can also be important for disaster preparedness and recovery, noted the Aerospace Industries Association (AIA), an industry advocacy group, in a letter to the committee. "Interference-free radio frequency spectrum communications that allow for accurate readings make these applications possible," the AIA maintained. "Unfortunately, today's spectrum reality could directly impact the future of accurate weather readings," the association continued. "Spectrum is a finite resource and as the Federal Communications Commission (FCC) looks to free up spectrum for emerging technologies like 5G, the risk of interference with existing users rises, in both the incumbent band and the adjacent bands."

The risks to weather forecasting came to light as the FCC prepared to auction off the 24 GHz spectrum, according to the letter. "While it was a multi-year process to get to the auction itself, it is unclear if the proper testing to ensure that harmful interference with weather equipment in the directly adjacent band would not take place had been conducted fully," the organization asserted.

Significant Security and Safety Implications

Just days before the House hearing, two Democratic senators, Ron Wyden of Oregon and Maria Cantwell of Washington, sent a letter to FCC Chairman Ajit Pai requesting that he block any companies from operating in the 24 GHz band until weather forecasting operations were protected. "To continue down the path the FCC is currently on, to continue to ignore the serious alarms the scientific community is raising, could lead to dangerous impacts to American national security, to American industries, and to the American people," the senators wrote.

The FCC began auctioning spectrum in the 24 GHz band despite the objections of NASA, NOAA, and members of the American Meteorological Society, Wyden and Cantwell pointed out. Those organizations asserted that out-of-band emissions from broadband transmissions in the 24 GHz band would disrupt the collection of water vapor data measured in a neighboring band, which meteorologists use to forecast the weather.

"The national security and public safety implications of this self-inflicted degradation in American weather prediction capabilities would be significant," the senators wrote. They cited a U.S. Navy report released in March, which found that the amount of interference to weather satellites permitted by future commercial broadband uses in the 24 GHz band at the FCC's emission levels would result in increased risk to flight and navigation safety, as well as degraded battle space awareness.

Technology Leadership Jeopardized

The wireless industry has voiced strong opposition to any delays in auctioning the 24 GHz spectrum.
"The Federal Communications Commission began looking at the 24 GHz band five years ago and put rules in place more than a year ago," said Meredith Attwell Baker, CEO of the CTIA, which represents the U.S. wireless communications industry. "During that time the commission conducted an exhaustive review of all factors, consulted all relevant agencies, and balanced interests appropriately," she continued. "An auction delay will not change the outcome of that review," Baker maintained. "It will have the sole effect of risking America's global technology leadership by slowing down the deployment of next-generation 5G networks."

The group also cast doubt on concerns about 5G technology interfering with weather forecasting. "This is an absurd and dangerous distraction that risks America's 5G leadership in order to protect weather sensors that do not exist and that the government has no plans to launch," CTIA Senior Vice President Nick Ludlum said. "The rules for the 24 GHz band were developed by the FCC in consultation with NASA, NOAA and many other federal agencies over the past five years," he maintained. "Changing them now undermines President Trump's 5G strategy while doing nothing at all to protect actual weather data."

Influenced by Money and Politics

Despite the concerns raised by the scientific and military communities, it seems unlikely they will slow down the carriers' implementation of 5G. "I don't doubt that there may be some issues with the frequencies that close together, but I doubt that this will change the rollout of 5G," said Phoenix-based Jim McGregor, principal analyst at Tirias Research, a high-tech research and advisory firm. "You have to realize that to get to 5G bandwidths, carriers are using a combination of technologies and frequency bands," he told TechNewsWorld. "This may not even impact all of the carriers equally."

The FCC should suspend the 24 GHz band auction until it solves this problem, maintained Charles King, principal analyst at Pund-IT, a technology advisory firm in Hayward, California. "Despite the promotional yapping around 5G, sizable commercial availability is still some ways off," he told TechNewsWorld. "The FCC could halt or delay the auction while it clarifies the issue but instead is going for a symbolic win, since the 24 GHz auction is estimated to bring in seven times more revenues than the previous 28 GHz spectrum auction," King said. "We've seen previous auctions touted for political gain," he added, "and I expect more than a little of that is infecting the FCC's leadership and decision-making process."

Bad Data's Ripple Effect

Problems arising from spectrum reallocation are nothing new, noted Bill Menezes, senior principal analyst at Gartner, a research and advisory company based in Stamford, Connecticut. "Whenever spectrum gets reallocated, frequently there's an existing use that may get affected, so the idea that this could happen is not unusual," he told TechNewsWorld.

In the 3.5 GHz band, for example, there was an issue with interference with the Navy's existing use of the spectrum for radar, so safeguards were imposed to prevent interference with naval operations. In the 5.0 GHz band, protocols were put in place to keep cellular users from infringing on WiFi users' operations. "When a frequency is adjacent to an incumbent's, the FCC might set up a 'guard band' and say new users can't stray into that band," Menezes explained.

As for the 24 GHz band, he noted that problems there could reach far beyond weather forecasting.
“If you think of all the smart devices that are coming out for consumers and agriculture that rely on weather forecasting data,” Menezes said, “there’s a potential huge ripple effect if the effectiveness of that data is compromised.”
Source: https://www.ecommercetimes.com/story/5g-could-mess-with-accuracy-of-weather-forecasts-86026.html
Finding missing children and unraveling the complex web of human trafficking is no easy task. The relevant datasets are massive and often unstandardized, and it can be difficult to find the right data at all, as it regularly disappears from websites and pages. When data is hard enough for scientists to capture and evaluate, how can law enforcement agencies even begin to get a handle on it? These agencies, with little funding or know-how, need real help if they want to leverage big data and get a grip on human trafficking.

Many efforts to solve crimes with data are actually coming from outside law enforcement. From community efforts to non-profits and even full business solutions, it seems the world of data science is actively using its skills for good. More importantly, these data solutions stand in stark contrast to the more general and vague job of crime prediction, which is becoming more and more common. Many departments already use data to target trouble areas, but for crimes that involve huge rings and layers of corruption, there's a lot more work to be done. The companies using data science to stop human trafficking often use several methods and mimic what regular law enforcement agencies might do on their own.

The "Science Against Slavery" Hackathon was an all-day event aimed at sharing ideas and creating science-based solutions to the problem of human trafficking. Data scientists, students, and hackers homed in on data that district attorneys would otherwise never find. Many focused on automating processes so agencies could use the technology with little guidance. Some focused primarily on generating data that could lead to a conviction – which is much easier said than done. One effort, from EPIK Project founder Tom Perez, involved creating fake listings so that information could be gathered on respondents, including real-world coordinates. Other plans compared photos mined from escort ads and sites to those from missing person reports. Web crawling could eventually lead to geocoding phone numbers or understanding the distribution of buyers and sellers, as well as social network analysis.

Turning Big Data Into Real World Information

Perhaps one of the more famous initiatives comes from the Polaris Project, a project started in 2002 and revitalized in 2012 through the use of data science. When the organization heard a talk from the CEO of Palantir, a software and data analysis company, it was clear that the fight against human trafficking needed an upgrade – a big one. With some help from Palantir, Polaris was soon armed with new technology and engineers. They began leveraging data from phone calls, company contacts, legal service providers, and every other part of the organization in one simple platform.

Palantir has helped other organizations, like the National Center for Missing and Exploited Children (NCMEC), in a similar fashion. By combining data from public and private sources, the organization pinpointed 170 different quantitative and qualitative variables per case record. Advanced analytics were required to evaluate tips, of which 31,945 came by phone, 1,669 through online submission, and 787 from SMS. The project also aimed to digitize old records spanning several decades and import them into a single, searchable, analyzable structure. All of this data is powerful, but the final step was making it easily accessible.
By importing the numerous formats and levels of information into one database, what once took several weeks – or was impossible entirely – could be done in an instant. The story of one missing 17-year-old girl in California has since become the shining example of data triumphing in the fight against human trafficking. Using data science, analysts found multiple online posts advertising the missing girl for sex. By analyzing over 50 ads and nine different women spanning five states, analysts didn't just find the girl – they saw the larger ring and were able to link the pimp to other crimes and victims.

Visualizations and Easy Solutions for Law Enforcement

The BBC has reported on the amount of data available, and how those terabytes aren't as immediately helpful as the public would like to think. Child sex abuse raids tend to yield unbelievable amounts of data. Image forensic specialist Johann Hoffman laments, "the problem is, how as a police officer do you go through that huge amount of data? When you are dealing with terabytes there's no way a human could ever go through it all." Using analytics, however, has given them an entirely new approach: friendly data platforms and visualizations help tell a larger story that doesn't require a master's degree to understand.

There are several more examples, but one particularly interesting area is data solutions marketed toward law enforcement. One Y Combinator startup wants to act as a paid service for law enforcement. It may feel a tad weird to read a tagline like "the right data at the right time can make or break your prosecution," but these external companies offer expertise that law enforcement employees likely won't otherwise have access to. To make the entire concept a bit more palatable, this particular startup, Rescue Forensics, only registers official law enforcement agencies, as opposed to just anyone who wants to pay up. Most escort advertisements disappear after a few days, making them incredibly difficult to track; companies that focus entirely on data tracking, analysis, and storage can keep otherwise-lost information alive for those who need it.

The splintered nature of the entire field might also be one of its biggest assets, for the time being. While splintering in some sectors causes huge problems and ultimately holds users back from progress, the array of approaches in this area reflects just how many people are interested in creating solutions. These different companies come with different backgrounds and goals and will ultimately lead to new and exciting possibilities. Many operate on open-source platforms, meaning we can expect the number of solutions to continue to skyrocket.
Source: https://dataconomy.com/2016/05/data-science-leveraged-stop-human-trafficking/
It's been a while since we ran our challenge, How strong should your Master Password be?, in which we gave out prizes to the first people who could figure out the passwords in carefully constructed challenges. The challenges were designed to simulate the threat to a user whose 1Password data has been stolen from their own machine (1Password data captured from our servers is protected by your Secret Key and so isn't subject to this sort of attack).

After paying out a total of $30,720 USD, we have a better picture. The short answer is that it costs the password cracker about $6 USD for every 2³² (4.3 billion) guesses of a 1Password account password. An attacker, on average, only needs to try half of all the possible passwords, and had we not provided hints, it would have cost the attackers $4,300 USD to crack the three-word passwords in our challenge.

This figure of $6 USD per 2³² guesses allows us to calculate the cracking costs for any known password strength. Given that passwords created by the 1Password password generator have precisely known strengths – unlike human-created ones – we know a four-word password created by our generator would cost about $76 million USD to crack. A four-word password that uses one randomly capitalized word and randomly chosen numbers as separators between the words raises the cost to about $100 billion USD. More examples are listed toward the end of this article.

At the risk of tiresome repetition, let me repeat two important things:

- This kind of guessing attack is only possible if the attacker obtains your encrypted data from your device. Thanks to the Secret Key, what is stored on our servers cannot be attacked this way.
- The cracking cost is based on our use of 100,000 rounds of PBKDF2-H256 for processing account passwords. You shouldn't assume passwords used elsewhere are protected the same way.

Setting the prize (wrong)

I initially underestimated both the amount of effort needed to crack the passwords and the amount of prize money needed to incentivize serious attempts. This underestimation meant we had to double the initial prize offering twice and share a few hints. That was good news, as it means that 1Password account passwords are well protected, even on users' own devices. Again, this kind of guessing attack isn't possible for data captured from us, as your account password gets blended with your Secret Key by the 1Password app.

My miscalculation did mean that the contest ran much longer than originally expected, and we ended up quadrupling the prizes. But this is excellent news: it means that good-enough account passwords are within human reach. The even better news is that the additional cost didn't come from my salary!

Perhaps some day I'll go over exactly how I underestimated the cost of the project in a future, more technical blog post that covers the pricing of GPUs over the years, opportunity costs, and risk and uncertainty pricing. But don't hold your breath, considering how long delayed the post you are reading now is.

What you should do

Our general advice about account password choice hasn't changed, but I'm repeating it here because your account password (along with our slow hashing) is your only defense if your 1Password data is captured from your own device. Neither two-factor authentication (2FA) nor your Secret Key can protect you in that particular case.
Your Secret Key will protect you if data is stolen from us, but if data is stolen from your own system, we have to assume the attacker gets the Secret Key along with it. How you balance these four key points with your specific needs, habits, and use cases is something you'll have to decide for yourself.

1. It must be used only as your 1Password account password

In the small handful of cases where we learned that someone's 1Password data was compromised, we discovered that the victim had reused their account password for a less secure service, or had deliberately shared their credentials with someone only to regret it later. You may, however, opt to use the same account password for multiple 1Password accounts.

2. It should be the strongest that you can reliably and comfortably use

You need to find the balance that works for you. Your account password needs to be something you can reliably use several times a day on multiple devices. Keep in mind that the more you use it, the easier it will become to type and remember. Even if you set up biometric unlock, 1Password will occasionally prompt you for your account password to ensure you don't forget it.

3. Randomly created passwords are much stronger than human-created ones

I encourage you to use our password generator to create your account password. Even with the same requirements, human-created passwords are much easier for attackers to guess than randomly created ones. A human tasked with creating, say, a 10-character password with numbers and mixed-case letters is more likely to create a password like Iloveyou12 than Wa7RoWTC18. Both meet the technical requirements, but humans do not pick uniformly from the set of about 420 quadrillion passwords that meet those requirements. That is, some of those 420 quadrillion passwords are more likely to be picked than others. A good password generator does pick uniformly, meaning that each of those 420 quadrillion ten-character passwords is as likely to be picked as any other. Attackers very much tune which guesses they try first based on their extensive knowledge of human password choice.

There really is no comparison between generated passwords and human-created ones. Literally: we have no reliable way to determine how strong human-created passwords are, so we can't make a proper comparison between human-created ones and those created by our Strong Password Generator.[1] What we do know is that human-created passwords do get successfully cracked, while machine-generated ones do not. Although I will continue to preach the virtues of generated account passwords, your account password must be something you can reliably and comfortably use.

4. Have a backup

Print a paper copy of your Emergency Kit, record your password on the paper, and store it in a safe place. This is especially important after you've created your account password or changed it. If you have a 1Password Families or 1Password Business membership, designated members of that account will be able to help you restore access to your data if you forget your account password or lose your Secret Key. If others in your family or team are relying on you to perform such a recovery, your Emergency Kit should be printed out and easily accessible in case of an emergency.

Money vs. Time

What we've learned through the cracking contest doesn't change our advice, but it does allow us to put a price on cracking account passwords.
I want to emphasize that we are not talking in terms of how long it would take an attacker to crack a password, but in terms of how much it would cost them in computing resources. What might take weeks for some attackers would take years for others. Because one attacker might dedicate two GPUs for 16 weeks to a 40-bit password, while another might dedicate eight GPUs over four weeks, a better representation of the work an attacker has to do is to put it in terms of money. We designed the cracking contest to find out how much effort it would take (while keeping some time pressure on the participants to do it). Instead of saying “for a 40-bit password it is between four and 16 weeks depending on what hardware the attacker uses”, we say “for a 40-bit password, it takes about $770 USD of effort in capital costs and running costs”. Each additional bit doubles the cost, so if 40 bits takes $770 USD of effort, then 41 bits requires twice that, around $1,500 USD of effort; and 42 bits would double that again to about $3,000 USD.

Our contest was also designed to be hard enough to attract experts. Experts have the tools, experience, and knowledge to crack things most efficiently. Some people new to password cracking vastly overestimated how much it would cost because they were looking at approaches that experts wouldn’t use.

So, with our figure of $6 USD per 2³² guesses given the password hashing scheme we use (100,000 iterations of PBKDF2-H256), I present the following table.

Cracking cost for different generation schemes

One of the very cool things about our password generator is that we can compute the strength of a generated password precisely from the settings given to the generator. Unlike human-created passwords, we don’t have to look at the actual password and make estimates. If we combine the strength with our estimated cracking cost of $6 USD for every 2³² guesses, we can look at how different kinds of passwords from our generator would fare under the attack conditions from the contest. The column headed “Generator settings” describes the instructions to our password generator, though not all of these options may be available in all 1Password clients.

- Wordlist (labeled “word”) passwords are made up of words picked randomly from a list of about 18,000 English-language words of fewer than nine characters. These can have a constant separator between words, randomly chosen digits, or randomly chosen digits and symbols. One word may be randomly chosen to be made uppercase.
- The default Smart password is like a wordlist password, but instead of English words it uses groups of three letters. One of the five groups is capitalized, and the groups are separated by digits and symbols. There are about 9,650 possible groups.
- Character (labeled “char”) passwords are made up of things like letters and digits. These may be lowercase only, require uppercase letters, or require digits.

Note, as always, that human-created passwords will be far weaker than those created by our password generator. What we list here are the strengths of generated passwords.
| Generator settings | Bits | Cost (USD) |
|---|---|---|
| 3 word, constant separator | 42.45 | 4,200 |
| 8 char, uppercase, lowercase, digits | 45.62 | 38,000 |
| 3 word, digit separator | 48.06 | 200,000 |
| 9 char, uppercase, lowercase, digits | 51.51 | 2,200,000 |
| 4 word, constant separator | 56.60 | 76,000,000 |
| 10 char, uppercase, lowercase, digits | 57.37 | 130,000,000 |
| 4 word, constant separator, capitalize one | 58.60 | 310,000,000 |
| 4 word, digit separator, capitalize one | 67.02 | 100 billion |
| 12 char, uppercase, lowercase | 67.02 | 100 billion |
| 5 word, constant separator | 70.75 | 1.4 trillion |
| 5 word, constant separator, capitalize one | 73.07 | 6.9 trillion |
| Smart password | 84.20 | 16 quadrillion |

Keep in mind that the costs are in terms of dedicated effort to break your password. A cost of $4,200 USD (a three-word generated password with a constant separator)² may be a sufficient deterrent even if your data is worth much more than that, because an attacker may have more attractive opportunities for the same amount of effort. But if you think you’re likely to be specifically targeted, then $4,200 USD may not be enough for your needs. Changing to three words with digit separators ($200,000 USD) or four words ($76 million USD with a constant separator, $26 billion USD with digit separators) is going to mean that an attacker will either give up or find cheaper approaches (such as compromising your devices) than trying to crack your account password.

Maybe the wordlist-based passwords aren’t your thing. One thing I’ve learned since we introduced them is that some people love them and some people hate them. If their added length isn’t worth the improved memorability to you, then consider generated character passwords. You can get the same strength with much shorter passwords, as long as they are generated in a truly random fashion.

The first-place winners identified themselves by the names they are known by in the password-cracking community: s3inlc, winxp5421, blazer, and hops. They expanded their team when they went after the second- and third-place prizes. I, along with some colleagues, had the pleasure of meeting many of them at PasswordsCon in November 2018. Indeed, they used some of their winnings to make the trip to PasswordsCon.

We also offer a CSV file containing bit strength for various password generation settings of our password generator. For guidance on what the column headers mean, see the R Markdown source in the GitHub repository.

Please join the discussion on our forum. There is a great deal more to say about this than can fit into this long-delayed blog post. If you want to feel like you were there as this progressed, it would be best to read the discussions as they were happening on our forum, but I will give an abbreviated timeline here:

- April 23, 2018 - Contest first announced in How strong should your Master Password be?
- April 23, 2018 - Published contest resources on GitHub, including the source code for how the challenges would be generated along with samples that people could test.
- April 27 - Bugcrowd enrollment opened up.
- May 2 - Challenges generated. PGP signatures (signatures only) of challenges and solutions published.
- May 3 - Challenges published: the race begins.
- May 10–16 - It became clear that the participants we were hearing from were only managing about 250,000 guesses per second, and so the contest as originally stated was too hard for the prizes offered.
  Internally, we decided that if there were no winners by mid-June we would double the prizes.
- June 16 - We doubled the prizes.
- July 2 - Opened discussion on offering hints.
- July 26 - Redoubled prize offerings. Committed to giving away more than $30,000 USD.
- August 5 - Published hint creation scheme.
- August 23 - First hints go live.
- Late August – mid September - By this time, we already had a fair sense of cracking costs. Public discussion of incentives helped us understand that our incentives, even with the first hint, were too low.
- September 25 - Second hint published. Cracking is now four times easier than the original challenge, and the prizes are four times the initial offering.
- October 14, 6:10 UTC - First winning solution.
- October 22 - First data-driven estimate of cracking costs of approximately $6 USD per 2³² guesses. (Subsequent data from later wins merely increased confidence in this estimate.)
- November 7 - Second win.
- November 11 - Third win.
- Mid November - The team that won the first three prizes volunteers to leave the fourth prize to other competitors.
- January 14, 2019 - Final winners. These winners had a different setup than the other winners, but the report of their work was consistent with our earlier cost estimate.
- February 2019 - I start working on this blog post. By May 2019, the blog post is “90% done”.
- This blog post is done.

1. The astute reader may have noticed that I’ve just dumped on our password strength meter. The truth of the matter is that while there is no reliable way to guess the strength of a human-created password, some ways of estimating strength are better than others, and even if unreliable, these are useful guides that help people pick better passwords. ↩︎

2. When talking about the contest challenge, I said that the cost was $4,300 USD, and now I say that it’s $4,200 USD. This is because our wordlist has shed a few words over the past few years, and so a three-word password generated in 2018 is a fraction of a bit stronger than one generated today. We have more than made up for this by enabling randomly chosen digit and symbol separators between words, and by allowing one random word to be capitalized. ↩︎
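As promised above, here is where the per-guess cost comes from: each guess at an account password requires the full 100,000 rounds of PBKDF2. The sketch below uses Python’s standard library to time one guess. The password and salt are placeholders, and this deliberately ignores the rest of 1Password’s key derivation, so treat it as an illustration rather than a benchmark of the real thing.

```python
import hashlib
import time

start = time.perf_counter()
# One "guess": derive a key from a candidate password with 100,000 rounds.
hashlib.pbkdf2_hmac("sha256", b"candidate-password", b"per-user-salt", 100_000)
elapsed = time.perf_counter() - start
print(f"one guess took ~{elapsed * 1000:.0f} ms of CPU time on this machine")
# A naive CPU attacker at, say, 20 ms per guess manages only 50 guesses per
# second; the contest's GPU rigs did far better, yet still paid ~$6 per 2**32.
```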
Over the years, the involvement of cyber activity in day-to-day human life has increased dramatically. One can now shop online and make bill payments without ever visiting a financial institution in person. As a result, the number of cyber-attacks has also increased, with newer schemes and more complicated methods. Facebook accounts, with their large volume of data on a user, are known to be one of the frequent targets of hacking incidents. Older hacking methods have been proven to work time and time again, but newer and more dangerous forms of hacking exist today. Here are some of the common Facebook hacking methods and some tips on how to avoid them.

1. Phishing

Phishing attacks have been widely targeting businesses and private individuals for years. A hacker sends an email to the victim that tricks them into clicking links that download a virus or malware onto the victim’s device. The virus or malware then sends out the individual’s information – in this case, the victim’s Facebook login details. More advanced phishing attempts use real company names and letterheads, giving them more credibility in their victims’ eyes. This is one of the most common ways accounts get hacked.

Tips to avoid: To make sure that phishing attacks don’t work, take a closer look at the email being sent, noting that a trusted entity would never ask a customer to reveal private account information in an email. Moreover, look at the links provided in the email more closely. Instead of clicking on a link directly, copy it and then open it in your browser; this helps ensure that any malicious redirection built into the link never reaches your applications or your information. (A small sketch of this link check appears at the end of this article.)

2. Keylogging

This one is an old yet highly effective method. Keyloggers record every keystroke from the moment the machine is turned on. When an individual uses another device to log in to their Facebook account, their login details are recorded in the key log.

Tips to avoid: When using another computer, open the task manager first to make sure that no keylogger apps are running at the time of logging in. While it is not advisable to use another person’s device at all, use multi-factor authentication to add another layer of protection to the account. This helps make sure that the password is rendered useless even if it is stolen.

3. Login token interception

Back in 2018, a widespread account hack on Facebook occurred. The hackers were able to intercept the login tokens of several users, which gave them access to the accounts. The scary part is that with the tokens, they do not even need passwords to log in. This is potentially the most dangerous hack yet, and it may not be prevented by normal methods.

Tips to avoid: Always add layers of protection. If login tokens are being stolen, use multi-factor authentication to ensure that you are notified in case of a breach. It is also advisable to change your password regularly, so that stolen tokens are rendered invalid over time: changing your password resets the login tokens required to log in to the account. Checking your login history is another good security measure. If suspicious logins are being initiated on an account, changing your security details will help lock the hackers out.

Maintaining data security is more important now than ever. Data theft has been occurring on a massive scale throughout the internet, which is why one needs to maintain a habit of being aware and conscious of one’s data privacy. With the widespread use of Facebook, it would not be surprising if people take an interest in knowing what is going on in a private account. Countering these attacks depends on the user; knowing when and how they may happen is now an essential part of living with computers. For help recovering your hacked Facebook account, visit Agency.
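Returning to the phishing tip above: much of the value of copying a link comes from simply reading its hostname before trusting it. Here is a minimal sketch using Python’s standard library; the URL is an invented example of the look-alike style phishers favor.

```python
from urllib.parse import urlparse

link = "http://facebook.com.account-verify.example.ru/reset"
print(urlparse(link).hostname)
# -> facebook.com.account-verify.example.ru
# The registered domain is the right-hand end ("example.ru"), not
# "facebook.com", so this link does not go to Facebook at all.
```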
The 12-day UN Climate Change Conference of the Parties (COP) took place in Glasgow earlier this month, after a yearlong delay caused by the COVID-19 pandemic. For over 25 years, the UN has been bringing together world leaders and other interested parties to discuss global climate matters. This year’s summit was the 26th annual meeting, thus the name COP26. The COP26 agenda was focused on securing a recommitment to net zero emissions by 2050 with the expectation of significant progress in terms of reductions over the next decade. In this Flash Report, we summarize several developments that took place during the first week of the summit.
Purdue University innovators have unveiled technology that is 100 times more resilient to electromagnetic and power attacks, to stop side-channel attacks against IoT devices.

Securing IoT devices against side-channel attacks

Security of embedded devices is essential in today’s internet-connected world. Security is typically guaranteed mathematically, using a small secret key to encrypt private messages. But when these computationally secure encryption algorithms are implemented on physical hardware, they leak critical side-channel information in the form of power consumption or electromagnetic radiation. Now, Purdue University innovators have developed technology to kill the problem at the source itself – tackling physical-layer vulnerabilities with physical-layer solutions.

Recent attacks have shown that such side-channel attacks can happen in just a few minutes from a short distance away. Recently, these attacks were used in the counterfeiting of e-cigarette batteries: the secret encryption keys were stolen from authentic batteries to gain market share.

“This leakage is inevitable as it is created due to the accelerating and decelerating electrons, which are at the core of today’s digital circuits performing the encryption operations,” said Debayan Das, a Ph.D. student in Purdue’s College of Engineering. “Such attacks are becoming a significant threat to resource-constrained edge devices that use symmetric key encryption with a relatively static secret key, like smart cards. Our technology has been shown to be 100 times more resilient to these attacks against Internet of Things devices than current solutions.”

The team developed mixed-signal circuits that embed the crypto core within signature-attenuation hardware with lower-level metal routing, such that the critical signature is suppressed even before it reaches the higher-level metal layers and the supply pin. Das said this drastically reduces electromagnetic and power information leakage.

“Our technique basically makes an attack impractical in many situations,” Das said. “Our protection mechanism is generic enough that it can be applied to any cryptographic engine to improve side-channel security.”
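To make the underlying leakage concrete, here is a toy simulation – not Purdue’s design or data, purely an illustration of the textbook correlation attack such hardware defends against. We pretend a device’s power draw tracks the Hamming weight of a key-dependent value; a few hundred noisy “measurements” then suffice to recover the key byte.

```python
import numpy as np

rng = np.random.default_rng(1)
SECRET = 0x3C  # hypothetical key byte the simulated device leaks

def hw(x: int) -> int:
    """Hamming weight (number of set bits) of a byte."""
    return bin(x).count("1")

# Simulate 500 operations whose power draw tracks hw(data XOR key) plus noise.
data = rng.integers(0, 256, size=500)
traces = np.array([hw(int(d) ^ SECRET) for d in data], dtype=float)
traces += rng.normal(0.0, 1.0, size=500)

# The attack: correlate the measured traces against a model for each guess.
scores = [abs(np.corrcoef([hw(int(d) ^ g) for d in data], traces)[0, 1])
          for g in range(256)]
print(f"recovered key byte: {int(np.argmax(scores)):#04x}")  # prints 0x3c
```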
Last time, when we examined Bluetooth, readers will recall that it connects devices using radio waves across relatively short distances, typically up to 10 meters. But while Bluetooth is a hoary technology – at least by the standards of Cooper’s Law – Near Field Communication (NFC) is a newer form of device-to-device (D2D) communication, and one that holds great potential for e-commerce and other applications.

NFC vs Bluetooth

What is Bluetooth? What is NFC? As defined previously, “Bluetooth technology is a standardized, secure protocol for transmitting and receiving data via a 2.4 GHz wireless link.” NFC, on the other hand, is what is known as a “proximity card” or “contactless card” technology, with a read range extending only to 50 cm (about 20 in) – much less than Bluetooth’s range.

NFC encompasses a range of standards for personal communication devices that link to each other via radio frequency (RF). The NFC protocols are based on existing radio frequency identification (RFID) standards – more on that below – including ISO/IEC 14443 and FeliCa (commonly found in Japan). Other standards include ISO/IEC 18092 and others created by the NFC Forum, a consortium established by Nokia, Philips and Sony which now includes over 160 members. NFC operates in the unregulated RF band of 13.56 MHz, which reflects its limited data transfer capacity.

N-Mark Logo for NFC-enabled Devices
Image Source: NFC TAG LTD on Wikimedia

According to nfc.org, there are three kinds of NFC technology: Type A, Type B and FeliCa. The difference between the three is defined by the way they communicate with other devices. Moreover, there are “active” and “passive” NFC devices. A smartphone would be considered an active device, while an NFC (RFID) tag embedded in a packing label would be passive. As one might expect, active devices send, receive and read data; passive devices merely contain data and transmit it when authorized by an active device.

Superficially, Bluetooth, Wi-Fi and NFC appear to be similar, since all three are forms of wireless communication that exchange data between enabled devices. What separates NFC from the others is that it uses electromagnetic (EM) radio fields, whilst the others function using radio transmissions. Prior to the development of Bluetooth Low Energy (BLE), NFC used less power than the legacy Bluetooth standard, but from a power architecture standpoint, both BLE and NFC are remarkably energy efficient. And instead of pairing devices manually, as is the case with Bluetooth, merely placing NFC-enabled devices next to each other establishes a peer-to-peer (P2P) network in less than a second. Once the P2P network is configured, Bluetooth or Wi-Fi can be used to send and receive larger amounts of data, or if longer-range communications are needed. This handover is seamless and occurs without severing the data link.

NFC also turns necessity into a virtue through its reliance on close proximity. The short span between NFC devices minimizes the interference commonly found with Bluetooth devices, especially when several enabled devices are close by.

Radio Frequency Identification (RFID) and NFC

NFC evolved from RFID, a technology in existence for half a century and deployed across the globe. Every RFID tag has a memory chip that stores data, and an antenna. It’s a ubiquitous tool widely used for inventory control and package tracking. Ever take your pet to the veterinarian to get “micro-chipped”? What about the “EZ Pass” sticker on your car’s windshield so you can enter the tollway without stopping at a tollbooth? Those are but two applications of RFID technology.

An NFC-enabled Austrian Federal Railways ticket stamping machine
Image Source: Sae1962 on Wikipedia

While RFID works fine from a distance of several feet, as can be inferred from the examples above, NFC has a much smaller range of only a few inches. Also, unlike RFID, NFC can be used for two-way communication and has better security features. Combined with host card emulation, NFC can transform your smartphone into a digital wallet. Welcome to the world of contactless payment, where one tap of your phone can buy groceries, redeem electronic coupons and amass merchant loyalty credits. This short YouTube video from Practical NFC shows “How NFC Near Field Communication Works.”

While similarities between NFC and Bluetooth abound – both are wireless, both need specialized hardware and both share data between devices – they differ in a number of ways, some of which were discussed above. Another dichotomy is found in the nature of the connection between linked devices: NFC connections are characteristically of short duration, while Bluetooth “provides a more stable connection for a longer duration.”

So is Bluetooth better than NFC? A few years ago, Apple seemed to think so, but they’ve now joined Samsung and Google in adopting the NFC mobile payment standard. Mobile payment is certainly a big deal for this reason: EuroPay MasterCard Visa (EMV) Point of Sale (POS) terminals have been deployed in hundreds of thousands of retail locations, because card processing fees will increase if merchants don’t have these POS terminals available to consumers by 2016. To use the 20th-century Betamax v. VHS metaphor for supremacy in mobile payments, NFC (VHS) has apparently won.

As hubpages.com blogger Mrinal Saha writes, “NFC is not better than Bluetooth; neither is Bluetooth better than NFC. Both are quite different technologies with somewhat similar functions. Both have their advantages and disadvantages depending upon the implementation. For example, to share a song, I prefer Bluetooth whereas for a small link, I go for NFC.”

Impeding the adoption of one mobile payment standard everywhere is the world’s largest retailer. Walmart Pay, although it may sound like Google Pay, Apple Pay or Samsung Pay, is not compatible with these NFC-type platforms. It uses an app that scans a Quick Response (QR) code at the register when the shopper pays for the purchase.
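For readers who want to experiment with tags directly, here is a minimal sketch using the open-source nfcpy library with a USB NFC reader. This assumes nfcpy supports your reader; the callback pattern follows the library’s documented read example, so treat it as a starting point rather than production code.

```python
import nfc  # pip install nfcpy; requires a supported USB NFC reader

def on_connect(tag):
    # Called when a tag enters the field; print its type/ID and NDEF data.
    print(tag)
    if tag.ndef:
        for record in tag.ndef.records:
            print(record)
    return True  # hold until the tag is removed from the field

clf = nfc.ContactlessFrontend("usb")
try:
    clf.connect(rdwr={"on-connect": on_connect})
finally:
    clf.close()
```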
Some of the biggest social media sites, like Facebook, Instagram, YouTube, Snapchat, WeChat, and TikTok, could be forced to give the government information about posts and how many people saw them. This would help the government decide whether or not to tighten laws about spreading false information. Social media companies use this kind of information to target ads, which has helped them become some of the most valuable companies in the world, and they guard it carefully.

Michelle Rowland Remarks On Media Misinformation
Image Credits: SMH.Com.AU | OSCAR COLMAN

Michelle Rowland, the Australian minister for communications, admits she has no idea what the data will show, but is concerned that media regulators will not be able to adequately address the issue of misinformation if they cannot see it. Even though there is an industry code of conduct that is supposed to show how the industry is working to stop the spread of false information, the Australian Communications and Media Authority does not have the power to demand the data. In a wide-ranging interview, Rowland discussed how she plans to handle the technology platforms that absorb nearly two hours of Australians’ time daily, according to industry estimates. “However, I also think that it is such an important topic that regulators need to be able to get information to share with governments.”

Social Media Giants’ Partial Information And Transparency Reports

Disinformation spread through social media campaigns has been blamed for influencing elections, such as Donald Trump’s win back in 2016, though this label is often met with fierce disagreement when applied to specific posts. Commonly, platforms will only reveal a subset of information in their transparency reports, such as the number of posts removed for spreading false claims about COVID-19, but not the claims themselves, the number of posts that remained online, or the identities of those who disseminated them.

A new Labor government was elected only four weeks ago, and Rowland makes it clear that it is keeping an open mind about the data and how it will respond to it. The government is not committing to the previous government’s plans to make the industry codes potentially enforceable. She also refuses to reveal whether or not the media authority plans to release the information it collects.

Meta, the parent company of Facebook, Instagram, and WhatsApp, came under fire at the end of last year after a whistleblower leaked internal research suggesting that using these platforms negatively impacted teenage girls’ mental health. Meta disputed critical descriptions of the study and eventually released some of the findings to the public. “We just found out that Instagram is doing research on body image issues and how that affects teens. That’s fine with me,” Rowland said. She argues that some of the sector’s criticisms could be mitigated if companies were more open to sharing data publicly. By contrast, she is cautious about the prospect of gaining access to the algorithms that control online content: “I don’t think any government has figured this out yet,” Rowland said.
Beyond Passwords: A Better Way to Verify Users

There are just a few known problems with passwords:

- Easily phished and cracked with brute-force attempts (with the help of password-guessing tools), or stolen if stored in plaintext or merely hashed
- Years of password dumps have created a handy dictionary that attackers can use for remote login attempts
- They put the security burden on users (hard to remember, hard to make unique, hard to keep up with the latest password security tips – these are NIST’s recommendations)
- The proliferation of web apps requires a lot of passwords, making them hard to keep track of
- As a result, password reuse is very high, allowing an attacker to log in to several accounts if they’re able to breach just one
- The user experience is disrupted for the sake of password re-entry or account lockouts due to password failures

And of course, there’s that infamous Verizon Data Breach Investigations Report (DBIR) that we love to quote: eighty-one percent of hacking-related breaches leveraged stolen and/or weak passwords.

As a result, the concept of passwordless authentication has popped up in the industry as a solution. These approaches may rely on methods like biometrics (your fingerprint via a device like your smartphone), or unique passcodes sent via email or text message that you enter into a login form to verify your identity.

Other forms of additional authentication have been implemented to help address password-related risks, like knowledge-based authentication (KBA). Commonly used by the financial industry, answering KBA questions (drawing on your zip code, birthdate or Social Security number) can reset your account password, effectively allowing anyone into your accounts who guesses correctly or sources the answers from social media or public records.

Modern, Secure & Easy Multi-Factor Authentication (MFA)

To move beyond passwords, modern and advanced multi-factor authentication (MFA) can provide not only more secure methods, but additional insight, policies and controls that work to strengthen your access security. First, the MFA (also known as two-factor authentication) methods:

U2F: Security Tokens

Universal 2nd Factor is an open authentication standard developed by Google and managed by the FIDO (Fast IDentity Online) Alliance. A far cry from security tokens that generate codes users must type in, a U2F token comes in the form of a small USB device plugged into your laptop. Once configured, you only need a supported web browser to log in – tap the device once to complete two-factor authentication and log in securely. See U2F in action, with Duo.

In addition to usability advancements, a U2F device protects private keys with a tamper-proof component known as a secure element (SE), helping to mitigate phishing attempts. The YubiKey NEO provided by Yubico is an example of a U2F device supported by Duo’s 2FA. Here’s a video to help you understand how U2F works.

Duo Push: Mobile App-Enabled

Using a mobile authentication app on your smartphone, you can also easily approve push notifications to complete 2FA and log in securely. This method is more resilient to man-in-the-middle (MitM) attacks and a more secure option than SMS-based 2FA, which can be phished more easily. It’s also fast, convenient, and doesn’t require carrying around a second device.

A New Way to Authenticate: WebAuthn

Another new standard known as Web Authentication, or WebAuthn, allows users to easily register authenticators (hardware security keys or Trusted Platform Module devices) with popular web browsers. The standard is supported by Google, Microsoft and Mozilla. This type of authentication would allow a user to replace traditional passwords by authenticating with their device, if they provide an authentication provider (like Duo) with biometric verification. It basically combines the factor something you are (biometrics) with something you have (a hardware security token), eliminating the need for something you know (inherently less secure passwords). A sketch of what WebAuthn registration looks like under the hood appears at the end of this article.

However, the standard is still being developed and refined, and is unlikely to completely replace passwords anytime soon, as Nick Steele states in Web Authentication: What It Is and What It Means for Passwords. But it does provide hope for more secure and user-friendly authentication options in the future.

Check out Duo’s Two-Factor Authentication Evaluation Guide to learn about different two-factor authentication vendors and solutions, and learn more about What is Modern Two-Factor Authentication (2FA)?
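As promised above, here is the shape of a WebAuthn registration request. The field names come straight from the W3C spec’s PublicKeyCredentialCreationOptions; the values (relying-party ID, user details) are invented placeholders. A server sends options like these, and the browser hands them to navigator.credentials.create().

```python
import os

# Illustrative WebAuthn registration options, expressed as a Python dict.
creation_options = {
    "challenge": os.urandom(32),  # random nonce; the server verifies it later
    "rp": {"id": "example.com", "name": "Example Corp"},  # the relying party
    "user": {
        "id": os.urandom(16),     # opaque, stable user handle (not the email)
        "name": "alice@example.com",
        "displayName": "Alice",
    },
    # Acceptable signature algorithms; -7 is the COSE identifier for ES256.
    "pubKeyCredParams": [{"type": "public-key", "alg": -7}],
    "authenticatorSelection": {"userVerification": "preferred"},
    "timeout": 60000,             # milliseconds
}
```

The authenticator answers with a new public key plus an attestation; the private key never leaves the security key or TPM, which is what removes the “something you know” from the exchange.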
A New Jersey high school in September transformed a 20,000-square-foot auditorium known as “the pit” into the Marauder Innovation Learning Lab (MiLL), a $3 million STEAM-focused extension of Mount Olive High School. The school has always set itself apart as an emerging technology-focused school. In 2013, a local nonprofit donated a MakerBot Replicator 2X, an experimental 3-D printer, and the school later acquired a couple of extra printers.

“Our goal with the MakerBot Innovation Center is to provide students a learning environment that replicates what industry is like,” said design teacher Megan Boyd in a MakerBot blog post. “We’ve been talking to many leaders at the college and industry level to better understand what skills students will need to succeed. We heard over and over again that in our rapidly evolving economy, skills like problem-solving and collaboration will be much more important for students than purely technical skills.”

Following demand from students and educators, Mount Olive High School decided to upgrade its 3-D printing center in the spring of 2016. With financing from the local board of education and the Department of Defense, the school worked with MakerBot to install a MakerBot Innovation Center, the first such center in a secondary school worldwide. The MiLL revamp allowed the school to transform an area that had been off-limits to students for more than a decade, due to design flaws and safety concerns, into a useful educational space. The space houses a ThinkerSpace, where students can meet and discuss projects; a MakerBot Innovation Center, which houses the 3-D printers; and a Workshop area with workbenches and tools where students can take their prototypes to the next level. The goal is to bring together different faculties to help students learn how to approach problems in a holistic manner. The main courses offered in the MiLL are engineering and industrial design.

“While our engineering courses are focused on the more technical aspects of prototyping, such as assembly design, our industrial design classes very much focus on product design, aesthetics, and user experience,” said David Bodmer, a teacher at Mount Olive, in a MakerBot blog post. “Combining the arts with more traditional STEM learning is really where the magic happens.”

With 33 printers, the school can now accommodate an entire class’ printing needs at once. “With access to 33 MakerBot 3D Printers, we can print entire class loads at once without having to individually load prints onto a flash drive and cue them for printing,” Boyd said. This increase in printing capability speeds up the design and learning process for students. Rather than having to wait in a lengthy queue to print their work, students can print their prototypes quickly and get feedback the same day, rather than weeks later.

“When you can quickly make changes and evolve your idea, it’s easier to take criticism from others,” said Bodmer. “We consider that part of the core skill set that students need to succeed. Students need to learn to be flexible in their thinking and be receptive to feedback to refine and develop their ideas. We don’t know what these students will end up doing when they enter the job market but these are the type of skills that will benefit them in any career path.”

According to MakerBot, from September to November, students were able to print more than 700 objects. MakerBot also explained that plans are in the works for a STEAM (science, technology, engineering, arts, and math) Capstone course for the 2017 school year. The school hopes to partner with local companies and nonprofits that will involve groups of students in their existing projects and then evaluate the students’ work.
Query profiling is a vital process for any SQL database. It helps in understanding the performance of queries and tuning them accordingly. Businesses are increasingly moving towards SQL servers to manage their data because of the benefits they provide, such as data security, better performance and scalability. With query profiling, companies can get insight into which queries are costing them most and make adjustments to improve their business.

What is SQL?

SQL (Structured Query Language) is a programming language used to manage data in relational databases. Its query syntax allows you to retrieve, insert, update, and delete data from the database. SQL is most popularly used for retrieving data, which can then be manipulated with other programming languages. It can also be used to create new databases and tables, as well as to run queries against existing ones. In addition to its use in relational databases, SQL-style querying is also applied to non-relational stores like XML databases and some NoSQL databases.

What is query profiling in SQL?

Query profiling is a technique used to identify the most efficient SQL queries for your database. It helps you improve performance and reduce costs by analyzing which queries are running and how long they take. This means evaluating how much time a SQL query takes to run, how many times it has been executed, and what it cost in terms of resources consumed. Query profiling also helps in identifying the most common patterns of queries, which can be used to optimize performance. For example, it can help identify bottlenecks in your SQL Server instance and fix them before they become a problem. It also helps users understand what they are doing wrong in their queries. The most common form of query profiling is execution plan analysis: examining an execution plan to determine the best way to execute a given query or set of queries. (A small runnable illustration appears at the end of this article.)

Query profiling optimization

Profiling optimization is a process that helps in identifying slow queries and making them faster. It is also helpful for learning how SQL queries are being used, which can reveal the best practices for your data and how to optimize your system. The benefits of SQL query profiling optimization include:

- Finding and fixing the slowest, most resource-hungry queries
- Reducing the time and hardware cost of running your workload
- Learning how your data is actually queried, so you can index and design for it

Using query profiling

The importance of query profiling lies in its ability to identify and fix inefficient queries, which can improve user experience as well as increase productivity. The process can be broken down into three steps:

1. Identify all the queries that are running on the database.
2. Look for patterns in how and when those queries run.
3. Categorize the queries based on their frequency and cost.

The most common types of inefficient queries are those that involve a lot of data-intensive operations. These queries are usually both time-consuming and resource-intensive. In order to fix them, you need to figure out what causes them and then make changes accordingly.

Talk to the SQL experts

Query profiling helps identify the most expensive operations in a database, which can then be optimized to improve performance. It lets businesses know which queries are costing them most so they can make adjustments accordingly. With the increasing amount of data being stored in SQL-compliant databases, it becomes difficult for organizations to identify what exactly is stored in their database and, more importantly, what is not. The SQL database consultants at Everconnect will create SQL solutions designed specifically for your server environment needs. Their SQL specialists guarantee they will improve your worst query by 50% in three days or less – and if not, you won’t be charged. Talk to them today and find out more.
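As a concrete illustration of execution-plan analysis: the article discusses SQL Server, but the same idea can be shown with zero setup using SQLite from Python’s standard library. The plan output tells you whether a query can use an index or must scan the whole table – exactly the kind of signal profiling surfaces.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
con.execute("CREATE INDEX idx_customer ON orders(customer)")

# EXPLAIN QUERY PLAN shows how SQLite intends to execute the statement.
for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?",
        ("acme",)):
    print(row)  # detail reports e.g. 'SEARCH ... USING ... INDEX idx_customer'

# Compare against a predicate the index cannot help with (leading wildcard):
for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer LIKE '%a%'"):
    print(row)  # detail reports 'SCAN orders' - a full table scan
```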
Cisco CCNA Discontiguous Addressing

If you create a VLSM network, sometimes you may find that the backbone connecting buildings together is a different class of network. This is called discontiguous addressing. By default, routing protocols will not work across discontiguous networks. By using the “no auto-summary” command on the network boundaries, routing protocols will be able to work across a discontiguously addressed network.

Cisco CCNA VLSM Question

Network 1: 192.168.10.0/26 – Equates to 192.168.10.0 255.255.255.192, block size of 64
Network 2: 192.168.10.64/27 – Equates to 192.168.10.64 255.255.255.224, block size of 32
Network 3: 192.168.10.96/28 – Equates to 192.168.10.96 255.255.255.240, block size of 16
Serial link 1: 192.168.10.112/30 – Equates to 192.168.10.112 255.255.255.252, block size of 4
Serial link 2: 192.168.10.116/30 – Equates to 192.168.10.116 255.255.255.252, block size of 4

Cisco CCNA What is Route Summarization?

Route summarization forms a supernet: an IP network formed from the combination of two or more networks (or subnets) with a common Classless Inter-Domain Routing (CIDR) routing prefix. In other words, it is a block of contiguous subnetworks addressed as a single subnet. For example, subnets 192.168.0.0/24 through 192.168.3.0/24 can be summarized as 192.168.0.0/22. Be careful not to over-summarize, as it can cause black holes in routing.

Cisco CCNA Route Summarization

Summarization is not always possible: unless you design and implement the network with summarization in mind, you will not find contiguous boundaries on which to implement it. Route summarization can significantly reduce the size of routing tables if implemented correctly. If not implemented correctly, it can cause problems such as black holes when routes are over-summarized.

Cisco CCNA Implementing Summarization

Summarization is easy if you just know your block sizes.

Cisco CCNA To Find a Summary Address

Since this is a block size of 32, the mask would be 255.255.224.0. Know your powers of 2 and this goes a lot faster.

Cisco CCNA Summary Question

The subnet is 192.168.144.0 and the mask is 255.255.240.0. This is a block of 16 in the third octet. A router holding this summary route will forward packets destined for addresses in the following range:

- 192.168.144.0 (network address, not forwarded)
- 192.168.144.1 (beginning of host range)
- 192.168.159.254 (end of host range)
- 192.168.159.255 (broadcast address, forwarded)
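A quick way to check summarization arithmetic like the /22 example above is Python’s standard ipaddress module. A minimal sketch, standard library only:

```python
import ipaddress

# Four contiguous /24s collapse into a single /22 supernet.
nets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]
print(list(ipaddress.collapse_addresses(nets)))
# -> [IPv4Network('192.168.0.0/22')]

# Dropping 192.168.0.0/24 shows why alignment matters: the remaining three
# cannot form one /22, because a /22 must start on a four-network boundary.
print(list(ipaddress.collapse_addresses(nets[1:])))
# -> [IPv4Network('192.168.1.0/24'), IPv4Network('192.168.2.0/23')]
```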
At times, your mouse might not work for various reasons. This article explains how to use your laptop’s touchpad as an alternative in the UX client.

First, enable the scrolling function on your touchpad. You can check with your IT department to learn how to enable touchpad scrolling on your laptop; the steps differ depending on the operating system and the brand of laptop you are using.

- To scroll: Place two fingers on the touchpad, and slide horizontally or vertically.
- To zoom in or out: Place two fingers on the touchpad, and pinch in or stretch out.
- To show more commands (similar to right-clicking): Tap the touchpad with two fingers, or press on the lower-right corner.
- To drag a window: Double-tap and drag the menu bar (top of the app window).
- To view all the open windows: Place three fingers on the touchpad and swipe them away from you.
- To show Task view: If you are viewing all the open windows (from the step above), swipe up again with three fingers.
- To show the desktop: Place three fingers on the touchpad and swipe them towards yourself.
- To switch between the open windows: Place three fingers on the touchpad and swipe right or left.
- To launch Cortana/Action Center: Tap the touchpad with three fingers.
Dynamic routing terminology

Dynamic routing is a complex subject. There are many routers on different networks, and all can be configured differently. It becomes even more complicated when you add to this each routing protocol having slightly different names for similar features, and many configurable features for each protocol. To better understand dynamic routing, here are explanations of some common dynamic routing terms:

- Aggregated routes and addresses
- Autonomous system (AS)
- Area border router (ABR)
- Neighbor routers
- Route maps
- Access lists
- Bi-directional forwarding detection (BFD)

For more details on a term as it applies to a dynamic routing protocol, see one of Border Gateway Protocol (BGP) on page 338, Routing Information Protocol (RIP) on page 300, or Open Shortest Path First (OSPF) on page 377.

Aggregated routes and addresses

Just as an aggregate interface combines multiple interfaces into one virtual interface, an aggregate route combines multiple routes into one. This reduces the amount of space those routes require in the routing tables of the routers along that route. The trade-off is a small amount of processing to aggregate and de-aggregate the routes at either end. The benefit of this method is that you can combine many addresses into one, potentially reducing the routing table size immensely. The weakness is that if there are holes in the address range you are aggregating, you need to decide whether it’s better to break it into multiple ranges or accept the possibility of failed routes to the missing addresses. For information on aggregated routes in BGP, see Border Gateway Protocol (BGP) on page 338.

To manually aggregate the range of IP addresses from 192.168.1.100 to 192.168.1.103 (a runnable check of this walkthrough appears at the end of this section):

1. Convert the addresses to binary.
2. Determine the maximum number of matching bits common to the addresses. There are 30 bits in common, with only the last 2 bits being different.
3. Record the common part of the address: 11000000 10101000 00000001 011001XX (192.168.1.100 through .103).
4. For the netmask, assume all the bits in the netmask are 1 except those that are different, which are 0: 11111111 11111111 11111111 11111100 = 255.255.255.252.
5. Combine the common address bits and the netmask: 192.168.1.100 255.255.255.252. Alternately, the IP mask may be written as a single number: /30.
6. As required, set variables and attributes to declare that the routes have been aggregated, and which router did the aggregating.

Autonomous system (AS)

An Autonomous System (AS) is one or more connected networks that use the same routing protocol and appear to be a single unit to any externally connected networks. For example, an ISP may have a number of customer networks connected to it, but to any networks connected externally the ISP appears as one system or AS. An AS may also be referred to as a routing domain. It should be noted that while OSPF routing takes place within one AS, the only part of OSPF that deals with the AS is the AS border router (ASBR).

There are multiple types of AS, defined by how they are connected to other ASes. A multihomed AS is connected to at least two other ASes and has the benefit of redundancy: if one of those ASes goes down, your AS can still reach the Internet through its other connection. A stub AS has only one connection, and can be useful in specific configurations where limited access is desirable.

Each AS has a number assigned to it, known as an ASN.
In an internal network, you can assign any ASN you like (a private AS number), but for networks connected to the Internet (a public AS) you need an officially registered ASN from the Internet Assigned Numbers Authority (IANA). ASNs from 1 to 64,511 are designated for public use. As of January 2010, AS numbers are 4 bytes long instead of the former 2 bytes: RFC 4893 introduced 32-bit ASNs, which FortiGate units support for BGP and OSPF.

Do you need your own AS?

The main factors in deciding if you need your own AS, or if you should be part of someone else’s, are:

- exchanging external routing information
- many prefixes should exist in one AS as long as they use the same routing policy
- when you use a different routing protocol than your border gateway peers (for example, your ISP uses BGP and you use OSPF)
- connection to multiple other ASes (multihomed)

You should not create an AS for each prefix on your network. Neither should you be forced into an AS just so someone else can make AS-based policy decisions on your traffic. There can be only one AS for any prefix on the Internet. This is to prevent routing issues.
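The manual aggregation walkthrough above can be verified with Python’s standard ipaddress module; summarize_address_range computes the minimal CIDR blocks covering a range of addresses.

```python
import ipaddress

# 192.168.1.100-103 share 30 leading bits, so they fit one /30 block.
blocks = ipaddress.summarize_address_range(
    ipaddress.ip_address("192.168.1.100"),
    ipaddress.ip_address("192.168.1.103"),
)
print(list(blocks))
# -> [IPv4Network('192.168.1.100/30')]  i.e. 192.168.1.100 255.255.255.252
```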
In the digital era, cybersecurity is a significant issue. We’ve all heard about major corporate data breaches, but threat actors are increasingly targeting small companies. Whether your firm is a multinational conglomerate or a tiny sole proprietorship, it is critical to have robust and secure cybersecurity protection.

Many cyber thieves see small businesses as easy targets for obtaining clients’ personal credentials, bank accounts, and other sensitive information. These threat actors – hackers – can also hold organisations to ransom through malicious software called ransomware, where computers on a corporate network become infected and their data is locked up through encryption. This unwelcome encoding uses a key that is only made available to the victim after they transfer (often substantial) funds, usually in Bitcoin, to the attacker.

Steps You Can Take to Improve your Cybersecurity

So how do you stop these attacks from happening? The good news is that there are several quick and simple ways to start protecting your organisation from attackers.

The first step is to make sure you’re keeping all the software on your computer systems, including the operating system itself, up to date. That is particularly important for Windows desktops and any servers you may have; Macs, though they have traditionally been less of a target, are far from immune. Modern operating systems are pretty good at keeping themselves updated if you’ve selected that option.

One of the fundamental routes for attackers to infiltrate your IT systems is through poor passwords. First, ensure that no devices, including computers, mobile devices, routers, and firewalls, are using easy-to-guess or factory-default passwords. You should also consider setting up a password policy. It needn’t be a daunting task, and you don’t need to reinvent the wheel: many tech organisations like Microsoft and Google have already done the heavy lifting for you. As part of your password policy, it’s also an increasingly wise idea to institute two-factor authentication to provide an additional layer of security.

Using a Firewall

The next step is usually to install a firewall. A firewall monitors incoming and outgoing traffic to assist in the prevention of cyber-attacks on your systems, and is typically the first line of defence in ensuring the security of your network. There are, broadly speaking, two main types: software and hardware firewalls. A software firewall is code installed on the computer itself that watches incoming and outgoing traffic for potential issues. This is an adequate, if not ideal, solution for a home network, but for even a small business a hardware firewall is recommended, because software firewalls only really protect the device on which they are installed. The central advantage of a hardware solution is that it can protect every device on the network from a centralised location through which all traffic moves – like security guards at an airport checkpoint.

How To Select the Right Firewall

As a firewall is such an important step towards protecting your organisation from IT security threats, it’s important to get the right solution in place. In fact, it’s possible your existing internet router has a simple firewall built in. But much like a software firewall, these are usually sufficient for home users and don’t have the capabilities of a dedicated hardware firewall. Choosing a dedicated firewall with the capabilities you need to safeguard your business from dangerous hackers, spyware, and viruses may seem complicated, so here are several things to consider when making a choice:

- Is it effective in protecting against attacks? A quick search for the model and its reviews is a great place to start.
- Is it simple to set up? Unless you have dedicated IT staff, your firewall should be relatively easy to get running out of the box.
- Is it easy to manage? The more complex firewalls require a lot of effort to maintain.
- Is the company established? We’ve seen several manufacturers go out of business and therefore stop developing updates and patches for their firewalls, so the cheaper option isn’t always cheaper in the long run.

Pitfalls and Limitations of Firewalls

A well-maintained, reputable firewall for small business is incredibly important for good cyber security, but firewalls do have their limitations. They aren’t “intelligent”, so they only do what they are explicitly told. If their settings and configuration instruct them to do the wrong things, they can actually do more harm than good. As firewalls are essentially traffic lights for data, saying what can go and what must stop, a misconfiguration can easily (and all too frequently) halt legitimate traffic and block genuine use of the network.

Furthermore, mobile devices are, by their very nature, not always on the corporate network, so they can become infected elsewhere and then rejoin the network. As the device wasn’t behind the firewall at the time of infection, the firewall can’t help prevent that intrusion. Lastly, far too many organisations don’t keep their firewalls up to date, meaning that new threats aren’t being addressed. Even an annual review of your IT security is better than nothing. Setting aside time to make sure you have updated all your systems and that their settings are optimally configured, including your first line of defence – your firewall – is vital for cyber security.

Software updates, password policies, and installing a firewall are not a silver bullet. They are the equivalent of locking your doors and windows: important for deterring many of the attempts that might otherwise occur, but they aren’t going to stop determined burglars. Internet 2.0’s solution offers far more protection than a typical firewall, and is managed by a team of ex-military and ex-intelligence cyber experts, meaning you don’t have to hire dedicated staff to maintain your security. If you’d like to understand how you can “set & forget” your cyber security solution, contact Internet 2.0 today for a confidential conversation.
Scammers launch thousands of phishing attacks every day, and they’re often successful. The FBI’s Internet Crime Complaint Center reported that people lost $57 million to phishing schemes in one year in 2020. This is a persistent, costly and escalating issue. Even those who consider themselves IT savvy can easily overlook the telltale signs of a phishing attack, and the FBI is urging people to be alert to them. The signs can be anything from an email claiming suspicious activity or login attempts have been noticed, to claims that there is a problem with your account or your payment information.

The FBI has created a list of top tips for keeping yourself safe from phishing attacks:

Protect your computer by using security software. Set the software to update automatically so it can deal with any new security threats.

Protect your mobile phone by setting software to update automatically. These updates could give you critical protection against security threats.

Protect your accounts by using multi-factor authentication. Some accounts offer extra security by requiring two or more credentials to log in to your account. This is called multi-factor authentication. The additional credentials you need to log in to your account fall into two categories:

- Something you have – like a passcode you get via an authentication app, or a security key.
- Something you are – like a scan of your fingerprint, your retina, or your face.

Multi-factor authentication makes it harder for scammers to log in to your accounts if they do get your username and password.

Protect your data by backing it up. Back up your data and make sure those backups aren’t connected to your home network. You can copy your computer files to an external hard drive or cloud storage. Back up the data on your phone, too.

If you do fall foul of an attack, the FBI says to report it so the federal government can step in; the information you give can help fight the scammers. If you get a phishing email, forward it to the Anti-Phishing Working Group at [email protected]. If you got a phishing text message, forward it to SPAM (7726). Report the phishing attack to the FTC at ReportFraud.ftc.gov.
The term anachronism is used for something misplaced in time. An example is classical paintings in which a biblical event is shown with people wearing clothes from the time when the painting was made.

The most frequent example of lack of timeliness – or should we say anachronism – in data management today is having an old postal address attached to a party master data entity. A remedy for avoiding this kind of anachronism is explained in the post The Relocation Event.

In a recent blog post called 3-2-1 Start Measuring Data Quality, Janani Dumbleton of Experian QAS examines the timeliness dimension of data quality along with five other important dimensions. As noted therein, one impact of anachronism could be: "Not being aware of a change in address could result in confidential information being delivered to the wrong recipient."

Hope you got it.
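As a minimal sketch of what such a timeliness check could look like in practice, the snippet below flags address records that were last verified before a known relocation event. The table layout, field names, and the idea of joining addresses against a relocation-event feed are illustrative assumptions, not anything prescribed in the posts cited above:

```python
from datetime import datetime, timezone

# Hypothetical records: each party's address carries the date it was last
# verified, and relocation events record when a party actually moved.
addresses = {
    "party-42": {"street": "1 Old Road", "verified": datetime(2012, 3, 1, tzinfo=timezone.utc)},
}
relocation_events = [
    {"party_id": "party-42", "moved_on": datetime(2013, 6, 15, tzinfo=timezone.utc)},
]

def stale_addresses(addresses, relocation_events):
    """Flag addresses last verified before a known relocation event."""
    stale = []
    for event in relocation_events:
        record = addresses.get(event["party_id"])
        if record and record["verified"] < event["moved_on"]:
            stale.append(event["party_id"])  # anachronism: address predates the move
    return stale

print(stale_addresses(addresses, relocation_events))  # ['party-42']
```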
By Jane M. Orient, M.D., Tucson, AZ, Executive Director, Association of American Physicians and Surgeons (AAPS)

The science of hanging was developed in the 19th century to make executions more humane, quicker, and less error-prone. The professional executioner was supposed to break the prisoner's neck swiftly, not have him die slowly by asphyxiation – and without ripping off his head. For this purpose, the drop table was devised to calculate the length of the rope. The drop (the distance the culprit was supposed to be in free fall before the noose stopped him) depended on his weight. For a 200 lb man, a drop of about 5 feet was needed to develop the requisite 1,000 ft-lbs of energy.

"Some bones" in Jeffrey Epstein's neck were fractured, including the hyoid. The other bones of the neck are the cervical vertebrae, which are not so easily broken. But did the prison bedsheet do it? Epstein was nearly 6 ft tall. The upper bunk was less than 7 ft high. A rope attached there could not have allowed a sufficient drop to break Epstein's neck, even if he somehow contrived to keep his knees bent. Could prison bedsheets be fashioned into a rope that could halt the fall of a 200 lb man? Why not get a 200 lb sandbag and find out?

(New York City's medical examiner on Friday ruled Jeffrey Epstein's death a suicide by hanging. The 66-year-old was found dead in his Manhattan jail cell one week ago. Epstein was scheduled to be tried next year on sex trafficking charges involving underage girls. Epstein's lawyers say they are not satisfied with the medical examiner's findings, and that they are planning their own investigation. Courtesy of CBS This Evening and YouTube. Posted on Aug 17, 2019.)

People have managed to hang themselves in prison, but I suspect that they died by strangulation.

To assure the credibility of the autopsy, a famous 83-year-old pathologist observed the procedure. He is most famous for the Warren Report on the assassination of President Kennedy. Did the Kennedy autopsy miss a fist-sized exit wound in the back of the skull, through which much of the brain was extruded, or were the surgeons and nurses who attended JFK at Parkland Memorial Hospital either mistaken or lying?

If one wanted to squelch conspiracy theories, why not videotape the autopsy and livestream it to several pathologists to guard against alteration or loss of evidence? So far, questions are multiplying faster than answers.

(A will for Jeffrey Epstein was filed in the Virgin Islands before his death in federal custody. There's new scrutiny on Epstein's one-time friend Prince Andrew, as Buckingham Palace seeks to distance the Duke of York from the accused sex trafficker after British media published a video they say shows the two together in 2010. Courtesy of NBC News and YouTube. Posted on Aug 19, 2019.)
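Taking the article's figures at face value, its skepticism rests on simple arithmetic. The drop table treats the energy delivered as weight multiplied by free-fall distance:

E = W × d = 200 lb × 5 ft = 1,000 ft-lbs

With an anchor point below 7 ft and a man nearly 6 ft tall, the available free fall is on the order of a foot, giving roughly 200 ft-lbs, a fraction of the drop-table figure. This merely restates the article's own numbers; it is not an independent forensic claim.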
Life before containerization was a sore spot for developers. The satisfaction of writing code was constantly overshadowed by the frustration of attempting to force code into production. For many, deployments meant hours of reconfiguring libraries and dependencies for each environment. It was a tedious process prone to error, and it led to a lot of rework.

Today, developers can deploy code using newer technology such as cloud computing, containers, and container orchestration. This guide discusses each of these technologies. It will also answer the question: "What are the differences between Elastic Beanstalk, EKS, ECS, EC2, Lambda, and Fargate?"

- AWS Cloud Computing Concepts
- Elastic Beanstalk
- Elastic Kubernetes Service (EKS)
- Elastic Container Service (ECS)
- Elastic Compute Cloud (EC2)
- Lambda
- Fargate
- Comparing Services

AWS Cloud Computing Concepts

Cloud computing refers to accessing IT resources over the internet. Resources include things such as servers, storage, deployment tools, and applications. The AWS term for cloud computing is "compute," which refers to virtual servers where developers place their code. In AWS, developers choose the specifications for their server, such as the operating system.

Cloud computing offers several benefits, including:

- Cost Savings: Companies won't need to purchase servers to run their applications.
- Time Savings: Developers don't need to worry about managing the servers. The cloud computing vendor handles all maintenance tasks.
- Security: The cloud vendor implements and manages all security tasks for the resources.
- Scalability: Resources are accessed on demand. If the developer needs more resources, the cloud computing platform can automatically allocate whatever is required.
- Flexibility: Developers can choose configuration options best suited to their needs.
- Reliability: Computing vendors provide an availability guarantee (usually 99.99%) to ensure applications are always available.

Containerization

Containerization refers to the process of packaging code into a deployable unit. This unit is called a container, and it holds everything needed to run that code. From the application's code to the dependencies and OS libraries, each unit contains everything the developer needs to run the application. The most widely known container technology is Docker, an open-source tool with broad community support.

The benefits of containerization include:

Easier Application Deployment

Application deployment is one of the most basic yet effective benefits. With containers, developers have a much easier time deploying their applications because what once took hours now takes minutes. In addition, developers can use containers to isolate applications without having to worry about them affecting other applications on the host server.

Better Resource Utilization

App containers allow for greater resource utilization. One of the main reasons people deploy containers is that doing so lets them use fewer physical machines. Someone running many applications on a single machine will often find that one or more of those applications under-utilizes its resources. Containerization helps deal with this problem by allowing developers to create an isolated environment for each application. This approach ensures that each app has the resources to run effectively without impacting others on the same host. It also reduces the chance of introducing malicious code into production.
Containers provide a lightweight abstraction layer, allowing developers to change the application code without affecting the underlying operating system. Plus, the isolation attributes of containerized applications ensure that the performance of one container won't affect another.

One of the most significant benefits of app containerization is that it provides a way to isolate applications. This is especially important when hosting multiple applications on the same server. Containerization also simplifies deployment and updates by making them atomic. With containers, developers can update an application without breaking other applications on the same server. Containers also allow developers to deploy an updated version and roll it back if necessary. As a result, developers can quickly deploy and update their applications and then scale them without downtime or unexpected issues.

Since containers isolate applications from one another and from the host system, vulnerabilities in one application won't affect other apps running on the same host. If developers find a vulnerability, they can address it without impacting other applications or users on the same server.

Images and Containers

Images are templates used to create containers, and they are made from the command line or through a configuration file. The file is a plain text file that contains a list of instructions for creating an image. The file's instructions can be simple, such as pulling an image from the registry and running it, or complex, such as installing dependencies and then running a process.

Images and containers work together because a container is what runs the image. Although images can exist without containers, a container requires an image to run. Putting it all together, the process for getting an image to a container and running the application is as follows:

- The developer codes the application.
- The developer creates an image (template) of the application.
- The containerization platform creates the container by following the instructions in the configuration file.
- The containerization platform launches the container.
- The platform starts the container to run the application.

Container Orchestration

As applications grow, so does the number of containers. Manual management of a large number of containers is nearly impossible, so container orchestration can step in to automate this process. The most widely known container orchestration tool is Kubernetes. Amazon offers services to run Kubernetes, which we'll discuss later in the article. Docker also provides orchestration via what is known as Docker Swarm.

How Does Containerization Work?

The first step in the process is creating a configuration file. The file outlines how to configure the application. For instance, it specifies where to pull images from, how to establish networking between containers, and how much storage to allocate to the application. Once the developer completes the configuration file, they deploy the container by itself or in a cluster. Once deployed, the orchestration tool takes over managing the containers.

After container deployment, the orchestration tool reads the instructions in the configuration file. Based on this information, it applies the appropriate settings and determines which cluster to place the container in. From there, the tool manages all of the below tasks:

- Provisioning containers
- Configuring applications
- Lifecycle management
- Managing redundancy and availability
- Allocating resources
- Load balancing
- Service discovery
- Health monitoring of containers
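To make the image-to-container steps above concrete, here is a minimal sketch using Docker's Python SDK. The build path, image tag, and port mapping are illustrative assumptions, and the build expects a Dockerfile in the given directory:

```python
import docker  # pip install docker

client = docker.from_env()  # talk to the local Docker daemon

# Build an image (template) from a Dockerfile in the current directory.
# "myapp:latest" is a hypothetical tag for this example.
image, build_logs = client.images.build(path=".", tag="myapp:latest")

# Create, launch, and start a container from that image.
container = client.containers.run("myapp:latest", detach=True, ports={"8000/tcp": 8000})

print(container.status)            # e.g. "created" or "running"
print(container.logs().decode())   # application output from inside the container
```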
Now we move to a discussion of the differences between Elastic Beanstalk, EKS, ECS, EC2, Lambda, and Fargate.

Elastic Beanstalk

Continuing from the discussion above, Elastic Beanstalk takes simplification one step further. Traditionally, web deployment required a series of manual steps to provision servers, configure the environment, set up databases, and configure services to communicate with one another. Elastic Beanstalk eliminates all of those tasks.

Elastic Beanstalk handles deploying web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on servers such as Apache, Nginx, Passenger, and IIS. The service automatically provisions servers, compute resources, databases, and so on, and deploys the code. That way, developers can focus on coding rather than spending countless hours configuring the environment.

Elastic Beanstalk Architecture

When deploying an app on Elastic Beanstalk, the service creates the following:

- Elastic Beanstalk Environment: The runtime environment for the application. The service automatically creates a URL for access to the application and a CNAME.
- EC2 Instances: The compute nodes for the application.
- Autoscaling Group: Handles scaling the compute nodes. Although the autoscaling group handles provisioning, developers can configure how many nodes to establish. They can also specify when autoscaling can start.
- Elastic Load Balancer: Distributes web requests across the compute nodes.
- Security Groups: Specify what network traffic is allowed in and out of the application.
- Host Manager: A service on each compute node that monitors the node for performance issues.

When web requests take too long to process, performance suffers. To avoid overloading the server, Elastic Beanstalk creates a background process that handles such requests. Worker Environments, a separate set of compute resources, process longer-running tasks so the resources serving the website can continue to respond quickly.
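As a rough sketch of how little is involved from the developer's side, here is what creating a Beanstalk application and environment might look like with boto3, the AWS SDK for Python. The application name, environment name, and solution stack string are assumptions for the example; available stack names vary by region and over time:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register the application; Beanstalk provisions everything else.
eb.create_application(ApplicationName="demo-app")  # hypothetical name

# Launch an environment: EC2 instances, an autoscaling group, a load balancer,
# and security groups are created behind the scenes.
response = eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    # Solution stacks name the platform; this exact string is an assumption.
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
)

# The auto-generated DNS name (populated once the environment is ready).
print(response.get("CNAME"))
```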
Elastic Kubernetes Service (EKS)

Containerization eliminates a tremendous burden from developers. However, they face an additional challenge: provisioning, scaling, configuring, and deploying containers. Depending on the size of the application, manually handling these tasks is overwhelming. The solution? Container orchestration.

Amazon EKS is a managed Kubernetes service that helps developers easily deploy, maintain, and scale containerized applications at massive scale. Amazon EKS replaces the need for manual configuration and management of the Kubernetes components, simplifying cluster operations.

What Is Kubernetes?

Kubernetes, also known as K8s, is an open-source container orchestration platform that automatically handles the tasks associated with managing containers at scale, including:

- Service Discovery: Kubernetes exposes containers to accept requests via the Domain Name System (DNS) or an IP address.
- Load Balancing: When demand on a container is too high, Kubernetes routes requests to other available containers.
- Storage Orchestration: As storage needs grow, K8s mounts additional storage to handle the workload.
- Self-Healing: If a container fails, Kubernetes can remove it from service and replace it with a new one.
- Secrets Management: The tool stores and manages passwords, tokens, and SSH keys.

EKS Architecture

The Amazon EKS infrastructure comprises several components that interact to perform container orchestration. Specifically, the EKS architecture consists of the following.

Master Nodes

The master nodes are responsible for several tasks, including scheduling containers on worker nodes based on resource availability and CPU/memory limits.

EKS Control Plane

The control plane manages Kubernetes resources and schedules work to run on worker nodes. It includes the API server, which handles communication with clients (e.g., kubectl) and runs one or more controller-type operations in a loop to supervise work.

EKS Worker Nodes

Worker nodes run on EC2 instances in a VPC. A cluster is a group of worker nodes that run the application's containers, while the control plane manages and orchestrates work between worker nodes. Organizations can deploy one EKS cluster per application, or use one cluster to run multiple applications.

Worker nodes run on EC2 instances in the company's virtual private cloud and execute the code in the containers. These nodes run the kubelet service and the kube-proxy service:

- Kubelet Service: Runs on each node and handles communication with the cluster. It waits to receive instructions from the API server and executes them.
- Kube-proxy Service: Establishes and configures network communication between services within the cluster.

EKS VPC: Virtual Private Cloud

This service secures network communication for the clusters. Developers use it to run production-grade applications within a VPC environment.

Elastic Container Service (ECS)

ECS is AWS's proprietary container orchestration service. It is a fully managed service for running Docker containers on AWS, and it integrates with other AWS services, such as Amazon EC2, Amazon S3, and Elastic Load Balancing. Although ECS is similar to EKS, ECS does not automate the entire process. The main components of ECS are as follows.

ECS Containers

ECS containers are pre-configured Linux or Windows server images with the necessary software to run the application. They include the operating system, middleware (e.g., Apache, MySQL), and the application itself (e.g., WordPress, Node.js). You can use your own containers that have been uploaded to AWS's Elastic Container Registry.

ECS Container Agent

An ECS container agent is a daemon process that runs on each EC2 instance and communicates with the ECS service by sending API requests and Docker commands to deploy and manage containers. To use ECS this way, a user needs at least one container agent running on an EC2 instance in the VPC.

ECS Tasks

An ECS task is a pairing of container image(s) and configuration. Tasks are what run on the cluster.

ECS Clusters

An ECS cluster is a group of EC2 instances that run containers. ECS automatically distributes containers among the available EC2 instances in the cluster and can scale up or down as needed. When creating a new Docker container, the developer can specify a CPU share weight and a memory weight. The CPU share weight determines how much CPU capacity each container can consume relative to other containers running on the same node: the higher the value, the more CPU resources will be allocated to the container when it runs on an EC2 instance in the cluster.

Task Definition File

Setting up an application on ECS requires a task definition file: a JSON file that specifies up to 10 container definitions that make up the application. Task definitions outline various items, such as which ports to open and which storage devices to use, and specify Identity and Access Management (IAM) roles. The ECS Task Scheduler handles scheduling tasks on containers. Developers can set up the scheduler to run a task at a specific time or after a given interval.
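As an illustrative sketch, registering a small task definition with boto3 might look like the following. The family name, image URI, and port values are assumptions for the example, not values from the article:

```python
import boto3

ecs = boto3.client("ecs")

# A minimal task definition with a single container; ECS accepts up to 10
# container definitions per task. All names and values here are hypothetical.
response = ecs.register_task_definition(
    family="demo-web",
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
            "cpu": 256,      # CPU share weight
            "memory": 512,   # hard memory limit in MiB
            "essential": True,
            "portMappings": [{"containerPort": 8000, "hostPort": 8000, "protocol": "tcp"}],
        }
    ],
)

print(response["taskDefinition"]["taskDefinitionArn"])
```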
Elastic Compute Cloud (EC2)

EC2 provides various on-demand computing resources, such as servers, storage, and databases, that help you build powerful applications and websites. The EC2 architecture consists of the following components.

Amazon Machine Image (AMI)

An AMI is a snapshot of a computer's state that can be replicated over and over, allowing you to deploy identical virtual machines.

Locations and Availability Zones

An AWS EC2 location is a geographic area that contains compute, storage, and networking resources. The list of available locations varies by AWS product line. For example, regions in the Americas include the US East Coast (us-east-1), US West Coast (us-west-1), Canada (ca-central-1), and Brazil (sa-east-1). Availability Zones are separate locations within a region that are well networked with one another and help provide enhanced reliability for services that span more than one Availability Zone.

What Type of Storage Does EC2 Support?

EBS – Elastic Block Storage

EBS volumes exist outside of the EC2 instance itself, allowing them to be attached to different instances easily. They persist beyond the lifecycle of the EC2 instance, but as far as the instance is concerned, an EBS volume behaves like a physically attached drive. You can attach more than one EBS volume to a single EC2 instance.

EC2 Instance Store

An instance store is a storage volume physically connected to the EC2 instance. It is used as temporary storage: it cannot be attached to other instances, and the data is erased when the instance is stopped, hibernated, or terminated.

Lambda

AWS Lambda is a serverless computing platform that runs code in response to events. It was one of the first major services Amazon Web Services (AWS) introduced to let developers build applications without any installation or up-front configuration of virtual machines.

How Does Lambda Work?

When a function is created, Lambda packages it into a new container and executes that container on an AWS cluster, allocating the necessary RAM and CPU capacity. Because Lambda is a managed service, developers do not get the opportunity to make configuration changes; the tradeoff is that they save time on operational tasks. Additional benefits include:

Security and Compliance

AWS Lambda offers strong security and compliance for a serverless computing platform. It meets regulatory compliance standards such as PCI, HIPAA, SOC 2, and ISO 27001. Lambda also encrypts data in transit using SSL/TLS and encrypts data at rest.

Scalability

The service scales applications without downtime or performance degradation by automatically managing all servers and hardware.

Cost Efficiency

With Lambda, a company pays only for the resources used when the function runs. Cost efficiency is one of the most significant benefits of AWS Lambda because the platform only charges for the computing power the application uses. So if it's not being used, Lambda won't charge anything.
This flexibility makes it a great option for startups or businesses with limited budgets.

Leveraging Existing Code

In some cases, existing code, such as a Flask application, can be used as-is with very little adaptation. Lambda also lets you create layers: dependencies, assets, libraries, or other common components that can be accessed by the Lambda functions to which they are attached.

Lambda Architecture

The Lambda architecture has three main components: triggers, the function itself, and destinations.

Triggers

Event producers are the events that trigger a Lambda function. For example, if a developer wants to create a function to handle changes in an Amazon DynamoDB table, they'd specify that in the function configuration. AWS provides many triggers, like the DynamoDB example above; other common triggers include requests to an API Gateway, an item being uploaded to an S3 bucket, and more. A Lambda function can have multiple triggers.

Functions

Lambda functions are pieces of code that can be registered to execute in response to an event. AWS Lambda manages, monitors, and scales the code execution across multiple servers. Developers can write these functions in a range of programming languages and make use of additional AWS services such as Amazon S3, Amazon DynamoDB, and more.

AWS Lambda operates on a pay-per-use basis, with a free tier that offers 1 million requests per month. Developers only pay for what they use instead of purchasing capacity upfront. This pay-per-use setup supports scalability without paying for unused capacity.

Destinations

When execution of a Lambda function completes, it can send the output to a destination. As of now, there are four pre-defined destinations:

- An SNS topic
- An SQS queue
- Another Lambda function
- An EventBridge event bus

Discussing each of those in detail is beyond the scope of this article.

Packaging

If the code and its dependencies are 10 MB or larger, Lambda functions need to be packaged before they can run. Lambda accepts two types of packages: .zip archives and container images. .zip archives can be uploaded in the Lambda console; container images must be uploaded to the Amazon Elastic Container Registry.

When a function executes, the AWS container that runs it starts automatically. Once the code finishes executing, the container shuts down after a few minutes. This makes functions stateless, meaning they don't retain any information about the request after the container shuts down. One notable exception is the /tmp directory, whose state is maintained until the container shuts down.
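To ground this, here is a minimal sketch of a Python Lambda function responding to an S3 upload trigger. The bucket wiring lives in the function's trigger configuration, not in the code; the field access follows the documented S3 event shape, and the processing step itself is a placeholder assumption:

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes for each event."""
    # For an S3 trigger, the event lists the affected objects.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder processing step; a real function might resize an
        # image, parse a log file, or forward the key to another service.
        print(f"New object s3://{bucket}/{key}")

    # Anything returned here can be routed to a configured destination.
    return {"statusCode": 200, "body": json.dumps("ok")}
```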
Use Cases for AWS Lambda

Despite its simplicity, Lambda is a versatile tool that can handle a variety of tasks. Using Lambda for these tasks keeps the developer from having to focus on administrative items, and the tool automates many processes that a developer would otherwise need to write code for. A few cases are:

Processing S3 Objects

When the application uses S3 as its storage system, there's no need to run a program on an EC2 instance to process objects. Instead, a Lambda event can watch for new files and either process them or pass them on to another Lambda function for further processing. The service can even pass S3 object keys from one Lambda function to another as part of a workflow; for example, the developer may want to create an object in one region and then move it to another.

Handling External Service Calls

Lambda is a perfect fit for working with external services. For example, an application can use it to call an external API, generate a PDF file from an Excel spreadsheet, or send an email. Another example is sending requests for credit reports or inventory updates. By using a function, the application can continue with other tasks while it waits for a response. This design prevents external calls from slowing down the application.

Automated Backups and Batch Jobs

Scheduled tasks and jobs are a perfect fit for Lambda. For example, instead of keeping an EC2 instance running 24/7, Lambda can perform backups at a specified time. The service can also generate reports and execute batch jobs.

Real-Time Log Analysis

A Lambda function can evaluate log files as the application writes each event. It can also search for events or log entries as they occur and send appropriate notifications.

Automated File Synchronization

Lambda is a good choice for synchronizing repositories with other remote locations. With this approach, developers can use a Lambda function to schedule file synchronization without needing to create a separate server and process.

Fargate

AWS Fargate is a deployment option for ECS and EKS that doesn't require managing servers or clusters. With Fargate, users simply define the number of containers and how much CPU and memory each container should have.

AWS Fargate Architecture

Fargate's architecture consists of clusters, task definitions, and tasks. Their functions are as follows.

Clusters

AWS Fargate clusters are groups of servers that run containers. When developers launch a task, AWS provisions the appropriate number of servers to run the containers. Developers can also customize Docker images with software or configuration changes before launching them as a task on Fargate. AWS then manages the cluster for the user, making it easy to scale up or down as needed.

Tasks

A task represents an instance of a task definition. After creating the task definition file, the developer specifies the number of tasks to run in the cluster.

What Are the Benefits of Fargate?

Fargate is easy to use. Deploying an application involves three steps:

- Configure the app's environment.
- Describe the desired state of the app.
- Launch the app.

AWS Fargate supports containers based on Docker, the AWS ECS Container Agent, AWS ECS task definitions, or Amazon EC2 Container Service templates. The service automatically scales up or down without needing any changes to the codebase. Fargate's many benefits include:

- An easy, scalable, and reliable service
- No server management required
- No time spent on capacity planning
- Seamless scaling with no downtime
- A pay-as-you-go pricing model
- Low latency, making it ideal for data processing applications
- Integration with Amazon ECS, making it easier for companies to use both services in tandem

Comparing Services

The beauty of AWS is the flexibility it offers developers thanks to multiple options for containerization, orchestration, and deployment. Developers can choose which solution best meets their needs. However, with so many options, it can be difficult to know which one to use. Here are a few tips on how to decide between these services.

Elastic Beanstalk vs ECS

Elastic Beanstalk and ECS are both containerization platforms, but the degree of control available is one key difference between them. With Beanstalk, the developer doesn't need to worry about provisioning, configuring, or deploying resources. They simply upload their application image and let Elastic Beanstalk take care of the rest. ECS, on the other hand, provides more control over the environment.
Which Option Is Best?

ECS gives developers fine-grained control over the application architecture. Elastic Beanstalk is best when someone wishes to use containers but wants the simplicity of deploying apps by uploading an image.

ECS vs EC2

AWS ECS is a container orchestration service that makes deploying and scaling containerized workloads easier, and it supports the Amazon Fargate launch type. EC2, by contrast, provides raw virtual servers that users run and manage themselves; with ECS, users don't have to configure or manage their own container management tooling, as AWS runs and manages the containers in a cloud cluster.

Example Scenario: Moving From ECS to EKS

There may be times when a developer wants to migrate from one service to another; one example is migrating from ECS to EKS. Why would they want to do this? As mentioned, ECS is proprietary to AWS, with much of the configuration tied to AWS. EKS runs Kubernetes, which is open source and has a large development community. ECS only runs on AWS, whereas K8s (which runs on EKS) can run on AWS or another cloud provider. Transitioning to EKS gives developers more flexibility to change cloud providers.

It is possible to move existing workloads running on ECS to EKS without downtime by following these steps:

- Export the ECS cluster to an Amazon S3 bucket using the ecs-to-eks tool.
- Create a new EKS cluster using the AWS CLI, specifying the exported JSON as an input parameter.
- Use kubectl to connect to the new EKS cluster and use a simple script to load the exported containers from S3 into the new cluster.
- Start scaling applications with the AWS elbv2 update-load-balancers command and the aws eks update-app command, or use AWS CloudFormation templates for this purpose (see an example here).
- Once the user has successfully deployed the application on EKS, they can delete the old ECS cluster (if desired).
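Once workloads are on EKS, the official Kubernetes Python client can confirm that the migrated pods are running. This is a generic verification step under the assumption that a kubeconfig for the new cluster already exists (for example, one created by aws eks update-kubeconfig); it is not part of the ecs-to-eks tooling mentioned above:

```python
from kubernetes import client, config  # pip install kubernetes

# Reuse the kubeconfig that `aws eks update-kubeconfig` (or a kubectl setup) created.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```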
The OSPF TTL security check is a mechanism that protects OSPF against remote attacks. When you enable this feature, OSPF sends packets with a TTL of 255 and rejects any packets with a TTL smaller than a configured threshold. By default, once you enable it, OSPF will only accept packets with a TTL of 255. Since routing decrements the TTL by one at each hop, this means that only OSPF packets from directly connected devices will be accepted.

Let's look at an example. I will use the following topology:

Above we have two routers running OSPF; behind R2 is an attacker that wants to attack R1. It will do so by sending spoofed unicast OSPF packets destined to 192.168.12.1: H1 sends a spoofed OSPF packet, impersonating R2 and destined to R1. When R2 forwards this packet, the TTL is decreased by 1, and R1 receives the IP packet. Even if OSPF rejects the packet because the content is garbage, it still has to be processed by the control plane. If H1 sends enough packets, it might succeed in overloading the router's control plane.

To stop a remote attack like this, we can implement the OSPF TTL security check. By default, all OSPF packets have a TTL of 1, as you can see in the packet capture below:

When the TTL security check is enabled, OSPF will only accept packets with a certain TTL value, 255 by default. Packets it receives with a lower TTL will be discarded. Let's give this a try. We can enable this globally for all interfaces like this:

```
R1(config)#router ospf 1
R1(config-router)#ttl-security all-interfaces
```

As soon as you enable this on one router, the neighbor adjacency with R2 will drop once the dead timer expires. Why? We can see the reason when we enable a debug:

```
R1#debug ip ospf adj
OSPF adjacency debugging is on
```

On the console of R1, you will see this message:

```
R1#
OSPF-1 ADJ Gi0/1: Drop packet from 192.168.12.2 with TTL: 1
```

R1 now only accepts packets with a TTL of 255, and since R2 is sending OSPF packets with a TTL of 1, they are discarded. Let's enable TTL security on R2 as well:

```
R2(config)#router ospf 1
R2(config-router)#ttl-security all-interfaces
```

The OSPF neighbor adjacency recovers, and R1 and R2 now send OSPF packets with a TTL of 255 to each other. Here's a packet capture where you can see the new TTL value:

Above you can see that the TTL is now 255. Since this is the highest value possible for the TTL field, it is impossible for H1 to send a spoofed unicast OSPF packet to R1 that passes the check, preventing a remote attack like this.

The TTL security check is not applied to virtual links or sham links by default. If you want to use it there, you can use the area virtual-link ttl-security or area sham-link ttl-security commands.

That's all there is to it. You can read more about TTL security in the following RFC: The Generalized TTL Security Mechanism (GTSM).

Want to take a look for yourself? Here you will find the final configuration of each device.

```
hostname R1
!
ip cef
!
interface GigabitEthernet0/1
 ip address 192.168.12.1 255.255.255.0
!
router ospf 1
 ttl-security all-interfaces
 network 192.168.12.0 0.0.0.255 area 0
!
end
```

```
hostname R2
!
ip cef
!
interface GigabitEthernet0/1
 ip address 192.168.12.2 255.255.255.0
!
interface GigabitEthernet0/2
 ip address 192.168.2.254 255.255.255.0
!
router ospf 1
 ttl-security all-interfaces
 network 192.168.12.0 0.0.0.255 area 0
!
end
```
There is one thing security engineers and new technologies ideally have in common: they make existing stuff more secure. For the security engineer, there is certainly some truth in this claim – for new technologies, however, I'm not so sure...

Recently I wanted to improve my skills in HTML5 when I stumbled upon some interesting new features a penetration tester (or an attacker, which in most cases does not make a huge difference) can abuse to exploit XSS vulnerabilities. Of course there are also many more features that make other injections possible, but for XSS there are some very interesting ones.

Until now, when you found an XSS hole within an input element that has filtered < and >, you could not exploit it automatically without using CSS expressions – for example:

```
<input type="text" USER_SPECIFIED_INPUT >
```

This type of vulnerability was usually exploited using an IE-only CSS expression payload, something like style=x:expression(alert(0)) or similar. Anyway, all of these work on a limited set of browsers only and are therefore not that interesting for a real exploit.

So what about HTML5? No more CSS expression is needed – the magic is called autofocus:

```
<input type="text" AUTOFOCUS onfocus=alert(0)>
```

Nice – so who did expect new technologies to make users safer? This is just one example – have a look at Mario Heiderich's "HTML 5 Security Cheatsheet" for many more of them...

Finally – what are the lessons learned?

- I (and every penetration tester as well as every WAF/IDS developer out there, too) definitely need to look into HTML5.
- HTML5 offers many new features – one might also call them "new ways to attack a web user".

So long – sc0rpio
Data breaches have been dominating the headlines lately, and there seem to be more cybercriminals than cybersecurity specialists. It's time to level the playing field.

Other than the current coronavirus pandemic, the world is also dealing with cyberattacks on an ever-increasing scale. A large-scale data breach or a software vulnerability and exploit is being reported every other day. Cyberattacks on businesses saw a dramatic rise in 2019 at 61%, a far cry from 45% in 2018. IT budgets have been dwindling, and companies sorely lack employees with the relevant cybersecurity skills to help mitigate the risks of a cyber incident.

The Cybersecurity Skills Shortage

We are in the digital age, and as the tech sector continues its unprecedented growth, there isn't enough talent to go around. Tech companies the world over need talented employees, but development hasn't been fast enough to supply the right people at the right time.

However, it's not a doomsday scenario for organizations that need help, and it can benefit job seekers looking to start a career in cybersecurity. Employers looking to beef up their ranks can enroll their current staff in cybersecurity workshops or more advanced courses if they're qualified. Job seekers, on the other hand, can use this opportunity to develop the right skills to enter the world of information security.

Skills Every Employee Should Have

The cybersecurity sector has a lot of different job classifications and career paths, so it would be nearly impossible to specify which skills are needed for each one. However, almost any area of cybersecurity will require an employee who possesses a particular set of skills that would make him or her a nightmare for hackers. A typical cybersecurity job opening advertisement would look something like this:

- The candidate must be analytical and detail-oriented, with the ability to examine technical issues from all sides.
- Must have excellent diagnostic and problem-solving skills.
- Must have excellent communication skills to be able to adequately explain complicated issues to clients and management.
- Must be able to work in a team environment.
- Knowledge of governance and GDPR is a plus.
- The candidate must have a strong foundation in IT core fundamentals, like system administration and web applications.
- Knowledge of how to deploy security tools such as VPNs, anti-malware solutions, and identity theft protection.
- Programming proficiency in Java, C/C++, assembly language, disassemblers, and two or more scripting languages is a must.
- Must have a strong understanding of operating system architecture and administration.
- The candidate must have a deep understanding of network security, including firewalls, network routers, and switches.
- Must be well-versed in cloud security, risk management, big data analysis, and software management/patching.
- Must be knowledgeable in security task automation and technical vulnerability assessment.

Suffice it to say, employees need these core skills to enter the cybersecurity industry and become experts. However, having hard and soft skills isn't nearly enough, and past experience working in cybersecurity is an advantage.

Real-world Knowledge and Experience

While skills and certifications matter (especially on paper), nothing beats hands-on practical experience. An ideal candidate will possess real-world knowledge and the technical capabilities required to do the job well.

Cybersecurity employees must stay continually updated on the latest threats and current vulnerabilities affecting the industry. They should also keep abreast of the newest security trends and the procedures and practices standard throughout the world of cybersecurity. For instance, cybersecurity employees need to know how databases and operating systems work, identifying vulnerabilities and weaknesses in the architecture so these can be better protected. Another requirement is a keen understanding of how attacks from the internet happen and how to prevent them, with a little ethical hacking prowess to demonstrate how an attacker would try to compromise a system.

Cyberattacks are only going to grow because there's not enough talent to keep up. Cybersecurity is only as strong as the people tasked with ensuring that attacks are repelled and threats are mitigated before they can do widespread damage. Organizations looking to improve their cybersecurity team can either hire new employees or train their current staff to fill in the gaps. For people looking to enter the industry, there is no better time than now to prepare, get skilled, and join the fight against cybercriminals.
Local authorities have new safeguarding duties. They must carry out Safeguarding Adults Reviews when someone with care and support needs dies as a result of neglect or abuse and there is a concern that the local authority or its partners could have done more to protect them.

The local authority has a general duty under the Children Act 1989 to safeguard and promote the welfare of children who are in need and, so far as it is consistent with that duty, to promote the upbringing of such children by their families by providing services appropriate to the child's needs.

How do you safeguard an individual?

- Ensure they can live in safety, free from abuse and neglect.
- Empower them by encouraging them to make their own decisions and provide informed consent.
- Prevent the risk of abuse or neglect, and stop it from occurring.
- Promote their wellbeing and take their views, wishes, feelings and beliefs into account.

A health and social care practitioner can safeguard individuals by making sure that they are in a safe environment, away from any abuse or harm. They can also safeguard individuals by making sure that staff have a DBS check from the police to see if there is any relevant background history.

What does it mean to safeguard individuals?

Safeguarding means protecting the health, wellbeing and human rights of adults at risk, enabling them to live safely, free from abuse and neglect. It also means making sure that the adult's wellbeing is supported and their views, wishes, feelings and beliefs are respected when agreeing on any action.

Local authorities are multi-purpose bodies responsible for delivering a broad range of services in relation to roads; traffic; planning; housing; economic and community development; environment, recreation and amenity services; fire services; and maintaining the register of electors.

Under section 38 of the Crime and Disorder Act 1998, local authorities must, within the delivery of youth justice services, ensure the 'provision of persons to act as appropriate adults to safeguard the interests of children and young persons detained or questioned by police officers'.

What are your role and responsibilities in safeguarding individuals?

It is the responsibility of people who work in health and social care to work in a way that will help to prevent abuse. This means providing good quality care and support and putting the individual at the centre of everything, empowering them to have as much control over their lives as possible.

What are the six principles of safeguarding?

- Empowerment: people being supported and encouraged to make their own decisions and give informed consent.
- Prevention: it is better to take action before harm occurs.
- Proportionality: the least intrusive response appropriate to the risk presented.
- Protection. …
- Partnership. …

How do you promote safeguarding?

- Developing good links with parents and carers and encouraging their involvement in the organisation's work.
- Promoting positive child-centred relationships between staff, volunteers and children.
- Ensuring all staff and volunteers listen to children and respond to their needs.

How does duty of care safeguard individuals?

You have a duty to safeguard individuals, promote their wellbeing and ensure that people are kept safe from abuse, harm or injury. However, it would not be your duty to take the matter into your own hands – for example, by confronting the family member yourself – as this lies outside your competencies.

Where can you obtain local safeguarding adults support and guidance?

- Social services: the adults' services department of your local authority will be able to provide advice and support on safeguarding and protecting vulnerable individuals.

What do you do in a safeguarding situation?

Reporting safeguarding concerns – the first steps:

- Notify the child or young person that only the people who need to know will be informed.
- Don't try to solve the situation yourself or confront anyone.
- Remember to take all claims seriously.
- Write up their narrative, giving as much detail as possible.

How do you explain safeguarding?

Safeguarding is the action that is taken to promote the welfare of children and protect them from harm. Safeguarding means:

- protecting children from abuse and maltreatment
- preventing harm to children's health or development

What are the 5 R's of safeguarding?

All staff have a responsibility to follow the 5 R's (Recognise, Respond, Report, Record & Refer) whilst engaged on PTP's business, and must immediately report any concerns about learners' welfare to a Designated Officer.

How do you help adults with care and support needs reduce the risk of harm or abuse from others?

How to prevent abuse in vulnerable adults:

- Keep an eye out for family, friends, and neighbours who may be vulnerable.
- Understand that abuse can happen to anyone, although some people may be very good at hiding signs of abuse.
- If a person's isolation is an issue, discuss with them ways you might be able to help limit it.
Interest in persistent, shared virtual worlds has grown in recent years. A key player is Facebook, whose founder Mark Zuckerberg has said building a "metaverse" will be the realization of an idea he was interested in before he even dreamed of social networking. At the same time, Microsoft has announced that it is working on building the "enterprise metaverse." But what does it really mean?

Metaverses as a concept have existed for a long time: digital shared universes where we can take on whatever personality we want, or work together on collaborative projects. They haven't always been depicted as good things. In Neal Stephenson's cyberpunk novel Snow Crash, where the term first appears to have been used, the metaverse was a place people went to escape the dreary totalitarian reality they lived in. In The Matrix movies, it is somewhere machines put us after we've become their slaves, so they can use us to generate electricity. Perhaps not the first ideas you would want Silicon Valley to take inspiration from for its own visions of our future.

However, it's clearly a concept that we've been building towards since the emergence of the internet, social media, virtual reality, and early attempts at creating shared digital environments such as Second Life. Zuckerberg has described his conception of the metaverse as an "internet that you're inside of, rather than just looking at," which gives us some clues about how he is approaching it.

The reason we're having a serious conversation about metaverses now is that several key technology trends have reached a level of maturity where they will be up to the task. One of these is certainly virtual reality. Facebook has invested heavily in VR since acquiring headset manufacturer Oculus in 2014. It has made no secret of the fact that it doesn't see the future of VR as being confined to the "walled garden" gaming and educational environments where it's most commonly found today. Instead, the eventual goal is fusing VR's ability to create virtual environments with the power of social media to create shared online spaces.

This has been tried before – there are plenty of VR apps that allow socializing with friends, for example. But within a metaverse, the difference is that users won't necessarily be limited to the narrow range of functionality that an app has been created for, such as chatting or playing a game together. Instead, players should be capable of virtually doing anything they might want to do. The key here is building simulated worlds that model as much of our environment and reality as possible, a bit like the world created in the science fiction adventure film Ready Player One.

For example, walking onto a VR tennis court and picking up tennis rackets, it's already perfectly possible for two people to play a game of VR tennis, as seen in a number of VR video games today. What if they don't want to play tennis, though? They might decide they could have more fun chasing each other around the court trying to bash each other's avatars with their rackets. Or digging up the tennis court and building a basketball court instead. Or just leaving the court and going to watch a concert, or doing some work in your virtual office. A key feature of a metaverse is that it should cater for emergent user behavior, rather than being constructed for one specific application, like a VR tennis sim or a collaborative working environment like Slack or Teams.

Metaverses don't need to be limited to one platform, as long as there is a shared, continuous experience. Your metaverse life might take you from immersive VR environments, to 3D environments rendered on a conventional flat screen, to 2D applications on your mobile phone, depending on what you want to do. The important factor is that there is continuity between the activities and environments, in terms of the user experience and the avatar you control.

Everyone seems to agree that avatars will be a core part of the metaverse experience. To fit with Zuckerberg's vision of "being in" the environment, there has to be some form of digital avatar of you for others to interact with. On Facebook or other social media platforms, your profile picture acts as your avatar. In a metaverse, it might be a 3D representation of you. In a gaming or fantasy metaverse environment, it might be anything you can imagine. But an important principle is that this avatar – or some element of it – will be able to move across and between different areas of the metaverse, and be recognizable as "you," no matter what you're doing or what platform you're using.

The metaverse and society

It isn't just improvements to technology that mean the idea of the metaverse is moving closer to reality. Since the start of the pandemic, many people have increasingly found themselves living their lives online. We have become increasingly used to working, shopping, and socializing digitally, so the idea of bringing all of these activities together in one seamless digital environment is not as much of a leap as it would have seemed just a few years ago.

But these changes bring societal challenges, too. The shift to online living has undeniably enabled a lot of activity that can be damaging or unhealthy, from identity theft and fraud to trolling and abuse. There's also a danger that real-life inequalities such as the wealth divide will be replicated inside the metaverse. Immersive 3D environments require a lot of computing power to generate, meaning that those with bigger budgets to spend on headsets and computer equipment will have a better experience. This could end up having a negative impact on society if, for example, companies made hiring decisions based on a person's presence in the metaverse, or if it becomes a channel for the delivery of education, training, or even dating opportunities.

How far are we from the metaverse?

The companies speaking seriously about creating metaverses are all positioning it as an aspiration for the future. For now, it mainly serves as a concept model for ways that existing online environments – such as social media, or work-based environments such as Nvidia's Omniverse – can become more immersive and more deeply integrated into our everyday lives.

Merging virtual reality with social networking is likely to be the first step. Facebook has recently spoken extensively about its plans to do this, and says that it expects it to become a reality within five years. However, it's clear that there are still a lot of problems that need to be worked through before we're ready to move our lives entirely online. While we may currently be used to carrying out many activities – shopping, entertainment, socializing, and working – in digital environments, we aren't quite at the stage, technologically speaking or as a society, where we're ready to do the same with the bits that join them all together.

For now, there are opportunities to get a bite-sized taste of what a metaverse experience might feel like. Epic Games has experimented with expanding the borders of its Fortnite gaming universe to include social events and concerts, most recently featuring Ariana Grande.

Some people simply consider the metaverse to be the "next generation" of the internet: what "online" will look like when 2D screens eventually become redundant, superseded by headsets, or even lenses that project images directly onto our retinas. The truth is it's still very much up in the air; no one knows for sure what the architecture and rules will be when connected, immersive environments become our online home. But with the biggest names in the world of tech racing to sell us on their version, we can expect growing excitement around the concept.

If you would like to learn more about the rise of virtual and augmented reality, you might like to check out my latest book, Extended Reality in Practice: 100+ Amazing Ways Virtual, Augmented, and Mixed Reality Are Changing Business and Society.
A solid-state drive (SSD) is one of the most popular, fastest, and energy-efficient solutions for… Computers. What are they, exactly? Big hunks of plastics and precious metals? Yes. But what sets them apart from all the other hunks of plastic and metal around your house is their ability to recall and process data. Memory is one of the key components of modern computing. Without computer memory, our devices would have little use to us, making life look far different than it does today. But computer memory units are more than what stores a thousand pictures of your cat sleeping. Memory is also what lets your device perform routine functions like going online or creating a document. What are the types of computer memory? Primary and secondary computer memory are the two categories all other types of memory in a computer fall within. They perform very different functions and support the device in unique ways. Let’s use an analogy to help understand the difference. Think of computer memory as a row of cubbies in a preschool. When the kids arrive, they load up their cubbies with their jackets, backpacks and snacks—whatever they will need to use while they’re at school. When school is over, they (should) clear out their cubbies and take everything home, leaving their cubbies empty once again. This is how primary storage works. The information should be kept only as long as needed and released when the device is no longer in use. Conversely, at the start of every year, the teacher labels each cubby with the name of a student. This is the secondary computer memory. These labels remain on the cubbies for the entire year or until the teacher decides to manually make a change. Just like how data on secondary memory remains until the user specifically decides to do away with or change the information. This is an admittedly simple explanation of primary and secondary memory, but you get the idea. Now we can take a deeper dive into each type of memory in a computer. Primary computer memory This type of memory in a computer is located close to the CPU on the device’s motherboard. Due to the proximity, primary data can be quickly read by the CPU. Data is saved to the primary memory for immediate use, typically during a single session. Primary computer memory is divided into two main types: RAM and ROM. #1. Random Access Memory Most people know RAM as the type of computer memory that has the strongest impact on a computer’s operating speed. RAM, which stands for Random Access Memory, is extremely fast. And the more RAM you have, the faster your machine will be. RAM is volatile data, meaning all information stored in RAM is erased as soon as the machine loses power. Overall, this type of memory has a high cost-per-gigabyte. But it’s usually worth using if speed of recall is a concern. There are two types of RAM computer memory: - DRAM: This type of RAM is called Dynamic RAM and is the most common type of RAM used in computers. DRAM is the slower of the two types of ROM. And it’s usually cheaper! - SRAM: This type of RAM is called static RAM. As previously mentioned, SRAM is two to three times faster than DRAM and is usually bulkier and more expensive, as well. #2. Read-Only Memory ROM is the other type of primary computer memory and stands for Read-Only Memory. As the name implies, this type of primary memory can only read, not write, data. ROM is a very fast form of non-volatile memory, so information stored in ROM remains even after power is lost. ROM begins working as soon as the computer’s turned on. 
#2. Read-Only Memory
ROM is the other type of primary computer memory and stands for Read-Only Memory. As the name implies, this type of primary memory can normally only be read, not written. ROM is a very fast form of non-volatile memory, so information stored in ROM remains even after power is lost. ROM begins working as soon as the computer is turned on: it typically holds the "bootstrap code" that instructs the computer on how to start the operating system, and it loads parts of the operating system into primary memory during startup. There are several types of ROM used to carry out these functions:
- PROM: Programmable Read-Only Memory is a little different from regular ROM, which comes pre-programmed. PROM comes empty and is programmed later with a PROM programmer, or burner.
- EPROM: Erasable Programmable Read-Only Memory is a form of ROM that can be programmed, erased, then programmed again. But to erase an EPROM, you must remove the chip from the computer and expose it to ultraviolet light before reprogramming.
- EEPROM: Electrically Erasable Programmable Read-Only Memory is similar to EPROM, but it can be erased electrically, in place. It is usually considered read-only, though not strictly: writing to EEPROM is a slow process and is only rarely done, for example to update program code.

These important forms of primary computer memory are what make your device function. They contain everything you need to compute, but only while you're computing. The moment your machine is turned off, the primary memory (RAM, at least) is erased and reset, because the information is simply no longer needed. Information stored in secondary computer memory, however, is a much different story.

Secondary computer memory
Secondary computer memory is where you save things you want to keep: documents, those photos of your sleeping cat, and videos. Almost everything about secondary computer memory is different from primary, and the contrast begins with location. Unlike primary memory, which lives on the motherboard, secondary memory resides on a separate storage device connected to the computer system directly or through a network. Secondary memory is much more affordable than primary, but you get what you pay for: the read and write speeds are much slower. In the case of secondary computer memory, though, slow doesn't always mean bad. What's considered slow by computing standards is often only fractions of a second to you.

There are many types of secondary computer memory, and you're probably using a combination of these options for backup or simply as overflow. Here's a breakdown of what's available:
- Hard Disk Drives (HDDs): HDDs were once the most popular form of computer storage but have become dated with the rise of solid-state drives.
- Solid State Drives (SSDs): SSDs are the most common form of secondary storage today. These drives use flash technology to quickly and efficiently store data.
- Flash Drives: Flash is the technology used in everything from standard-issue USB (or thumb) drives to sophisticated SSDs. This type of storage is perfect when you want optimal portability, and since flash drives don't have moving parts, you don't need to worry about physical damage during transport the way you would with an HDD.
- Optical (CD or DVD) Drives: CD and DVD drives were once a very popular form of portable data storage, but today most people use cloud-based storage solutions and smaller, less fragile forms of mobile data storage.

Every computer has primary and secondary memory, but the configuration depends on you and your goals. No matter how your storage is configured, be aware of the risks of data loss. If your computer or drive has failed, and you're struggling to access your data, contact DriveSavers today for professional data recovery.
IBM Lights Up Nanotubes

The results were published in the May 2 issue of Science. The paper analyzes the test results, comparing them with theoretical models to establish that the nanotube did indeed create light, at a wavelength of roughly 1,500 nanometers.

So far, no one's saying this is the future replacement for telecom lasers. The nanotube is an incoherent light source -- it emits in essentially random directions and therefore would be unsuitable for optical communications (see our Beginners' Guide: Laser Basics). It seems probable that nanotubes could be used to create a laser, but it's going to take a lot more research. "If one needs more light power, one could use these bundles with mirrors at the ends, and hopefully it could lase," says Phaedon Avouris, manager of nanoscale science for IBM Research in Yorktown Heights, N.Y.

More immediate uses for nanotube light would be in close-quarter situations, such as on-chip interconnects. But Avouris stresses that any applications are a long way off.

Carbon nanotubes are long, skinny molecules constructed of carbon atoms. Their diameter is just one or two nanometers, while their length can be more than a micrometer, a disparity that makes the critters practically one-dimensional. Nanotubes often get mentioned as a possible successor to silicon transistors in integrated circuits. It's assumed there is a limit to how small circuitry can get on silicon, and companies such as IBM and NEC Corp. (Nasdaq: NIPNY) are probing carbon nanotubes, which can behave as transistors, as a possible next step (see NEC Gets Nanotubular).

IBM appears to be the first to turn a nanotube into a light source, however. Other research has generated light from a nanotube, but only with the help of a laser. IBM's experiment involved simultaneously injecting one end of the tube with electrons and the other end with "holes" -- positively charged analogues to electrons. When the two flows meet, they neutralize one another. "In the process, the energy can be either dissipated as heat or as light," Avouris says.

Because the molecule can act as a transistor, the light can be turned on and off, and its intensity can be varied. And Avouris believes it likely that different wavelengths can be generated using different diameters of nanotube.

— Craig Matsumoto, Senior Editor, Light Reading
Labor Day is behind us - we've officially reached back-to-school season. It's time for new backpacks, supplies and, for many schools, Google Apps for Education.

Google Apps for Education is being used by K-12 and higher education institutions across the U.S. and even the world. Backupify depicted the growth through our Growing up Google infographic. The Google Apps suite is helping students and teachers easily create and share Documents, Spreadsheets and Slides. Already familiar with Google Apps for Education? Below are a few ways to take it up a notch and really make the grade:

1. Create a Quiz using Google Forms
Google Forms is a powerful tool. With a few mouse clicks, teachers can easily create a quiz for students. Google Forms allows for multiple choice and typed answers to capture just about any response to a question. All the results of the quiz are stored within a Spreadsheet for easy grading with just a few formulas.

2. Create an Online Discussion using Google Groups
Google Groups is a free service offered by Google to create online discussions. When students collaborate on team projects, a Google Group can be used to help students share ideas, documents and information. Google Groups can really promote student conversations. For example, during my undergrad days at Northeastern University, I was required to log into the Blackboard discussion board and post comments on particular topics, as well as comment on my classmates' thoughts, which would factor into my participation grade. It's simple to recreate that type of discussion board using Google Groups.

3. Receive Feedback from Students using Google Forms
Google Forms will give you the ability to hear directly from students. The possibilities for receiving feedback on group projects, homework, field trips, etc. are endless. All results are captured in a Google Spreadsheet and come with a beautiful summary sheet.

4. Generate Status Reports with Ease using Google Apps Spreadsheets
Google Spreadsheets contains a super powerful tool called Google Apps Script. For the advanced user, a script can be coded to capture and collect information within a Spreadsheet and email it to a list of contacts. Imagine generating a status report for each student, showing all their grades for the quarter, and emailing it to their parent/guardian at the click of a button. Check out my previous blog post for more information on how Google Forms and Spreadsheets capabilities can be expanded using Scripts.

5. Metrics for Student Performance using Google Apps Spreadsheets
During the school year, a teacher leads a busy life. Student grades are often logged (hopefully in a Spreadsheet!), but how often are simple metrics being calculated in order to understand student knowledge retention? Take, for example, a classroom of 30 students. After a test, ask yourself whether you can answer these types of questions:
- What was the average grade?
- How about the minimum and maximum grade?
- What was the standard deviation?
- What is the distribution curve for the data?
- Does the distribution curve follow "normal" values or is it skewed?
- Did this year's class perform better or worse than previous years'?
- If you changed the course format over the summer, is it showing an increase in overall performance from last year?

If you have Google Apps Spreadsheets, these answers, as well as bar/line graphs, are easy to find using Spreadsheet formulas such as AVERAGE, MIN, MAX and STDEV; a sketch of the same calculations appears below.
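For readers who prefer a script to formulas, here is a short Python sketch (the grade list is hypothetical) that computes the same metrics with the standard library's statistics module:

```python
import statistics

grades = [72, 85, 91, 68, 77, 88, 95, 81, 74, 90]  # hypothetical test scores

print("average:", statistics.mean(grades))
print("min/max:", min(grades), max(grades))
print("std dev:", statistics.stdev(grades))  # sample standard deviation

# A quick distribution check: a histogram by 10-point bucket shows
# whether the curve looks "normal" or skewed toward one end.
buckets = {}
for g in grades:
    buckets.setdefault(g // 10 * 10, []).append(g)
for low in sorted(buckets):
    print(f"{low}-{low + 9}: {'#' * len(buckets[low])}")
```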
These are just a few suggestions outlining how to get more out of Google Apps for Education. Have any other helpful tips? Share them in the comments section below and follow our Backupify blog for more Google Apps for Edu posts.
NEWPORT, R.I. (AP) — Experts in social engineering are meeting in Rhode Island to talk about ways people are getting hacked. Conference organizers say many recent data breaches involve phishing or another type of social engineering, where people are manipulated into divulging personal information such as passwords and credit card data. The Pell Center at Salve Regina University in Newport is hosting the conference on Saturday. Organizers say it's the first of its kind in New England. Attendees will discuss how these types of attacks work and ways to defend against them. One speaker will talk about how a bank manager was manipulated into giving access to the bank's computers. Patrick Laverty and Lea Snyder, who work in cybersecurity, created the conference. Laverty says humans are the weakest link in computer security.
As digital transformation becomes imminent in the automotive industry, developing safe and secure automotive software is increasingly essential. In this 15-minute video course, you will learn what role coding standards, rules, and guidelines play in the vehicle manufacturing industry, how they differ, and how you can maximize efficiency by combining them.

Coding Standards, Rules, And Guidelines
Learn how coding standards, rules, and guidelines help develop safe automotive software

It is no news that coding can be tricky and fragile, which is why having a set of coding standards, rules, and guidelines in place is needed to ensure good-quality, reliable code in the automotive sector. Moreover, as there is no one-size-fits-all coding standard, recognizing the different references and knowing how to combine them to ensure safe and secure vehicle software is key, and that is the main goal of this video course.

There are differences among coding standards, rules, and guidelines which are helpful to understand before deciding on the right coding rule baseline. Therefore, you will first be introduced to the differences between them and be given examples of each. Shortly after, we will pay special attention to standards, as they are crucial for consistent code quality and help minimize vulnerabilities, which in turn decreases development time and costs. More specifically, we will provide general details on the CERT and MISRA coding standards. Additionally, we will clarify how each standard supports the coding process in a side-by-side comparison. Finally, you will learn how applying static code analysis will help you ensure compliance with the CERT and MISRA standards.

Who our Coding Standards, Rules, And Guidelines video course is made for
Responsibility in quality management

In this video course you will learn how coding standards, rules, and guidelines ensure uniform quality with reliable and safe automotive software code. By learning practice-proven methods to enforce coding standards, you'll make compliance easier. After watching this video course, you will know how to use coding guidelines to ensure uniformity and security from the beginning of a software project. Additionally, you will understand how combining different rules helps you identify gaps, avoid common mistakes, and save development time and costs.

Coding Standards, Rules, And Guidelines – Video course content

I. Introduction
In the first part of this video course, you will get an introduction to what you will specifically learn in the video and why coding standards, rules, and guidelines are relevant to automotive software development.

II. Coding Standards, Rules, And Guidelines Differences
In this section, you will learn the differences between coding standards, rules, and guidelines and get an example of each concept.

III. Coding Standards Overview
We will focus on coding standards, their benefits, and how they help ensure code is easier to maintain, safe, and reliable even when multiple developers are involved.

IV. Coding standards for the automotive industry
Here, we will give a detailed overview of the two main references, CERT and MISRA, for the embedded-system languages C and C++. Additionally, we will also present the CERT Risk Assessment and the MISRA Amendments and Addenda.

V. Side-by-side comparison
Get a clearer understanding of how the principles of each coding standard can fit into the software development process to minimize vulnerabilities in a side-by-side comparison.

VI. Enforcing coding standards
To help you ensure coding standards are being enforced, we will explain how static code analysis can make code review and test phases more effective.
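The course discusses full-fledged static analyzers; purely as an illustration of the principle, the Python sketch below is a toy checker that scans C sources for a banned call (strcpy, which secure C coding rules advise replacing with a bounded alternative) and fails the build when it finds one. Real MISRA/CERT checkers are vastly more sophisticated, and the rule text and message here are made up.

```python
import re
import sys
from pathlib import Path

# Toy static check: flag calls to strcpy(), an unbounded copy function
# that secure C coding rules advise against. Illustrative only.
BANNED = re.compile(r"\bstrcpy\s*\(")

def scan(path: Path) -> int:
    violations = 0
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if BANNED.search(line):
            print(f"{path}:{lineno}: banned function 'strcpy' (use a bounded copy)")
            violations += 1
    return violations

if __name__ == "__main__":
    total = sum(scan(p) for p in Path(sys.argv[1]).rglob("*.c"))
    sys.exit(1 if total else 0)  # non-zero exit fails the build, enforcing the rule
```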
VII. Summary and Outro
At last, we will summarize what you have learned over the last 13 minutes of this video course and provide some concluding statements.

More video courses related to Coding Standards, Rules, And Guidelines
In this 15-minute video course you will learn what AUTOSAR cyber security stands for, which are the two major software platforms, its security modules, and the important role they play in the vehicle industry.
Car hacking is a topic increasingly discussed by the media and within the security community. Given the massive introduction of technology into our vehicles, it is crucial to understand the level of security they offer. The term refers to the possibility that an attacker can gain complete control of the technological components within our cars. Modern cars contain upwards of 50 electronic control units (ECUs) that exchange data within an internal network. The safety of an automobile relies on near-real-time communication between the different ECUs for predicting crashes, performing anti-lock braking, and much more.

Recently, Charlie Miller, one of the most popular hackers, working with Chris Valasek, director of security intelligence at IOActive, demonstrated that it is possible to hack a car by breaking into the control systems of the vehicle. Cars are complex systems composed of numerous intelligent components that control different functions of the vehicle, and the massive introduction of electronics demands a serious approach to the overall security of those parts.

"Automotive computers, or Electronic Control Units (ECU), were originally introduced to help with fuel efficiency and emissions problems of the 1970s but evolved into integral parts of in-car entertainment, safety controls, and enhanced automotive functionality. This presentation will examine some controls in two modern automobiles from a security researcher's point of view. We will first cover the requisite tools and software needed to analyze a Controller Area Network (CAN) bus. Secondly, we will demo software to show how data can be read and written to the CAN bus. Then we will show how certain proprietary messages can be replayed by a device hooked up to an ODB-II connection to perform critical car functionality, such as braking and steering. Finally, we'll discuss aspects of reading and modifying the firmware of ECUs installed in today's modern automobile," reads an abstract related to the presentation given at the Black Hat security conference in August 2013.

Hackers and cyber experts are exploring the possibility of exploiting security vulnerabilities to interact directly with the principal components of a vehicle, including braking and steering.

Electronic Control Units (ECUs) and Controller Area Network (CAN)
The majority of attacks today are based on "interference" operated through the Controller Area Network, the entry door for modern car hacking. Electronic control units communicate on one or more buses based on the Controller Area Network standard, so hackers are developing methods to modify their ordinary behavior. The CAN bus is the standard in the automotive industry, designed to allow data exchange between ECUs and devices within a vehicle without a host computer. The CAN bus is also used in other industries, including aerospace and industrial automation.

In automotive networks, ECUs exchange CAN packets, and every packet is broadcast to all the elements on the same bus, which means each node can interpret it. The principal problem is that packets lack a sender ID and the protocol doesn't implement an efficient authentication mechanism. This means that attackers can capture every packet, spoof the sender ID, and authenticate themselves to an ECU, which does not properly check the identity of the sender.

The CAN protocol implements two different message frame formats: the base frame format and the extended frame format. The only difference between the two is the identifier length: the base format supports an 11-bit identifier, while the extended format supports a 29-bit identifier, made up of the 11-bit "base identifier" and an 18-bit "identifier extension". The CAN standard implements four types of frames: the data frame, the remote frame, the error frame, and the overload frame. [Figure 2 – CAN frame format (Wikipedia)]
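To make the weakness concrete, here is a hedged sketch using the python-can library against a Linux virtual CAN interface (assumed to be named vcan0; the arbitration ID and payload are made up). Nothing in a frame proves who sent it, so any node can transmit messages that look like they came from a legitimate ECU:

```python
import can

# Connect to a virtual CAN bus. On Linux, it can typically be created with:
#   ip link add dev vcan0 type vcan && ip link set up vcan0
bus = can.interface.Bus(channel="vcan0", bustype="socketcan",
                        receive_own_messages=True)

# A frame carries only an arbitration ID and up to 8 data bytes: there is
# no sender field and no authentication, so this spoofed message is
# indistinguishable from one emitted by a real ECU.
spoofed = can.Message(
    arbitration_id=0x1A0,            # made-up ID of a hypothetical ECU
    data=[0x00, 0xFF, 0x00, 0x00],
    is_extended_id=False,            # base frame format: 11-bit identifier
)
bus.send(spoofed)

# CAN is a broadcast bus: every node (including us) sees every frame.
print(bus.recv(timeout=1.0))
bus.shutdown()
```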
CAN is a simple low-level protocol that doesn't implement any security features; security must be implemented at a higher level, and applications are responsible for providing their own security mechanisms. "Password mechanisms exist for data transfer that can modify the control unit software, like software download or ignition key codes, but usually not for standard communication." A hacker sending specially crafted packets to target ECUs on the CAN bus could be able to modify their behavior or totally reprogram the units.

Today's vehicles are equipped with connected computers that could be exploited by an attacker for various purposes. To prevent similar offenses, US auto-safety regulators decided to start a new office focusing on these categories of cyber threats. "These interconnected electronics systems are creating opportunities to improve vehicle safety and reliability, but are also creating new and different safety and cyber security risks," declared David Strickland, head of the National Highway Traffic Safety Administration.

Car hacking targets new-generation vehicles that are ever more connected: to the Internet, to each other, and to wireless networks. We also have to consider that numerous companies are starting to think of cars as nodes of an immense network, able to acquire information from the environment and provide data useful for many smart-city services. Modern vehicles are equipped with sophisticated controllers that manage an impressive amount of information in real time. The control software of a luxury car runs to more than 100 million lines of code, while software and electronics account for 40% of the cost of the car. Every technological component in a vehicle, and every communication channel, could be attacked by cybercriminals.

What are the principal methods used to hack a car? For the most widely recognized attack methods, real data on the principal attacks conducted by the hacking community, and the possible countermeasures, let me suggest reading the complete, original article I wrote for the Infosec Institute. (Security Affairs – car hacking, security)
Ransomware attacks have become a significant threat to businesses worldwide. As organisations adopt remote working models to accommodate WFH, they expose networks to ransomware, creating devastating losses that undermine their reputations. In fact, the past year has seen ransomware attacks take down massive corporations with sophisticated networks. We will explore some ransomware examples and explain what to do to prevent such attacks in the future.

Disturbing facts about ransomware
– In 2021, there were 500 million ransomware attacks, with over 1,748 attempted attacks per organisation.
– The cost of a ransomware data breach increased from $3.86 million (£2.89 million) to $4.24 million (£3.18 million) in 2021, a ten per cent increase in a year.
– Customer Personally Identifiable Information (PII) costs over $161 (£120) per stolen record.
– Organisations with a 50% remote workforce took over 316 days to detect and contain a breach, compared to 287 days for other organisations.
– Cybercriminals created over 300,000 new malware pieces to target individuals and businesses.
– The number of ransomware attacks increased by over 48% in the UK.

How did ransomware attacks become widespread?
While social engineering attacks have been on the rise, ransomware attacks have gained notoriety due to their frequent occurrence. This is because cybercriminals do not need a lot of resources to execute an attack: all that is needed is a small payload with little command-and-control communication to infect and control targets. According to security experts, anyone can buy and deploy different strains of ransomware designed for different platforms. For example, the "ransomware-as-a-service" market allows cybercriminals to buy ransomware kits for less than $100, and there are ransomware affiliates that help ransomware operators expand their capabilities. Furthermore, when networks are breached, brokers advertise access to the compromised network, giving more cybercriminals access to it.

Top ransomware examples and the lessons we can learn from them

Kaseya ransomware attack
The Kaseya ransomware attacks were arguably some of the most significant ransomware attacks of the year; the cybercriminals responsible demanded over $70 million. It was a textbook supply chain attack: the attackers compromised the VSA remote management software, which around 50 of Kaseya's direct customers used as managed service providers, and as many as 1,500 of those providers' own customers fell victim to the ransomware. Kaseya responded by alerting all customers, advising them to shut down administrative access to VSA, and taking servers and data centres offline.

Key lessons from the ransomware attack
The attack on Kaseya highlights the importance of locking down remote monitoring and management (RMM) software: access should be restricted, or even taken offline, for additional security. Companies should also place a bigger focus on supply chain security and regularly review supplier security standards to prevent another incident. There should also be appropriate user privileges, so that access to sensitive data is available only to a handful of users. Finally, it is important to analyse, update, and patch the supply chain to mitigate known vulnerabilities in software.

Colonial Pipeline cyber attack
On 7th May 2021, a group called DarkSide gained access to Colonial Pipeline's network infrastructure through a compromised virtual private network (VPN) account password. As a result, the group was able to access and lock up valuable data and demand a ransom of over $5 million. Colonial Pipeline was forced to pay $4.4 million, although $2.3 million was later recovered. The attack demonstrated that even the network infrastructure of the largest oil supply pipeline in the US is not immune to ransomware, and that defending against it should be at the core of every business strategy.

Key lessons to take away from the attack
The attack demonstrated the importance of security mechanisms that strictly regulate access to critical systems. Multi-factor authentication, for example, grants access only after an authorised user proves their identity with an additional factor, and it is an effective mechanism for mitigating ransomware attacks. Moreover, CIOs should implement network segmentation policies and adopt a zero-trust network architecture, which makes user access more exclusive and reduces an attacker's room to manoeuvre.
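Colonial's compromised VPN account reportedly lacked a second factor. As a minimal sketch of what one adds, the Python example below uses the pyotp library (the secret handling is simplified and hypothetical) to require a time-based one-time code on top of the password, so a stolen password alone no longer grants access:

```python
import pyotp

# Enrollment: generate a per-user secret once and store it server-side;
# the user loads it into an authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok, otp_code):
    # A leaked or reused password is no longer enough: the 6-digit code
    # changes every 30 seconds and lives only on the user's device.
    return password_ok and totp.verify(otp_code)

print(login(True, totp.now()))   # True  - password plus current code
print(login(True, "000000"))     # False - password alone fails (almost always)
```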
The ransomware attack on CNA Financial
In March 2021, CNA Financial suffered a ransomware attack that forced it to pay a significant amount of money. While the company never confirmed the amount, sources say that over $40 million was paid, making it the largest known payment for a ransomware attack at the time. The ransomware, called PhoenixLocker, was uploaded by a threat actor. While the initial point of attack has yet to be confirmed, sources say the ransomware entered the system through a malicious browser update delivered via a valid website, and accessed data by exploiting known vulnerabilities.

Key lessons to learn from the attack
The main takeaway is how easily cybercriminals accessed data so important that it forced CNA to pay a significant ransom. Moving forward, CIOs need to be more stringent about what users can download and install through the browser.

The best way to avoid ransomware attacks in the future
While the ransomware examples we mentioned were devastating, they provide vital lessons on preventing such breaches in the future. Organisations need to re-examine their supply chain security and improve endpoint detection and real-time monitoring, to ensure that vendors or personnel do not inadvertently compromise the network. Furthermore, they need to be stricter about what users can do on their workstations, so that cybercriminals cannot reach critical data through a single compromised machine. With more organisations shifting to a hybrid working model, devising more effective security measures to prevent ransomware attacks will be critical for continued stability. That security improvement is key to preventing devastating monetary losses and protecting your business's reputation, making sure your organisation doesn't end up on the next top-ransomware-examples list.

How RiskXchange can help
RiskXchange is one of the firms leading the fight against cybercrime, coming up with novel solutions to everyday problems experienced at the hands of hackers. With full visibility over your ecosystem's entire attack surface in near real time, you can regularly monitor and mitigate risks to prevent unnecessary exposures. Our passive data collection methods are effective and have no impact on your network performance.
Using data-driven insights is the best way to reduce your attack surface and prevent breaches. Find out more here.
Artificial intelligence student cyber safety monitoring helps districts keep students safe online and offline

K-12 district IT teams need all the tools they can find to ensure student cyber safety. It's important to keep students safe from online risks, and it's also important for the district to remain compliant with regulatory requirements. Addressing cyber safety in schools requires thinking beyond web content filters, since the risks are getting more difficult to spot.

Types of Student Cyber Safety Risks
Many districts are seeing gaps in their ability to spot toxic online behavior. The types of risks to student cyber safety include the following.

1. Cyberbullying
Students are spending more time than ever online due to the remote and hybrid learning being used to help fight the COVID-19 pandemic. But cyberbullying detection can be difficult because it takes so many forms. There are many opportunities for cyberbullying, and unfortunately, many of those opportunities exist while students are using district-supplied applications. A cyberbully can use apps in Office 365 and Google for Education to share negative, false, and embarrassing content about another student. As a result, students can't get away from a bully: they can get unwanted contact at home through social media, and the harassment can continue in school. Victims of cyberbullying have been shown to develop depression, eating disorders, suicidal tendencies, substance abuse habits, and more. Cyberbullying monitoring is critical, since the result isn't just a bit of teasing: it can be life-altering.

2. Inappropriate and/or Explicit Content
Students can unintentionally end up viewing inappropriate content when they surf the internet. Districts need to be able to block websites that are known to serve this type of content to anyone, even minors. Beyond that, someone who is the target of a cyberbully may receive inappropriate content from the bully.

3. Sexting, Sextortion, and Online Predation
The unhealthy practice of sexting can take place when students share explicit messages or images with one another. This practice can also lead to sextortion, when one of the students threatens to share sexting messages or images with others unless the target student shares additional compromising content. If a hacker is involved, sextortion can become a form of ransomware: the hacker encrypts the target student's files and demands explicit content in return for releasing them. This type of online predation can be devastating for a student.

4. Discriminatory and Hate Speech
Ideally, schools are places where students should feel safe and welcome. When discriminatory and hate speech are allowed to flourish, that ideal is lost. School districts can't allow aggressive speech against students of color, LGBTQ students, or any other minority or targeted group to be part of any student's school experience.

5. Threats of Violence
Threats of violence can come from students or cybercriminals. There are documented cases where cybercriminals who gained access to a district's student PII threatened to release that information and contacted parents with threats to physically harm their children. This illustrates a strong link between cybersecurity and cyberbullying in K-12 schools.

6. Self-Harm and Suicide
The saddest thing district IT teams need to look out for is indications of self-harm and suicide among the student population. Understanding these two issues is critical: student self-harm and suicide aren't the same thing.
Students can act out by hurting themselves even if they aren't planning on suicide; students sometimes use self-harm to deal with depression and anxiety. Self-harm monitoring must therefore include both text and images. In today's digital world, district IT teams are becoming the first line of defense in supporting student suicide prevention. With the right monitoring tools, IT and/or support teams can spot students who may be thinking about suicide and turn that information over to district resources such as teachers and counselors for follow-up and resolution.

Student Cyber Safety in District Technology
Students are vulnerable to all of these cyber safety threats when using district technology. They're using school-provided email, shared documents, shared drives, chat apps, and more. They use this technology to communicate with each other and share images, videos, and text content that could be harmful to themselves and other students. In addition, the district is responsible for CIPA compliance, and any cyber threat on district technology could be a compliance issue. Students make spotting K-12 cyber safety problems even more complicated because they're often well-versed in avoiding detection. Students have learned to use white text and unnamed documents. They also might create and delete documents, rename them often, and use other methods to confuse teachers, administrators, and monitoring technology.

Artificial Intelligence in Student Cyber Safety
Artificial intelligence (AI) is still in its infancy. However, many districts are finding that using AI-powered student cyber safety monitoring tools can help protect students and, in many cases, save lives. It's a relatively new approach to a "people problem." The most important thing to keep in mind is that students need to be educated on social-emotional learning to help them understand and manage emotions. They also need to be taught how to be good citizens, both online and off. Monitoring for toxic online behavior, self-harm, and other types of risky behavior helps keep students safe and helps them learn how to interact with others in positive ways.
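Commercial tools use trained AI models over both text and images; purely to illustrate how flagged content can be routed, here is a hedged Python sketch with a made-up keyword list that scores messages and queues the highest-severity hits for a counselor rather than for IT:

```python
# Toy severity triage for monitored messages. Real AI monitoring tools use
# trained models across text and images; the keyword lists here are made up.
SEVERITY = {
    "kill myself": 3, "hurt myself": 3, "cutting": 2,
    "hate you": 1, "loser": 1,
}

def triage(message):
    score = max((s for kw, s in SEVERITY.items() if kw in message.lower()),
                default=0)
    if score >= 3:
        return score, "alert counselor immediately"  # possible self-harm risk
    if score == 2:
        return score, "flag for support-team review"
    if score == 1:
        return score, "log for cyberbullying pattern analysis"
    return score, "no action"

print(triage("I'm going to hurt myself tonight"))  # (3, 'alert counselor immediately')
```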
What is Liability Reduction? The National Safety Council estimates that more than 500 million PCs get relegated to scrap in the U.S. every year, and the volume keeps increasing dramatically. That’s bad news for the environment. The components in old computers contain hazardous amounts of toxic heavy metals, including lead, cadmium and mercury. While this won’t affect the person working on the PC, it can cause huge problems if the computer is discarded and sent to a landfill, where the toxins can seep into the ground. Recycling your equipment with Great Lakes Electronics is a much better option. You can reduce any possible liability from discarding your old equipment if you do it in a responsible manner – through recycling. We Maintain a Zero Landfill Policy We know how critical it is to our environment and our future to prevent used equipment from filling up our landfills and incinerators. Your company’s old computers can turn into an environmental liability. It doesn’t have to be that way… We can recover raw materials for reuse, then secure remaining scraps to make sure they get properly recycled. At Great Lakes Electronics, we help you turn those liabilities into assets with e-waste liability reduction!
The backaches and headaches associated with school textbooks may soon be a thing of the past as more tablets find their way into the classroom. Apple's much anticipated iPad Mini and Microsoft's Surface are just two of the products that have the potential to change how teachers and students traditionally engage with textbooks. As more schools adopt innovative classroom software and hardware, the traditional image of the student may shift to the tablet-wielding pupil of the digital age.

A recent GigaOm article discussed some of the advantages of digital textbooks, along with some lingering hurdles that have to be passed. According to the article, the learning experience that comes with being able to access digital textbooks on tablets is more engaging and capable of adapting to student needs. In addition, the article cited the findings of a Digital Textbook Collaborative report on the cost of shifting to digital textbooks. The report found that the shift would cost between $250 and $1,000 per student per year. However, it also reported an estimated cost savings of $600 per student, which stems from such factors as increased teacher attendance, reduced paper costs and online assessments.

There are some hurdles complicating a speedy transition to tablet-driven education, including an infrastructural challenge. The article cited a 2010 FCC survey which found that 80 percent of schools did not have broadband connections that met their needs. Without more bandwidth, an influx of mobile technology could overtax school networks.

Associated Press writer Josh Lederman reported that Education Secretary Arne Duncan recently weighed in on the future of digital textbooks. Speaking at the National Press Club, Duncan said, "Over the next few years, textbooks should be obsolete." Duncan went on to explain that the conversion to digital textbooks would be a way to keep up with changing international education practices. The article noted that 22 states have already adopted policies that encourage the use of digital textbooks.

Could iPad and Surface tablets be useful in the classroom or will they just become distractions? Let us know what you think in the comments!
Facebook is shutting down its facial recognition system: people who had opted in to the feature can no longer be recognized automatically in photos and videos. This change will also affect the Automatic Alt Text (AAT) tool, which serves to create image descriptions for blind and visually impaired persons.

More Than a Third of Daily Facebook Users Will Be Affected
According to Facebook, this change is one of the most significant shifts in the use of facial recognition in the technology's history, as more than a third of the network's users had opted in to the feature. The service termination will result in more than a billion facial recognition templates being deleted.

Facebook Will Continue Developing Face Recognition Technology, but for Different Purposes
The company will keep developing its facial recognition technology, as there are areas where it is found to be useful, especially identity verification, identity theft protection, and fraud prevention. The Automatic Alt Text system, which people with impaired vision have relied on, will also undergo some changes. For example, the AI-generated image descriptions will no longer include the names of the people in the photos.

The Reasons Behind This Change Are Growing Public Concern and a Lack of Precise Regulation
Jerome Pesenti, VP of Artificial Intelligence at Facebook, concluded the blog article with the statement that facial recognition has benefits and drawbacks, and the company is carefully weighing them. In order to provide the best user experience and complete transparency, the company will work with the civil society groups and regulators who are leading the discussion about new technologies. With the new transparency policy, Facebook is trying to battle the consequences of the latest scandals that shook its core and maintain its position as the leading social network in 90% of the world's countries.
Although an increasing amount of data is moving between networks, mobile devices, and the cloud, IT departments don't always have the information they need to protect their association's network and data from malicious attacks. Associations Now highlighted this disconnect with a sobering statistic from the Cloud Security Alliance: just 8 percent of more than 200 IT and security professionals surveyed worldwide know the number of unauthorized apps currently being used within their companies – a phenomenon often called shadow IT.

Security Threats are a Concern for all Associations
Shadow IT is just one security challenge that IT staff must address. Since 2013, there's been a 27.5% increase in data breaches, according to a report by the Identity Theft Resource Center. The Sony hack is one of the more recent headlines, but news-making IT security attacks date back to the early days of office desktops. Unfortunately, the entry points exploited by security threats in the '80s, '90s, and '00s are still concerns for all organizations.

Maybe you remember The Brain virus from 1986? It was the first of many Microsoft OS viruses that got into desktops and then networks when people copied files from infected floppy disks. Infected USB drives cause the same problems today. The next big scare came from the Morris worm in 1988, which exploited unpatched systems, in this case a popular email server. From 1995 to 2000, an epidemic of infected attachments spread viruses through email contact lists, like obnoxious chain letters raining bad luck down on all the recipients. In 2008, we first saw hijacked web links downloading malicious executable files.

In 2012, millions of Yahoo email addresses and passwords were stolen in an SQL injection attack. Because a Yahoo web application wasn't written in a secure manner, malicious code was injected into it that allowed access to the application's database.
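That Yahoo attack worked because user input was concatenated straight into a SQL query. Purely as an illustration, here is a short Python sketch using the built-in sqlite3 module (the table and data are hypothetical) contrasting the vulnerable pattern with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (email TEXT, password TEXT)")
conn.execute("INSERT INTO members VALUES ('jo@example.org', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: user input is pasted into the SQL string, so the payload
# rewrites the query and matches every row in the table.
rows = conn.execute(
    f"SELECT email FROM members WHERE email = '{user_input}'").fetchall()
print(rows)  # [('jo@example.org',)] - the attacker dumped the table

# SAFE: a parameterized query treats the input as a value, never as SQL.
rows = conn.execute(
    "SELECT email FROM members WHERE email = ?", (user_input,)).fetchall()
print(rows)  # [] - the payload is just a strange email address
```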
Since 2013, ransomware has become increasingly common. Victims of these extortion attempts may have to resort to paying a ransom to the attacker to get an encryption key that allows them to decrypt their files. It sounds like a sick online game, but it's a serious cybercrime. It even made its way into primetime storytelling on an episode of "The Good Wife."

The 3 most common security threats we see today:
- Malware infections – malicious software that's installed without your consent, for example, viruses, worms, and Trojan horses. The Sony, Home Depot, and Target breaches were caused by malware.
- Denial of service attacks – the inundation of a network or server with external communication requests with the intent of bringing it down and making it unavailable to its users.
- Spam injections – code is injected into custom-written contact forms, resulting in thousands of emails being sent out anonymously.

The Home Depot and Target breaches are illustrations of what can happen when the security of a third party is compromised. In those cases, hackers stole and used vendor log-ins on extranet sites to get inside the companies' networks and install malware that stole millions of credit card numbers and email addresses. If Sony, Home Depot, Target, and other Fortune 500 companies can be hacked, you can too.

Your members have a transactional relationship with the companies they do business with, but they have different expectations for the relationship they have with you. They trust you with their data and privacy, but is that trust truly warranted? On the other hand, can associations trust that their members won't become the entry point for malicious hackers, like the Home Depot and Target vendors were?

I know this all sounds overwhelming and depressing, but stick with me. In my next post, I provide suggestions for improving your organization's security readiness. Can't wait for the next post? Read about the delicate Balancing Act of IT security versus flexibility in Associations Now.

Looking for more information about information security? We've got you covered. Check out our eBook The Cybersecurity Watchlist for Association and Nonprofit Executives for more information on threats to your organization's security and how you can prevent your data from being compromised.

Flickr photo by EFF Photos
A new white paper from Enchanted Rock explores how the distributed generation technologies of dual purpose microgrids can offer both value to site hosts and stability to the grid at large. The introduction of renewable energy technologies to the grid "represents more than just a migration to cleaner energy technologies," according to Enchanted Rock. It "changes the grid composition to be more dependent on highly variable resources such as solar and wind," which can lead to challenges with both grid stability and reliability. The increasing number of electric vehicles in the U.S. market will also add to those challenges. Dual purpose microgrids can provide a resilient solution.

The paper explains that while the U.S. power grid is reliable compared to many places in the world, the country is lagging behind other industrialized nations. The authors use the February 2021 extended outage in Texas as a prime example, saying, "the damage to the Texas economy was on par with a Category 5 hurricane."

"Dual purpose microgrids are local power systems that offer sustained resiliency services to customer sites to survive long duration power outages, but which also provide support services to the larger grid and wholesale markets, which reduces the overall cost of each of these services." – Enchanted Rock, "Enhancing Resiliency for the Energy Transition"

Enchanted Rock had a fleet of dual purpose microgrids in the state at the time of the outage. They report that without them, "143 customers, many of which are considered critical facilities, would have had outages lasting for as long as 4 consecutive days. All told, these dual purpose microgrids operated in support of onsite resilience and provision of grid services for the larger ERCOT grid network support for 8 consecutive days."

The authors explain how dual purpose microgrids work, saying that they "do not compete with grid supplied power, but instead displace our historical reliance on backup diesel generators, which represent one of the most polluting of all power generation options." They also generate a new revenue stream by supporting the wider grid during times of crisis.

Enchanted Rock also discusses the energy transition and shifting energy infrastructure priorities in the paper, and how modular dual purpose microgrids can serve commercial and industrial customers, communities, and the larger electric grid. Download the full report to learn more about the distributed generation technologies of dual purpose microgrids.
Savvy CIOs have policies in place to protect their networks against infected USB flash drives. That's because most IT professionals know the amount of damage that can be caused by plugging in such a device. For instance, Stuxnet, one of the world's most sophisticated cyberweapons, is said to have gained access to its target system through a USB drive that someone found. Yet having policies — and making sure they are followed — can be two very different things.

USB Drive Curiosity Killed the Cat
In a study of 300 IT professionals — many of whom are security experts — conducted at the RSA Conference 2013, 78% admitted to having plugged in a USB flash drive that they'd found lying around. To make matters worse, much of the data discovered on those drives included viruses, rootkits, and bot executables.

Similarly, the U.S. Department of Homeland Security ran a test to see how hard it would be for hackers to gain access to computer systems. Staffers secretly dropped USB flash drives in the parking lots of government buildings and private contractors. Of those who picked them up, 60% plugged the drives into office computers, apparently curious to see their content. If the drive had an official logo, 90% were plugged in.

"Even with the knowledge of the potential outcome, curiosity can indeed kill the cat," says Brian Laing, a security entrepreneur who had been a vice president at AhnLab, the IT security vendor that conducted the RSA Conference survey. "Policies are useful, but without enforcement, they are not a successful measure," he adds.

In addition to infecting systems, USB flash drives — which have become the floppy disk of the modern era — are a particularly effective tool for sharing files and thereby stealing data and trade secrets. An earlier survey of 743 IT and information security pros conducted by Ponemon Institute revealed that 70% have traced the loss of sensitive or confidential information to USB flash drives. Indeed, whistleblower Edward Snowden reportedly used a USB flash drive to smuggle files out of the National Security Agency (NSA) despite policies against using the devices.

"The NSA could have installed USB port-blocking software to restrict and track usage of USB-connected devices," says David Jevans, chairman of Marble Security and the Anti-Phishing Working Group (APWG). "Despite the NSA's having a policy of not allowing these devices, they didn't have the security software installed to prevent it or to restrict usage to secure devices."

While such data losses can obviously occur when the devices get lost or stolen, 55% of the incidents in the Ponemon Institute survey were reported to be likely related to malware-infected devices that introduced malicious code into corporate networks.

Best Practices for USB Drive Security
The fact that many people don't follow USB policies is no reason not to have them, say security experts. Here is a checklist with the experts' best suggestions for effective USB flash drive management:
- An important first step is to raise awareness among employees. "Most computer users aren't aware that USB drives can impose a risk on their machine, so user education is essential," says Sebastian Poeplau, resident USB expert at The Honeynet Project.
- File sizes have increased, and email doesn't always allow for sharing large files. If you want to minimize or restrict employees' use of USB devices, provide a good alternative way for them to share files internally.
- Restrict usage of USB flash drives to company-authorized devices. Not allowing employees to use USB flash drives from external sources at their work machines is the simplest way to avoid malware that may come from infected PCs at home, at copy shops, and so on.
- Allow only USB devices that are connected to a remote management system that enables you to track usage and to lock the device or delete data from it.
When something goes wrong on the production line, it can cause many headaches for workers and managers: products are not being made properly, which can also lead to safety issues. An Andon board is a visual management tool that helps quickly identify and solve production line problems. It uses color-coded lights to indicate the severity of a problem, allowing workers and managers to address the issue immediately. This article will briefly overview the Andon board and explain its key features.

Andon Board Definition
An Andon board is a display board used in the manufacturing industry. It indicates the existence of a problem at a specific workstation; the issue might be technical or related to quality. When workers find an issue, they can push a button to notify managers or other workers.

What does the word Andon mean?
The term "Andon" comes from the Japanese word for a paper lantern. The Japanese call their form of quality control the "Andon" system; the name is often glossed as "guiding light," acknowledging that jobs are done without interruption. Toyota pioneered the system as part of its Jidoka quality control method, and it has since become an indispensable part of the Lean approach. In manufacturing, Andon means "the status display." Jidoka is a Japanese term usually translated as "autonomation": automation with a human touch. The system gives more authority to the workers; for example, they can stop production when they detect a particular defect.

Initially, andons were light signals in the manufacturing process that indicated status by color. Over time these boards have evolved to use far more sophisticated technology. Modern alert systems use different modes to highlight issues, such as pre-recorded verbal messages, text, and other graphic elements. Though the displays are sophisticated, their purpose has not changed: efficient communication and real-time status of the plant floor.

Andon system examples
What is an example of Andon in everyday life? Your car's dashboard is the best example: it warns you about the fuel level, whether the tank is full or empty. One more example is when a photocopier or scanner has a problem and warns the operator with a signal light.

Operator controlled andons
Operators on the assembly line trigger these andons manually. They may do it by pressing buttons, pulling a cord, or, in a few modern systems, using a voice command.

Machine controlled andons
These andons are activated automatically, triggered when preset criteria on the assembly line are not met.

Andon colors and their meanings
- Green: production is running correctly and will move to the next stage.
- Yellow: a problem has been detected; the operator needs assistance from an expert to fix it.
- Red: production has stopped because no solution has been identified and the problem needs further investigation.
- White: the production run is complete; the next run can be scheduled if required.
- Blue: a defective unit has been detected; this need not stop the process, and even the number of faulty units can be displayed.
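This color code maps naturally onto a small state machine. Here is a hedged Python sketch (the station name and paging actions are made up) of how a board might route each color to a response:

```python
from enum import Enum

class Andon(Enum):
    GREEN = "production running correctly"
    YELLOW = "problem detected - operator needs expert assistance"
    RED = "production stopped - needs further investigation"
    WHITE = "production run completed"
    BLUE = "defective unit detected"

def signal(station, state):
    # A real board would light a lamp and page people; we just print.
    print(f"[{station}] {state.name}: {state.value}")
    if state is Andon.YELLOW:
        print(f"  -> paging line expert to {station}")
    elif state is Andon.RED:
        print(f"  -> halting line; assembling team at {station}")

signal("Station 12", Andon.YELLOW)  # operator pulled the cord
signal("Station 12", Andon.RED)     # problem not fixed quickly, line stops
```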
It clears glitches that might lead to slow production and helps avoid complete production halts.

- Visibility: the ability to fix downtime, quality, and safety problems in real time
- Productivity: attending to problems as they occur in the manufacturing process increases productivity
- Accountability: effective delegation of accountability and responsibility to operators
- Up-time: quicker identification and fixing of manufacturing issues reduces downtime
- Efficiency: provides an effective and consistent way of communicating, which saves money and time

Let me explain with a scenario. In a car plant, vehicles move down the production line past indicator lights, which can show blue, green, yellow, or red. When an employee fixes a part of the car correctly, the indicator goes from blue to green, and the vehicle continues down the line. If the employee fixes the part incorrectly, the indicator turns yellow to show that a problem has been detected at that station. If the employee cannot fix it correctly and quickly, the indicator turns red, production stops, and team members gather to solve the problem on that line.

Andon is used in web-based companies too. You likely know the famous online shopping site Amazon. It uses andon for its customer service, because providing good customer service is its top priority; Amazon calls it the "customer service andon cord." Whenever a problem occurs in customer service, this cord alerts a manager to sort out the problem. Here the andon is not a board but a digital system.

Advanced features of modern systems

- Integration with enterprise resource planning (ERP), computerized maintenance management systems (CMMS), and manufacturing execution systems (MES)
- Extensive use of the Internet of Things (IoT)
- Ticket issuing with workflow
- Mobile client applications for alert notifications
- Email, SMS, and mobile application push notifications
- Automatic tracking of production counts and cycle time
- Big-data analysis and reporting on recorded andon events

Advantages of the Andon board

- Saves time: because the andon board is a visual display, it lets you grasp the situation on the production line at a glance. That reduces the time needed to detect a problem, helps avoid stopping the production line, and saves time.
- Saves money: in business, time is money. When process time drops, production costs drop, which leads to profit.
- Improves communication: information flows along the production line through visual or sound signals, improving internal communication between workers, supervisors, and managers.
- Controls the process: the production andon board surfaces problems early and gives maintenance team members and higher management a firm hold on the production line.
- Collects accurate information: an effective management system needs accurate information about the production line to flow from frontline workers to higher authority, and the andon board helps collect it.
- Decreases hesitation: during production, workers sometimes hide issues. Once you start using andon, workers are encouraged to find and surface problems immediately, which reduces downtime and improves product quality and safety.
- Enhances productivity: it allows workers or operators to act immediately when they find a problem, without waiting for higher authority. This rapid identification of issues and solutions increases productivity.
- Zero interruption: it helps keep the assembly line moving without interruption.

What will be the future?

Andon board manufacturers are looking for ways to improve the traditional system while maintaining its simplicity. While many kinds of andons can be used in lean manufacturing plants, the most common is still a red light with an audible sound or buzzer. In the future, we can expect fully automated detection and notification of problems in manufacturing plants. Vendors are building machine learning models based on andon inputs for different industry verticals; with these models, input patterns can be analyzed for future enhancements. Replacing the cords, buttons, and screens of legacy systems with modern systems will be inevitable.

The Andon board is a critical element of lean manufacturing and can serve as your early warning system for potential problems. Never underestimate the importance of an engaging, informative board that employees can read easily from anywhere in the plant. This post has given some background on what an Andon board is and how it can increase efficiency and reduce waste by providing feedback and information about production levels around the clock.
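To make the signal flow concrete, here is a minimal Python sketch of the color-coded escalation logic described above. The names (AndonState, Workstation, trigger, notify) are illustrative assumptions, not part of any real andon product:

```python
from dataclasses import dataclass
from enum import Enum

class AndonState(Enum):
    GREEN = "running normally"
    YELLOW = "problem detected, assistance needed"
    RED = "production stopped, investigation needed"
    WHITE = "production run completed"
    BLUE = "defective unit flagged"

@dataclass
class Workstation:
    name: str
    state: AndonState = AndonState.GREEN

    def trigger(self, state: AndonState, notify) -> None:
        """Update the board for this station and escalate serious signals."""
        self.state = state
        # Yellow and red signals alert supervisors immediately.
        if state in (AndonState.YELLOW, AndonState.RED):
            notify(f"{self.name}: {state.value}")

# A machine-controlled andon: a sensor flags an out-of-tolerance part.
station = Workstation("Station 12")
station.trigger(AndonState.YELLOW, notify=print)
```

In a real deployment the `notify` callback would feed the ERP, SMS, or mobile push integrations listed above rather than printing to a console.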
Let’s start at the beginning. What is a markup language? Simply put, it’s a way of annotating text on a computer so that it can be presented in a more useful or visually appealing way to the end user. For example, HTML allows web designers to change font size, set background color, and introduce paragraphs. There are other languages, like XML and JSON, that are often lumped in with markup languages. These are frequently used to provide input information to applications, but they are more accurately described as formats for data interchange. An example of using XML or JSON would be using a REST client like Postman to talk to the APIs of a network switch and either get or modify information. XML or JSON would be the format you use in the body of the REST request to specify what you want to happen (GET or POST).

YAML Ain’t Markup Language

Well, it’s a little like a markup language. Originally it was called Yet Another Markup Language, but it was re-branded to put the spotlight on its configuration-file and data-serialization aspects rather than on document formatting or markup, and it’s often used in ways similar to XML and JSON.

Basic Data Structures

Let’s talk about what this looks like. We can have key-value pairs: the key is the word before the colon, and the value is the word after it, as in `color: blue`. These two items are linked.

We can also have arrays, or lists. A list is a little like a key-value pair, but with several items under one category, for example `vehicles:` followed by the indented items `- car` and `- van`. The category at the top describes what kind of data it is, and then we add a hyphen followed by a space to specify each item that goes under that category. Arrays are considered an ordered data structure because it matters where an item falls in the list: if we were to reverse van and car, that would be a totally different list, since it’s in a different order.

Then there are dictionaries, which tell us about a certain item, for example a `car:` entry with the indented keys `wheels: 4` and `doors: 2`. We can also combine these data structures (dictionaries, lists, arrays) and use them together; we can have a list of dictionaries, for example. Dictionaries are unordered: it doesn’t matter if I have another dictionary where the order of wheels and doors is reversed, I can still compare wheels or doors across both dictionaries.

Indentation is very important in YAML. You must use the same number of spaces for each item in an array or dictionary, or the application will be confused about what belongs to what. It wouldn’t make sense for doors to belong to wheels in the example above, but that’s what would happen if we put one extra space to the left of doors.

If you’re familiar with coding at all, you’re probably aware of using a # to “comment out” something. This is often used for documentation purposes within a file or within code. As long as we put a # before something, the computer ignores it.

Great… How Do I Use It?

We know a little about how to set up a YAML file (file.yml), but what do we do with it? We can use a .yml file to store configurations to be used later. It’s a file that holds the actual configurations we want, and we can re-use it, or modify and re-use it, multiple times. A .yml file can be used as a configuration file alongside your code; for example, you can load it from a .py (Python) file or require it in Ruby. We can also use .yml files as Playbooks with Ansible, which may give you a hint as to where I’ll be heading with this blog in the near future.
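As a small illustration of all three structures at once, here is a minimal sketch using the third-party PyYAML library (`pip install pyyaml`); the key names are the toy examples from above:

```python
import yaml  # PyYAML

doc = """
# a key-value pair
color: blue

# a list (ordered)
vehicles:
  - car
  - van

# a dictionary describing one item (unordered)
car:
  wheels: 4
  doors: 2
"""

data = yaml.safe_load(doc)
print(data["color"])        # blue
print(data["vehicles"][0])  # car   (order matters in a list)
print(data["car"]["wheels"])  # 4   (key order does not matter)
```

`safe_load` turns the YAML text into ordinary Python dictionaries and lists, which is exactly how a .py script would consume a .yml configuration file.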
As I begin my new role in product marketing at Juniper, I’m actually finding myself getting back to being more technical. While I’ve dabbled with coding and orchestration tools for the last several years, it’s now something I’m going to be concentrating on almost full time. No ego here… if you see a mistake or something I could flesh out better, feel free to @Malhoit me.
Cybercrime costs the world $6 trillion as of 2022 estimates: more money than the GDP of most countries. Organizations must step up cybersecurity measures to protect their business-critical data; their survival may depend on it. 43% of all cyber-attacks target small businesses, and even a single attack can have devastating effects. According to Microsoft, 60% of cyber-attacks begin with a breached device. In addition, 70 percent of small firms go out of business within a year of a large data loss incident, whether caused by viruses, physical damage, or human error. A physically separate, secure archive or copy of your business-critical data is your best line of defense against cybersecurity vulnerabilities. Below is information on the importance of cyber security, the role of data archiving in it, and how archiving can help.

What is a Vulnerability in Cyber Security?

Cyber security is a blanket term for the practices, processes, controls, programs, and tools used to protect your IT assets. A vulnerability, in cyber security terms, is a gap, loose end, or liability in your IT infrastructure that can serve as an entry point for attack vectors; you can also read these as "chinks in the armor." In an increasingly digitized world, as more data goes online, cybercriminals have more opportunities to exploit vulnerabilities in your cyber security to their advantage. Cyber-attacks are ever-evolving in sophistication and volume, so maintaining a robust cyber security system for your valuable data is an ongoing challenge.

Different Types of Vulnerabilities in Cyber Security

As threats evolve, so should cyber security measures. Your organization can fall prey to many different types of vulnerabilities, which can lead to system hijacks and data breaches. Some sources of these vulnerabilities are:

- Unpatched or outdated software
- System misconfiguration
- Malicious insider threats
- Missing, improper, or weak authentication credentials
- Incomplete authorization policies
- Zero-day vulnerabilities
- Missing or poor data encryption

What is the Importance of Data Protection in Cyber Security?

Cyber security employs many methods, processes, tools, software, and other protective measures to protect your systems and your most valuable asset: your data. Among these measures, robust data protection is a foundational line of defense against data theft, damage, and loss. An independent, safe, tamper-proof repository of all your data insulates you against the impact of any cyber-attack, and data protection contributes positively to business continuity, organizational growth, and risk mitigation. Telling customers how you keep their critical data safe builds trust and reputation.

What Role Does Email Archiving Play in Cyber Security?

Email, as the primary form of digital communication and data exchange in an organization, is one of the main targets of cybercriminals. IDC estimates that business email carries 60% of business-critical data: confidential information, trade secrets, company finances, product details, and many other types of sensitive information. Cybercriminals send phishing emails, trojan horses, malware, viruses, and worms through email, attempting to attack, steal, and destroy your data and cause your organization irreparable damage. Cyber security has a solution, however: email archiving.
Email archival storage is a vital aspect of a robust security architecture. Having a central, consolidated cloud repository carrying a physically separate copy of all your active and legacy email is one of the best ways to safeguard that data. Related: Why Archive email

How Email Archiving Shields Data from Cyber Security Vulnerabilities

Archiving might sound similar to backup, but it is not. A fundamental difference is that email archiving captures data continuously, as it is generated, whereas a backup is periodic and after the fact. This difference ensures that archiving captures and preserves all your data, irrespective of what users do in their mailboxes. Other notable benefits of cloud-based email archiving:

Rich metadata. Archiving records metadata: when, where, how, and by whom, making the archive far more valuable and searchable. You also have the freedom to decide how detailed your metadata should be.

Legal and compliance readiness. Regulated industries, like healthcare and financial services, must have email and data archiving practices in place. A robust, search-ready email archive supports fast and accurate ediscovery of evidence for legal and compliance readiness.

An effective email management system. Archiving with self-service access can double as an email storage management system: older emails are removed from active mailboxes yet remain available in safe archives via a self-service ediscovery application.

Role-based access for additional security. Among the top cyber security vulnerabilities is the human element. Not every employee needs access to archived data. Organizations should create a hierarchy granting varied levels of access to certain members of the organization, such as auditors, compliance officers, or members of the legal team.

Cyber resilience. Data breaches can result in downtime, which can be very disruptive. Minimizing the visible surface area of data by moving infrequently used critical data to cold storage is a vital aspect of building cyber resilience. Leveraging cloud tools that support storage tiering along with robust discovery tools can support your cybersecurity strategy without compromise.

Email Archiving Solutions Must Be Secure

The email and data archiving solution you choose for your organization must itself be secure. It should encrypt all your data files and protect them from external attacks. The storage location should be secure yet easy for you to access. The solution should also protect against internal threats by restricting access, monitoring access requests, and blocking unauthorized tools and software.

Vaultastic is a next-generation platform that protects your business-critical data in the cloud using storage tiering and a multi-layered shared security model, at 60% lower cost. Vaultastic is cloud-native on AWS, highly scalable, and integrates seamlessly with all major email service providers.

Is cyber security important? Yes. Irrespective of your organization's size, you have to invest in cyber security, and even that may not be enough: the most technically advanced cyber security systems can develop chinks over time. Maintaining a physically separate copy of your data on a cloud email archiving platform is one of the best defenses against relentless cyberattacks. It ensures business continuity and reduces data risk even if the primary system is compromised. Why not give Vaultastic a spin? Sign up for a no-obligation 30-day free trial and experience new-age data protection.
The Future of AI and Automation

Artificial intelligence (AI) and the automation it enables will significantly disrupt the global economy. Some people fear robots will replace most of the jobs employees perform. Given the ways automation can reduce or eliminate human involvement in repetitive tasks, will these technologies permanently change how employees work?

Analysts Say AI Will Boost the World's Economy by Trillions of Dollars

It's not hard to see how AI could help the global economy, considering the new, high-tech jobs it creates for skilled humans and the technologies it offers that increase efficiency. According to a report from PwC, AI could add up to $15.7 trillion to the worldwide economy by 2030. That investigation also describes what will drive the gains: it says $6.6 trillion will come from increased productivity, with the remaining majority arising from consumers around the world getting higher-quality, increasingly personalized goods.

Automation Could Worsen Global Income Inequality

Most people in developed countries are aware of pervasive income inequality around the globe, even if it only crosses their minds when they buy discount clothing and realize someone in a developing nation likely spent hours making it and earned only pennies. Experts who have studied automation's role in continued income inequality say people in developing countries are at higher risk of having their jobs threatened by automation than those in places like the United States and the United Kingdom. For example, their research indicates automation technologies could perform 85 percent of the jobs in Ethiopia by 2030. In contrast, some estimates say the same is true for only 35 percent of U.S. jobs, and others believe the effect may be even smaller in the United States. Content from McKinsey featuring transcribed interviews from New York's Digital Future of Work Summit reveals a generally positive outlook on automation's effect on jobs. Susan Lund, one of the summit's organizers, pointed out that only five percent of jobs can be entirely automated. She believes automation will affect all employees, but that flexibility will be advantageous during the shift.

Altering the Work People Do

Researchers know productivity should rise as a result of AI and automation. When well utilized, those technologies could enable faster data processing with fewer errors. Intelligent document capture platforms analyze forms and pull information from them, saving humans from data entry work and increasing companies' returns on investment. However, emerging technologies don't typically take people completely out of the equation. Instead, humans increasingly work alongside automated machines and other AI technologies. Automation could even reduce the repetitive strain injuries that often lead to days away from work: data collected in 2016 by the U.S. Bureau of Labor Statistics found that overexertion was a leading cause of injuries and illnesses that forced employees to take time off across multiple roles, including office support and repair and maintenance positions. Statistics from Gartner suggest that AI will create more jobs than it takes away, leading to millions of new opportunities for individuals ready to diversify their skills. Soon, we could see AI affecting the global economy by giving people the chance to take on jobs that are less physically or mentally taxing and more meaningful.
Also, a 2016 study found 39 percent of manufacturers planned to "definitely" devote a significant amount of research and development funding to robotics over the following 12-24 months, while 30 percent said the same about artificial intelligence. Those companies recognize technology as fuel for growth. Agile employees have a better chance of finding work through this technological evolution than those who don't adapt to change.

AI's Potential Impact on Education

Ludger Wössmann, a professor of economics at the University of Munich, authored an in-depth paper arguing that empirical research shows education is a significant driver of economic growth. He also believes student achievement is the primary factor, rather than the number of years a person attended classes. People have weighed in with opinions about how AI could affect education. It could help more people attend classes and ensure that the education they receive is among the best available. Automated systems might speed up the grading process, giving teachers more time for face-to-face interaction with students. Furthermore, AI could open up affordable possibilities for people to learn outside traditional classrooms with customized curricula. If so, education should become more accessible and help economies thrive, even in nations that previously viewed learning as a luxury, provided the technology is available there.

AI and automation will undoubtedly cause notable changes in the world's economies, and research indicates that many will be positive. Workers who are prepared for what's ahead and willing to keep an open mind about new ways of earning a living should be in an optimal position for beneficial outcomes rather than getting left behind.

By Kayla Matthews

Kayla Matthews is a technology writer dedicated to exploring issues related to the Cloud, Cybersecurity, IoT and the use of tech in daily life. Her work can be seen on such sites as The Huffington Post, MakeUseOf, and VMBlog. You can read more from Kayla on her personal website.
High-speed broadband has become a necessity for most Americans over the last decade. Students rely on broadband for research and homework. Businesses use broadband to offer their products and services. Americans in general use broadband to connect with loved ones, watch their favorite TV shows, and much else. Broadband has become so entwined in the lives of so many Americans that most people don't realize there are still parts of the country that struggle to get even the slowest internet speeds. What happens to those who live in the broadband age but don't have access? Simply put, they struggle to keep up! An article in the Wall Street Journal entitled "The People Left Behind in a Broadband World" points out that living without broadband can be daunting. The article tells how rural southeastern Ohio has limited broadband resources: one young girl must travel 30 minutes to do her homework at the university library where her mother works, and in another town, a local dairy farm struggles to get its name out and find new business without broadband. These are the stories of real Americans being hurt by the broadband divide.

Getting Broadband to Rural America

The main obstacle to delivering broadband to rural America is economics. In rural areas of the country, people are spread out, so the return on investment (ROI) takes longer to realize. Wireless internet service providers (WISPs) play a critical role here, as their fixed wireless solutions are more economical than fiber yet still provide fiber-like speeds. However, most fixed wireless technology is strictly line-of-sight (LOS) and cannot be deployed through foliage.

TVWS Offers an Answer

TV White Space (TVWS) is the spectrum between 470 MHz and 790 MHz. You may recall that all broadcast TV turned digital in 2009; this was done to free up spectrum for broadband applications. TVWS is well suited to providing internet to customers in rural areas like southeastern Ohio: its low-frequency, high-power equipment burns through foliage and covers longer distances. TVWS is not a new technology, but advancements in the equipment have renewed interest among WISPs. Matt, owner and CEO of a Pennsylvania WISP, shares his perspective on the new TVWS technology: "When TVWS equipment was initially introduced, there was potential but no mature product offering. Huge progress has been made since then, and the equipment is available for a reasonable price now. For a very dense foliage, non-line-of-sight (NLOS) scenario, we are currently getting very good performance at 1 mile. We're seeing 40 Mbps throughput in a 10 meg channel. Compared to 900 MHz where we can run into interference, making for a mediocre link, utilizing the lower end of TVWS spectrum (450 - 500 MHz) is interference free." The new developments in TVWS technology might be the answer to closing the broadband gap for rural areas like southeastern Ohio. As WISPs adopt and deploy TVWS, more Americans in rural areas are finally getting the broadband access they need. Chapter 4 of our WISP Guide 2019 covers TVWS at length, including more examples of WISPs deploying TVWS.
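To see why lower frequencies carry farther, consider free-space path loss, which grows with the square of frequency. The Python sketch below uses the standard FSPL formula; it ignores foliage and antenna gains, so treat it as an illustration rather than a link budget:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Compare a TVWS channel near 500 MHz with a common 5.8 GHz fixed wireless band
for f in (500, 5800):
    print(f"{f} MHz over 1 mile (~1.6 km): {fspl_db(1.6, f):.1f} dB")
# 500 MHz:  ~90.5 dB
# 5800 MHz: ~111.8 dB
# Roughly 21 dB less loss at 500 MHz, before even counting the far better
# foliage penetration at low frequencies.
```

That 21 dB gap follows directly from the frequency ratio (20 * log10(5800/500)), which is why TVWS links hold up at distances and through vegetation where higher-band gear fails.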
When Middlebury College students saw clouds in the future of their climate, they put together a plan to reduce carbon emissions on campus. "We had a class that was asked to come up with a goal of how this could be achieved [in 2006]," says Jack Byrne, Director of the Office of Sustainability Integration at Middlebury College. "They came up with a portfolio of projects on how to achieve this. They had a good report of recommendations on how to heat campus and power electricity." One year and $12 million later, Middlebury College started its biomass boiler project. Byrne says the biomass boiler feeds a gasification system, which runs on 20 to 30 thousand tons of wood chips. This system, which ultimately emits water vapor through an existing smoke stack, heats the campus and provides electricity.

How Middlebury College's Biomass Boiler Works

- Wood chips are superheated in a low-oxygen chamber, where they smolder and emit wood gas.
- Oxygen is then introduced on the back side of the boiler, causing the gas to ignite and produce heat (at temperatures over 1100°F) to make steam, which is distributed throughout campus for heating, cooling, hot water, and cooking.
- Exhaust from this process circulates through a cyclone separator, forcing larger particles to drop out.
- The exhaust then enters the bag house, where it passes through a series of filters that remove fine particulate matter. The filtration system in Middlebury's biomass plant is rated to remove 99.7 percent of particulates, so most of what one sees coming from the smoke stack is water vapor.

"The plant itself is on an existing central system," Byrne says. "We have a steam distribution center that fuels campus. Once the wood is gasified, it makes steam, becomes pressurized, and goes through electric turbines to power 15 to 20 percent of campus electricity. Then, the steam [circulates] around campus, provides heat, and comes back again as water." Prior to gasification, Byrne says, Middlebury relied heavily on fossil fuels to keep the campus powered. "We were using 2 million gallons of oil per year [before the boiler]," he says. "Since then, we've displaced half of that, using only 1.3 million gallons of oil." Byrne also says that the front wall of the boiler room is glass, so visitors can view it by day and see its glowing fires at night. "We made this visible so people can see it and ask about it," he says. "We're reminded we made this change in how we got our energy, and we can impress on everyone that we still plan to be conservative and energy efficient. Other colleges are looking to make this switch, and we are a valuable resource to others." With the biomass boiler, Byrne says, Middlebury College aims to be carbon neutral by 2016. "We're paying attention to how much we need to address our own renewable energy needs," he says.

Across New England, Stonehill College is also paying attention to its "green" needs. Early this year, Stonehill completed its 15-acre solar farm, located across from campus at the David Ames Clock Farm. Before going solar, Stonehill used 15,974,455 kilowatt-hours of electricity in FY13, paying $2,002,551 for electricity alone. The solar field is expected to save Stonehill over $185,000 per year, and up to $3.2 million over the course of 15 years. With 9,152 solar panels, the solar farm provides up to 20 percent of the campus's electricity.
Craig Binney, the associate vice president for finance and operations at Stonehill, says that up until ten years ago, the prospect of a solar farm was just talk.
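The published figures allow a quick sanity check. Here is a small Python sketch working out the implied electricity rate and the value of the 20 percent solar offset; the final comment about the financing arrangement is an assumption, not something stated above:

```python
annual_kwh = 15_974_455   # Stonehill's FY13 consumption
annual_cost = 2_002_551   # dollars paid for that electricity

rate = annual_cost / annual_kwh
print(f"Implied rate: ${rate:.3f}/kWh")        # ~ $0.125/kWh

solar_share = 0.20        # solar farm covers up to 20% of demand
offset_kwh = annual_kwh * solar_share
print(f"Offset: {offset_kwh:,.0f} kWh/yr")     # ~ 3.2 million kWh
# At the implied rate, that offset displaces roughly $400k/yr of grid
# purchases; the quoted $185k/yr *net* saving presumably reflects what
# the college pays for the solar power itself (an assumption here).
```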
Multicast is a popular feature used mainly in the IP networks of enterprise customers. Multicast allows the efficient distribution of information between a single multicast source and multiple receivers. An example of a multicast source in a corporate network would be a financial information server provided by a third-party company such as Bloomberg or Reuters. The receivers would be individual PCs scattered around the network, all receiving the same financial information from the server. The multicast feature allows a single stream of information to be transmitted from a source device, regardless of how many receivers are active for the information from that source device. The routers automatically replicate a single copy of the stream to each interface where multicast receivers can be reached. Therefore, multicast significantly reduces the amount of traffic required to distribute information to many interested parties. This chapter describes in detail how an MPLS VPN service provider can provide multicast services between multiple sites of a customer VPN that has an existing multicast network or is intending to deploy the multicast feature within their network. This feature is known as multicast VPN (mVPN) and is available from Cisco IOS 12.2(13)T onward. This chapter includes an introduction to general IP Multicast concepts, an overall description of the mVPN feature and architecture, a detailed description of each IP Multicast component modified to support the mVPN feature, and a case study that shows how you can implement mVPN in an MPLS VPN backbone.

Introduction to IP Multicast

IP multicast is an efficient mechanism for transmitting data from a single source to many receivers in a network. The destination address of a multicast packet is always a multicast group address. This address comes from the IANA block 224.0.0.0 through 239.255.255.255. (Before the concept of classless interdomain routing, or CIDR, existed, this range was referred to as Class D.) A source transmits a multicast packet by using a multicast group address, while many receivers "listen" for traffic from that same group address. Examples of applications that would use multicast are audio/video services such as IPTV and Windows Media Player, conferencing services such as NetMeeting, and stock tickers and financial information such as those that TIBCO and Reuters provide. If you want to gain a more complete or detailed understanding of IP multicast, read the Cisco Press book titled Developing IP Multicast Networks (ISBN 1-57870-077-9) or any other book that provides an overview of multicast technologies. You can obtain further information on advanced multicast topics from http://www.cisco.com/go/ipmulticast.

Multicast packets are forwarded through the network by using a multicast distribution tree. The network is responsible for replicating the same packet at each bifurcation point (the point at which the branches fork) in the tree. This means that only one copy of the packet travels over any particular link in the network, making multicast trees extremely efficient for distributing the same information to many receivers. There are two types of distribution trees: source trees and shared trees. A source tree is the simplest form of distribution tree. The source host of the multicast traffic is located at the root of the tree, and the receivers are located at the ends of the branches. Multicast traffic travels from the source host down the tree toward the receivers.
The forwarding decision on which interface a multicast packet should be transmitted out of is based on the multicast forwarding table. This table consists of a series of multicast state entries that are cached in the router. State entries for a source tree use the notation (S, G), pronounced "S comma G." The letter S represents the IP address of the source, and G represents the group address. The notion of direction is used for packets that are traveling along a distribution tree. When a packet travels from a source (or root) toward a receiver, it is deemed to be traveling down the tree. If a packet is traveling from the receiver toward the source (such as a control packet), it is deemed to be traveling up the tree. A source tree is depicted in Figure 7-1. The host 192.168.1.1 at the root of the tree is transmitting multicast packets to the destination group 224.1.1.1, for which there are two interested receivers. The forwarding cache entry for this multicast stream is (192.168.1.1, 224.1.1.1). A source tree implies that the route between the multicast source and receivers is the shortest available path; therefore, source trees are also referred to as shortest path trees (SPTs). A separate source tree exists for every source that is transmitting multicast packets, even if those sources are transmitting data to the same group. This means that there will be an (S, G) forwarding state entry for every active source in the network. Referring to our earlier example, if another source, such as 192.168.2.2, became active that was also transmitting to group 224.1.1.1, then an additional state entry (and a different SPT) would be created as (192.168.2.2, 224.1.1.1). Therefore, source trees or SPTs provide optimal routing at the cost of additional multicast state information in the network.

Figure 7-1 Source Distribution Tree

The important thing to remember about source trees is that the receiving end can only join the source tree if it has knowledge of the IP address of the source that is transmitting the group in which it is interested. In other words, to join a source tree, an explicit (S, G) join must be issued from the receiving end. (This explicit [S, G] join is issued by the last hop router, not the receiving host. The receiving host makes the last hop router aware that it wants to receive data from a particular group, and the last hop router figures out the rest.) Shared trees differ from source trees in that the root of the tree is a common point somewhere in the network. This common point is referred to as the rendezvous point (RP). The RP is the point at which receivers join to learn of active sources. Multicast sources must transmit their traffic to the RP. When receivers join a multicast group on a shared tree, the root of the tree is always the RP, and multicast traffic is transmitted from the RP down toward the receivers. Therefore, the RP acts as a go-between for the sources and receivers. An RP can be the root for all multicast groups in the network, or different ranges of multicast groups can be associated with different RPs. Multicast forwarding entries for a shared tree use the notation (*, G), which is pronounced "star comma G." This is because all sources for a particular group share the same tree. (The multicast groups go to the same RP.) Therefore, the * or wildcard represents all sources. A shared tree is depicted in Figure 7-2.
In this example, multicast traffic from the source hosts 192.168.1.1 and 192.168.2.2 travels to the RP and then down the tree toward the two receivers. There are two routing entries, one for each of the multicast groups that share the tree: (*, 224.1.1.1) and (*, 224.2.2.2). In a shared tree, if more sources become active for either of these two groups, there will still be only two routing entries, due to the wildcard representing all sources for that group.

Figure 7-2 Shared Distribution Tree

Shared trees are not as optimal in their routing as source trees because all traffic from sources must travel to the RP and then follow the same (*, G) path to receivers. However, the amount of multicast routing state information required is less than that of a source tree. Therefore, there is a trade-off between optimal routing and the amount of state information that must be kept. Shared trees allow the receiving end to obtain data from a multicast group without having to know the IP address of the source. The only IP address that needs to be known is that of the RP. This can be configured statically on each router or learned dynamically by mechanisms such as Auto-RP or Bootstrap Router (BSR). Shared trees can be categorized into two types: unidirectional and bidirectional. Unidirectional trees are essentially what has already been discussed; sources transmit to the RP, which then forwards the multicast traffic down the tree toward the receivers. In a bidirectional shared tree, multicast traffic can travel up and down the tree to reach receivers. Bidirectional shared trees are useful in an any-to-any environment, where many sources and receivers are evenly distributed throughout the network. Figure 7-3 shows a bidirectional tree. Source 192.168.1.1 is transmitting to two receivers, A and B, for group 224.1.1.1. The multicast traffic from the source host is forwarded in both directions as follows:

Up the tree toward the root (RP). When the traffic arrives at the RP, it is then transmitted down the tree toward receiver A.

Down the tree toward receiver B. (It does not need to pass the RP.)

Bidirectional trees offer improved routing optimality over unidirectional shared trees by being able to forward data in both directions while retaining a minimum amount of state information. (Remember, state information refers to the number of (S, G) or (*, G) entries that a router must hold.)

Figure 7-3 Bidirectional Shared Tree

Packet forwarding in a router can be divided into two types: unicast forwarding and multicast forwarding. The difference between them can be summarized as follows:

Unicast forwarding is concerned with where the packet is going.

Multicast forwarding is concerned with where the packet came from.

In unicast routing, the forwarding decision is based on the destination address of the packet. At each router along the path, you can derive the next hop for the destination by finding the longest match entry for that destination in the unicast routing table. The unicast packet is then forwarded out the interface that is associated with the next hop. Forwarding of multicast packets cannot be done in the same manner because the destination is a multicast group address that you will most likely need to forward out multiple interfaces. Multicast group addresses do not appear in the unicast routing table; therefore, forwarding of multicast packets requires a different process.
This process is called Reverse Path Forwarding (RPF), and it is the basis for forwarding multicast packets in most multicast routing protocols. In particular, RPF is used with Protocol Independent Multicast (PIM), which is the protocol used and described throughout this chapter. Every multicast packet received on an interface at a router is subject to an RPF check. The RPF check determines whether the packet is forwarded or dropped and prevents looping of packets in the network. RPF operates like this: When a multicast packet arrives at the router, the source address of that packet is checked to make sure that the incoming interface indeed leads back to the source. (In other words, it is on the reverse path.) If the check passes, the multicast packet is forwarded out the relevant interfaces (but not the RPF interface). If the RPF check fails, the packet is discarded. The interface used for the RPF check is referred to as the RPF interface. The way that this interface is determined depends on the multicast routing protocol that is in use. This chapter is concerned only with PIM, which is the most widely used protocol in enterprise networks. PIM is discussed in the next section. PIM uses the information in the unicast routing table to determine the RPF interface.

Figure 7-4 shows the process of an RPF check for a packet that arrives on the wrong interface. A multicast packet from the source 192.168.1.1 arrives on interface S0. A check of the unicast routing table shows that network 192.168.1.0/24 is reachable on interface S1, not S0; therefore, the RPF check fails and the packet is dropped.

Figure 7-4 RPF Check Fails

Figure 7-5 shows the RPF check for a multicast packet that arrives on the correct interface. The multicast packet from source 192.168.1.1 arrives on interface S1, which matches the interface for this network in the unicast routing table. Therefore, the RPF check passes, and the multicast packet is replicated to the interfaces in the outgoing interface list (called the olist) for the multicast group.

Figure 7-5 RPF Check Succeeds

If the RPF check had to refer to the unicast routing table for each arriving multicast packet, this would have a detrimental effect on router performance. Instead, the RPF interface is cached as part of the (S, G) or (*, G) multicast forwarding entry. When the multicast forwarding entry is created, the RPF interface is set to the interface that leads to the source network in the unicast routing table. If the unicast routing table changes, the RPF interface is updated automatically to reflect the change.
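The RPF check itself is a small algorithm: look up the packet's source in the unicast table and compare the resulting interface with the one the packet arrived on. Here is a minimal Python sketch of that logic; the prefixes and interface names are illustrative, and a real router caches the result rather than searching per packet, as noted above:

```python
import ipaddress

# A toy unicast routing table: prefix -> interface that leads toward it.
unicast_table = {
    ipaddress.ip_network("192.168.1.0/24"): "S1",
    ipaddress.ip_network("10.0.0.0/8"): "S0",
}

def rpf_check(source_ip: str, arrival_iface: str) -> bool:
    """Pass only if the packet arrived on the interface leading back to its source."""
    src = ipaddress.ip_address(source_ip)
    # Longest-prefix match against the unicast table, as PIM does.
    matches = [net for net in unicast_table if src in net]
    if not matches:
        return False  # no route back to the source: drop
    rpf_iface = unicast_table[max(matches, key=lambda net: net.prefixlen)]
    return rpf_iface == arrival_iface

print(rpf_check("192.168.1.1", "S1"))  # True: replicate out the olist
print(rpf_check("192.168.1.1", "S0"))  # False: RPF failure, discard
```

The two calls mirror Figures 7-5 and 7-4 respectively: the same source is accepted on the reverse-path interface and dropped on any other.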
Example 7-1 shows a multicast forwarding entry for (192.168.1.1, 224.1.1.1). You can also refer to this entry as a multicast routing table entry. The presence of the source in the (S, G) notation indicates that this entry is associated with a source tree or shortest path tree. The incoming interface is the RPF interface, which has been set to POS3/0. This setting matches the next-hop interface shown in the OSPF routing entry for the source 192.168.1.1. There are two interfaces in the outgoing olist: Serial4/0 and Serial4/2. The outgoing interface list provides the interfaces that the multicast packet should be replicated out of. Therefore, packets from source 192.168.1.1 that pass the RPF check (they must come in on POS3/0) and are destined to group 224.1.1.1 are replicated out interfaces Serial4/0 and Serial4/2.

Example 7-1 Source Tree Multicast Forwarding Entry

(192.168.1.1, 224.1.1.1), 00:03:30/00:03:27, flags: sT
  Incoming interface: POS3/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Serial4/0, Forward/Sparse-Dense, 00:03:30/00:02:55
    Serial4/2, Forward/Sparse-Dense, 00:02:45/00:02:05

Routing entry for 192.168.1.1/32
  Known via "ospf 1", distance 110, metric 2, type intra area
  Last update from 10.1.1.1 on POS3/0, 1w5d ago
  Routing Descriptor Blocks:
  * 10.1.1.1, from 10.1.1.1, 1w5d ago, via POS3/0
      Route metric is 2, traffic share count is 1

For completeness, a shared tree routing entry is shown in Example 7-2. This entry represents all sources transmitting to group 224.1.1.1. The RPF interface is shown to be FastEthernet0/1, which is the next-hop interface to the RP 10.2.2.2. Remember that the root of a shared tree is always the RP; therefore, the RPF interface for a shared tree is the reverse path back to the RP.

Example 7-2 Shared Tree Multicast Forwarding Entry

(*, 224.1.1.1), 2w5d/00:00:00, RP 10.2.2.2, flags: SJCL
  Incoming interface: FastEthernet0/1, RPF nbr 192.168.2.34
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:03:29/00:02:54

The outgoing interface lists in the preceding examples are determined by the particular multicast protocol in use. Over the years, various multicast protocols have been developed, such as Distance Vector Multicast Routing Protocol (DVMRP), Multicast Open Shortest Path First (MOSPF), and Core Based Trees (CBT). The characteristic that these protocols have in common is that they create a multicast routing table based on their own discovery mechanisms. The RPF check does not use the information already available in the unicast routing table. The protocol that is the most widely deployed and relevant to this chapter is PIM. As discussed previously, PIM uses the unicast routing table to discover whether the multicast packet has arrived on the correct interface. The RPF check is protocol independent: it bases its decisions on the contents of the unicast routing table, however that table was built. Several PIM modes are available: dense mode (PIM DM), sparse mode (PIM SM), Bidirectional PIM (PIM Bi-Dir), and a recent addition known as Source Specific Multicast (SSM). The deployment of PIM DM is diminishing because it has proven to be inefficient in comparison to PIM SM. PIM DM is based on the assumption that for every subnet in the network, at least one receiver exists for every (S, G) multicast stream. Therefore, all multicast packets are pushed or flooded to every part of the network. Routers that do not want to receive the multicast traffic because they do not have a receiver for that (S, G) send a prune message back up the tree. Branches that do not have receivers are pruned off, the result being a source distribution tree with branches that have receivers. Periodically, the prune message times out, and multicast traffic begins to flood through the network again until another prune is received. PIM SM is more efficient than PIM DM in that it does not use flooding to distribute traffic. PIM SM employs the pull model, in which traffic is distributed only where it is requested. Multicast traffic is distributed to a branch only if an explicit join message has been received for that multicast group. Initially, receivers in a PIM SM network join the shared tree (rooted at the RP).
If the traffic on the shared tree reaches a certain bandwidth threshold, the last hop router (that is, the one to which the receiver is connected) can choose to join a shortest-path tree to the source. This puts the receiver on a more optimal path to the source. PIM Bi-Dir creates a two-way forwarding tree, as shown in Figure 7-3. All multicast routing entries for bidirectional groups are on a (*, G) shared tree. Because traffic can travel in both directions, the amount of state information is kept to a minimum. Routing optimality is improved because traffic does not have to travel unnecessarily toward the RP. Source trees are never built for bidirectional multicast groups. Bidirectional trees in the service provider network are covered in the section "Case Study of mVPN Operation in SuperCom" later in this chapter. SSM implies that the IP address of the source for a particular group is known before a join is issued. SSM in Cisco IOS is implemented in addition to PIM SM and co-exists with IP Multicast networks based on PIM SM. SSM always builds a source tree between the receivers and the source. The source is learned through an out-of-band mechanism. Because the source is known, an explicit (S, G) join can be issued for the source tree that obviates the need for shared trees and RPs. Because no RPs are required, optimal routing is assured; traffic travels the most direct path between source and receiver. SSM is a recent innovation in multicast networks and is recommended for new deployments, particularly in the service provider core for an mVPN environment. A practical deployment of SSM is discussed in the section, "Case Study of mVPN Operation in SuperCom" later in this chapter. Multicast is a powerful feature that allows the efficient one-to-many distribution of information. Multicast uses the concept of distribution trees, where the source is the root of the tree and the receivers are at the leaves of the tree. The routers replicate packets at each branch of the tree, known as the bifurcation point. The tree is represented as a series of multicast state entries in each router, and packets are forwarded down this tree (toward the leaves) by using RPF. There are various modes of multicast operation in networks with the most popular one being PIM SM.
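As a rough companion to the state entries shown earlier, here is a toy Python sketch of a multicast routing table lookup that prefers a source tree (S, G) entry and falls back to the shared tree (*, G). The table contents echo Examples 7-1 and 7-2 but are not real router output:

```python
# (S, G) entries for source trees, ("*", G) entries for shared trees.
mroute = {
    ("192.168.1.1", "224.1.1.1"): {"rpf": "POS3/0",
                                   "olist": ["Serial4/0", "Serial4/2"]},
    ("*", "224.1.1.1"):           {"rpf": "FastEthernet0/1",
                                   "olist": ["FastEthernet0/0"]},
}

def lookup(source, group):
    """Prefer the source tree (S, G); fall back to the shared tree (*, G)."""
    return mroute.get((source, group)) or mroute.get(("*", group))

entry = lookup("192.168.1.1", "224.1.1.1")  # hits the (S, G) source tree
print(entry["olist"])                        # interfaces to replicate out
print(lookup("192.168.9.9", "224.1.1.1"))    # unknown source: (*, G) entry
```

The fallback mirrors PIM SM behavior: traffic initially flows on the shared tree, and a more specific source tree entry takes over once the last hop router joins the SPT.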
Macros are small pieces of code that can be embedded (or hidden) in a document, most often a Microsoft Word or Excel document. Attackers often embed macros into documents that, once opened, download malware or ransomware and infect your network. There are legitimate uses for macros; however, most individuals and businesses do not have one and never notice when macros are disabled. Below you will learn how to disable macros in Microsoft Office.

1. Open Microsoft Word and click "Options".

2. In the resulting screen, click "Trust Center" on the left, then select the "Trust Center Settings" button on the right.

3. Next, select "Macro Settings" in the left pane. Then, in the right pane, select "Disable all macros with notification" and click "OK".

4. Repeat this process in Excel.

That's all there is to it! Reach out to us if you need further help!
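For administrators who prefer to script this, the same setting lives in the per-user registry. The Python sketch below is a hedged example: it assumes Office 2016 or later (the "16.0" version key) and uses the VBAWarnings value that Office reads, where 2 corresponds to "disable all macros with notification." Adjust the version key for other releases, and test before rolling out; the GUI steps above remain the supported route:

```python
import winreg  # Windows only

# VBAWarnings = 2 -> "Disable all macros with notification"
for app in ("Word", "Excel"):
    key_path = rf"Software\Microsoft\Office\16.0\{app}\Security"
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        winreg.SetValueEx(key, "VBAWarnings", 0, winreg.REG_DWORD, 2)

print("Macros set to 'disable with notification' for Word and Excel.")
```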
MPLS Networks Simplified

What is an MPLS Network?

MPLS (Multi-Protocol Label Switching) is a mechanism in high-performance telecom networks that directs data from one place on the network to another based on short path labels rather than long network addresses. MPLS is highly scalable and protocol agnostic. In an MPLS network, packets of data are assigned labels, and all packet-forwarding decisions are made solely on the contents of these labels, eliminating the need to examine the packets themselves. As a result, end-to-end circuits can be created across any type of transport medium, using any protocol. At Eze Castle, we like to boast that our private cloud services are delivered via an MPLS network which connects our data centers. That sounds good, but what are the real benefits of this type of network infrastructure? We asked our vice president of networking services, Mike Abbey, for some insights. Here's what we learned:

What are the advantages of an MPLS Network?

Flexibility: MPLS gives network operators greater flexibility to re-route traffic should an issue occur. For instance, if there is a bottleneck or a sudden link failure in the network, the operator can divert the traffic flow to avoid it.

Scalability: For firms with a number of geographically diverse branches, or those expecting to open new offices as the company expands, MPLS networks are extremely cost-effective. Each new location simply requires one MPLS link, which can be quickly added (or removed, should an office relocate or close).

Redundancy & Disaster Recovery: MPLS-based services enable data centers to be linked via multiple redundant connections and allow remote sites to be efficiently reconnected to backup locations if necessary. As a result, firms can seamlessly resume operations and maintain access to crucial applications and data in the event of an outage. Additionally, MPLS networks are designed to overcome minor issues or failed connections: data can be re-routed through the next optimum path to avoid harmful downtime.

MPLS enables ECINet, Eze Castle's carrier-class network, to offer next-generation intelligent networks that deliver a wide variety of advanced, value-added services over a single infrastructure. Clients on this network can take advantage of Internet services (IPv4 and IPv6), Layer 3 VPNs, and Layer 2 VPNs (E-Line and E-LAN), with Quality of Service available on the VPN products. To learn more, contact an Eze Castle Integration expert.
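To make "forwarding on labels alone" concrete, here is a hypothetical Python sketch of a label forwarding table lookup. The labels, next-hop names, and table layout are invented for illustration and greatly simplified compared with a real router's label forwarding information base:

```python
# Toy label forwarding table: incoming label -> (outgoing label, next hop).
lfib = {
    100: (200, "PE-East"),
    101: (None, "CE-Boston"),  # None: pop the label at the last MPLS hop
}

def forward(packet: dict) -> dict:
    """Forward on the label alone; the IP payload is never examined."""
    out_label, next_hop = lfib[packet["label"]]
    return {"label": out_label, "next_hop": next_hop,
            "payload": packet["payload"]}

pkt = {"label": 100, "payload": "ip packet bytes"}
print(forward(pkt))  # {'label': 200, 'next_hop': 'PE-East', ...}
```

Because the lookup never touches the payload, the same switching machinery can carry IPv4, IPv6, or Layer 2 frames, which is what makes MPLS protocol agnostic.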
The network effect is a curious phenomenon describing the benefit that accrues to a group as each new member joins and begins using the same product or service. It applies to several industries and ideas and can be seen in examples like eBay, the popular auction site. As more people join eBay to find good prices on used goods, there are more people to bid, potentially driving the price of individual listings higher. This alone is little help to the site's buyers, but as sellers are seen generating more money, new sellers are attracted to the platform, effectively balancing the ecosystem as it grows. With a product or service that many people use, adding new members to the network is usually beneficial. However, the network effect isn't necessarily as effective as it could be. It can be slow to gain momentum, and even in the finest examples of the network effect, group benefits can be compromised by the network's own infrastructure. Blockchain's decentralized network technology is quickly changing this notion, considering that it relies on the network effect to proliferate and grow.

The Blockchain Booster Effect

Blockchain is a groundbreaking technological architecture that runs on the back of a peer-to-peer network, enabling complete decentralization. To provide the processing power necessary to host services on the blockchain, peers are incentivized to work together democratically and can collectively fill the role of a centralized server or some other type of authority. These conditions are ripe for the network effect, which sprouts up with vigor when blockchain is involved. One of the biggest flaws of the network effect is how slowly it manifests. Part of this sluggishness is due to the absence of any direct financial benefit for new or existing users. Social networks are a pertinent example. New users might want to join to avoid missing out on a fun platform, and their membership adds to their friends' experiences by increasing relevant content. However, the chance to participate is not a great incentive by itself. Moreover, a network like Facebook has more to gain from new users than its users themselves. Blockchain's decentralization makes it immune to control by any single entity, so any financial benefits stemming from a growing user population are passed on to users directly. It is able to remain decentralized thanks to cryptocurrency, which is a powerful incentive for any blockchain network peer, or user. Most blockchain platforms incorporate cryptocurrencies or tokens because they must offer some reward to peers whose computers are working to process and verify transmissions of data.

Network Effect in Action

With tangible incentives in the form of cryptocurrency, and equitable distribution of benefits to users rather than centralized authorities, blockchain is helping businesses take advantage of the network effect while realizing better results. A relatively basic application of the network effect is seen in referral programs, for example, which commonly give some reward to the parties that bring in new users. Referral systems employed by companies that want to motivate user-generated growth don't typically use very sophisticated technology. The rather shallow software options available cannot track complicated multi-step referral chains across more than two parties, which limits their efficacy severely.
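As a toy illustration (not any vendor's actual protocol), a multi-step referral chain can be tracked by hash-linking each referral record to the previous one, the same idea a blockchain ledger uses:

```python
import hashlib

def sign(record: str, prev_hash: str) -> str:
    """Chain each referral to the one before it, blockchain-style."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

# A referral passed through three parties before converting.
chain, prev = [], ""
for referrer in ("alice", "bob", "carol"):
    prev = sign(f"referral-by:{referrer}", prev)
    chain.append((referrer, prev))

# Every hop is attributable, so rewards can be split across all three,
# something the shallow two-party referral tools described above cannot do.
for referrer, digest in chain:
    print(referrer, digest[:16])
```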
It's also relatively easy to game basic referral systems by intentionally providing minimum value, which amounts to a sort of referral fraud. Companies like 2key are combatting this reality by building an easily integrated referral platform on blockchain. The company's referral links are logged seamlessly on the shared ledger and feature an embedded cryptographic signature that can be tracked across an unlimited number of referrers and users. The platform also promotes a healthier network by punishing spammers and disincentivizing bad referrals by docking users' tokens.

2key is not alone in tapping into blockchain's network effect; other marketing firms are developing similar platforms. Vyral, for instance, is building a network in which users share leads, allowing merchants and companies to better reach the right consumers. Users will be able to buy and sell leads on the company's marketplace in a peer-to-peer model, without having to resort to costly agencies. ReferralCoin is a token specifically created to enhance the current referral market: it will be used both for back-end functions such as paying for referrals and marketing, and for front-end needs like rewarding users and trading for services. Some, like Ponder, have even narrowed the concept to a specific industry like online dating. The company's referral system rewards users for matching others successfully, creating a strong network effect for the ecosystem.

The Network Engine

The network effect is also present in blockchain solutions that seek to replace traditional counterparts instead of improving them. Blockchain-based cloud storage solutions like Storj operate via the network effect exclusively and represent one of its most impressive specimens. Storj users contribute their own computers' spare storage to the network and are rewarded in coins that they can sell or use to rent anonymous, decentralized storage. As more users join, storage gets cheaper and simultaneously more secure.

A similar concept comes courtesy of IOTA, a distributed-ledger platform that seeks to use the network effect to improve the Internet of Things (IoT). IoT is an older concept, but one that stands to benefit immensely from greater transparency and faster communication. With IoT devices seamlessly connected via IOTA's ledger, they can react more accurately to changing environmental cues and to each other. If a swarm of IoT devices represents a brain, then before IOTA it was only capable of simple logic. Using the blockchain network effect, however, strengthens its neural connections and lends it the ability to think more critically.

Blockchain is itself a product of the network effect and wouldn't survive without it. Though some have likened cryptocurrency to a pyramid scheme, what they forget to consider is that there can be no top nor any bottom to a blockchain-based system. Blockchains using the network effect to grow expand outwards in every direction at once, and as they mature, every single user feels the benefits in proportion to their level of input. This is already leading to a plethora of more equitable services, and the private sector is just now waking up to this new reality.
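The 2key platform's actual contract code is not reproduced here, so the following is purely an illustrative Python sketch of the general idea: representing a multi-step referral chain as hash-linked, signed records so that each hop remains attributable. The key and record format are invented for the example.

```python
import hashlib
import hmac

PLATFORM_KEY = b"demo-signing-key"  # hypothetical key; a real platform would use on-chain signatures

def sign(record: bytes) -> str:
    # An HMAC stands in for the ledger's cryptographic signature in this sketch.
    return hmac.new(PLATFORM_KEY, record, hashlib.sha256).hexdigest()

def extend_chain(prev_hash: str, referrer: str) -> tuple[str, str]:
    # Each link commits to the previous one, so a multi-step chain
    # (referrer of a referrer of a referrer...) stays verifiable end to end.
    record = f"{prev_hash}:{referrer}".encode()
    return hashlib.sha256(record).hexdigest(), sign(record)

# Three-hop chain: alice referred bob, who referred carol.
head = "genesis"
for referrer in ("alice", "bob", "carol"):
    head, signature = extend_chain(head, referrer)
    print(referrer, head[:16], signature[:16])
```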
Any network engineer knows that when it comes to troubleshooting network problems and monitoring network performance, there are a variety of tools to help. Different tools play different parts in the troubleshooting process, so it's important to understand what they are and how to use them. Traceroute is one of the most widely used tools that network engineers and IT pros use to troubleshoot networks. First invented in 1987, traceroute is still frequently used today. Keep reading to learn more about traceroutes and the role they play in monitoring network performance and troubleshooting intermittent network problems.

What Are Traceroutes?

As suggested by the name, a traceroute traces the IP route from a reference source to a destination inside an IP network. It collects data with the intent of showing users the routers along the path and the round-trip latency from the source to each of those routers. Traceroute works by using an 8-bit field in the IP header known as Time-to-Live (TTL). The traceroute software uses TTL to discover the routers between a source and its destination.

There are many different traceroute tools on the market, but when it comes to finding and fixing network problems with traceroutes, deploying end-to-end network monitoring software with traceroute capabilities will give you a more comprehensive view of network performance to help you troubleshoot faster. Obkio's Network Performance Monitoring software has a Live Traceroute feature, used in combination with the network monitoring sessions. Live Traceroutes help users compute the forward and reverse traceroutes, with latencies and packet loss, in real time. It's the perfect tool to zero in on the location of network performance issues. Share traceroute results with your team and third parties like IT consultants or service providers; with access to live traceroutes, everyone will be able to troubleshoot network issues as soon as possible.

Locate Network Issues Using Traceroutes

You can identify network issues with traceroutes by analyzing two metrics for each hop or router you're monitoring: latency and packet loss. Latency denotes the time difference between when a packet is sent and when a response is received. Packet loss refers to the percentage of sent packets for which no response was ever received. Traceroutes monitor both of these important metrics to measure how long it takes for data to travel across a network, and whether all the data was actually transmitted. If latency is poor and there is a high percentage of packet loss, a network problem is surely to blame – and a traceroute will tell you where that problem is located.

Why Do Routers Drop Packets or Experience High Latencies?

There are multiple reasons why a single router might drop traceroute packets or show higher latencies, and it doesn't necessarily point to any network performance degradation. A general rule of thumb when looking at packet loss in a traceroute: if the packet loss doesn't continue on the following hops, it's not a network issue. With software like Obkio, you can continuously run traceroutes in real time to easily monitor whether packet loss is recurrent and whether you have a real problem.
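To make the TTL mechanism described above concrete, here is a minimal Python sketch of a UDP-based traceroute. It is an illustration, not a replacement for a real tool: it needs root privileges for the raw ICMP socket, assumes a Linux-style socket API, and omits the per-hop latency measurement that production traceroutes report.

```python
import socket

def traceroute(dest: str, max_hops: int = 30, port: int = 33434, timeout: float = 2.0):
    # Send UDP probes with TTL = 1, 2, 3, ... and listen for the ICMP
    # Time Exceeded replies from each router that drops the probe.
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recv.settimeout(timeout)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        try:
            send.sendto(b"", (dest_addr, port))
            _, addr = recv.recvfrom(512)      # the router whose TTL hit zero
            print(f"{ttl:2d}  {addr[0]}")
            if addr[0] == dest_addr:          # reached the destination
                break
        except socket.timeout:
            print(f"{ttl:2d}  *")             # hop did not answer in time
        finally:
            send.close()
            recv.close()

traceroute("example.com")
```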
Hidden Information in Traceroute DNS

The hostnames of the traceroute hops can provide a lot of information about the actual path of data from the source to the destination within a network. There are four key bits of information that you can decode from traceroute DNS:

- The ISP operating the router
- The city where the router is located
- The router name, number, or unique ID
- The ingress interface or port by which the traceroute packet arrived on the router

With this information at hand, IT pros and network administrators can catch and troubleshoot network problems before they're felt by end users.

How to Catch Reverse Path Issues

When looking at a traceroute, always remember that traffic on the Internet is asymmetrical most of the time. This phenomenon is called hot potato routing. To help you troubleshoot network issues with more detail and accuracy, traceroutes give you data from sources and destinations within the same ISP, which gives you a reverse traceroute to compare your data against and catch reverse path issues.

Share Traceroutes With Your ISP

We've all had to play the back-and-forth blame game with our service provider at some point. No one wants to admit that a problem is on their end. But with traceroutes, you can easily see where a problem is located and who is responsible for fixing it. Whether a network problem is located in your ISP's network or somewhere else on the Internet, reach out to your ISP's NOC (Network Operations Center) to help you troubleshoot faster. A tool like Obkio's traceroute tool allows you to share a traceroute with your ISP so they get all the data they need to help you troubleshoot, including:

- IP addresses of the source and the destination
- A traceroute from source to destination
- A traceroute from destination to source
- Historical traceroutes where everything is running fine (if you have them)
- A way to replicate the issue (more on that later!)

Load Balancing with Traceroutes

To increase the capacity to transmit information between routers, many IT specialists add multiple connections between them. If a router does not support higher-speed interfaces, the only pragmatic way to support higher capacity is to aggregate two or more ports together. There are two common configuration scenarios for setting up multiple connections between routers: Link Aggregation and Equal Cost Multi-Path (ECMP). For more accurate data, you need traceroute software that allows you to choose which ports to use: ICMP gives an easy-to-read traceroute, while TCP (or UDP) with random ports reveals the full set of paths between the source and the destination.

Traceroutes within MPLS Networks

Service providers (SPs) and large enterprises use MPLS (Multiprotocol Label Switching) networks to better segment and manage their networks. There are two aspects specific to MPLS networks that affect traditional IP traceroutes: ICMP tunneling and TTL propagation. ICMP tunneling causes latency and packet loss to differ even if the network path is the same, so latency may take a big jump and then stay the same for hops that are far away from each other. TTL propagation decrements the TTL by one each time a traceroute packet reaches a router; when TTL propagation is disabled, some routers are not visible in the traceroute.
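Returning to the hidden information in hop DNS discussed above, the sketch below reverse-resolves a hop's IP address. The interpretation of the hostname labels is a guess that must be adapted to each carrier's naming convention, and the example hostname in the comment is hypothetical.

```python
import socket

def describe_hop(ip: str) -> str:
    # Reverse-resolve the hop and split the hostname into its labels.
    # Naming schemes vary per ISP, so any mapping of labels to
    # interface/router/city/ISP has to be adapted per carrier.
    try:
        hostname = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return f"{ip}: no PTR record"
    labels = hostname.split(".")
    # e.g. a hypothetical "ae-2-52.ear2.Paris1.Level3.net" would decode as
    # interface "ae-2-52", router "ear2", city "Paris1", ISP "Level3.net".
    return f"{ip}: host={hostname} first_label={labels[0]}"

print(describe_hop("8.8.8.8"))  # a hop with a well-known PTR record
```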
MPLS networks change the way we look at traceroutes without giving us the exact picture of what is going on, so it's important to understand how they can alter the data.

Traceroutes are an extremely useful tool to help you troubleshoot network problems – which is why they've been around for so long! As an advanced tool, it's important to understand how and when to use traceroutes to fully leverage the information they provide. Paired with complete end-to-end network monitoring software, traceroutes help you get full visibility over your network health and any problems that may be affecting network performance. They help you troubleshoot faster, so you can spend more time getting things done and less time on damage control.
Upon completing this course, you will have learned how to:

- Understand the General Data Protection Regulation (GDPR), its requirements and its penalties
- Recognize the effects of the GDPR on enterprise and vendor environments
- Demonstrate GDPR comprehension through real-world scenarios

The General Data Protection Regulation, or GDPR, is a set of rules designed to give European Union (EU) citizens more control over their personal data. Compliance with the GDPR is not restricted to organizations with physical locations in the EU; any organization that collects or processes EU citizens' personal data is subject to the same requirements and penalties as EU-based companies. This course reviews key privacy and data regulation requirements of the GDPR, provides four tests that organizations can use to determine if they must comply, and discusses penalties and fines that may follow noncompliance.
Malware evasion techniques are widely used to circumvent detection as well as analysis and understanding. One of the dominant categories of evasion is anti-sandbox detection, simply because today's sandboxes are becoming the fastest and easiest way to get an overview of a threat. Many companies use these kinds of systems to detonate malicious files and URLs, obtaining indicators of compromise to extend their defenses and block other related malicious activity. Nowadays we understand security as a global process, and sandbox systems are part of this ecosystem, which is why we must pay attention to the methods malware uses against sandboxes and how we can defeat them.

Historically, sandboxes allowed researchers to visualize the behavior of malware accurately within a short period of time. As the technology evolved over the past few years, malware authors started producing malicious code that delves much deeper into the system to detect the sandboxing environment. As sandboxes became more sophisticated and evolved to defeat the evasion techniques, we observed multiple strains of malware that dramatically changed their tactics to remain a step ahead. In the following sections, we look back on some of the most prevalent sandbox evasion techniques used by malware authors over the past few years and validate the fact that malware families extended their code in parallel to introducing stealthier techniques. (The original post included a diagram of the most prevalent sandbox evasion tricks; many others exist.)

Initially, several strains of malware were observed using timing-based evasion techniques [latent execution], which primarily boiled down to delaying the execution of the malicious code for a period using known Windows APIs like NtDelayExecution, CreateWaitableTimer, SetTimer and others. These techniques remained popular until sandboxes started identifying and mitigating them. As sandboxes began to defeat latent execution by accelerating code execution, malware resorted to acceleration checks using multiple methods. One of those methods, used by multiple malware families including Win32/Kovter, was calling the Windows API GetTickCount and then checking whether the expected time had actually elapsed; we observed several variations of this method across malware families. Sandbox vendors could easily bypass this evasion technique by creating a snapshot in which the machine has already been running for more than 20 minutes.

Another approach that subsequently became more prevalent, observed in Win32/Cutwail malware, is calling garbage APIs in a loop to introduce delay, dubbed API flooding. We observed how this code could result in a denial-of-service condition, since some sandboxes could not handle it well. On the other hand, this sort of behavior is not too difficult for more involved sandboxes to detect. As sandboxes became more capable of handling API-based stalling code, yet another strategy to achieve a similar objective was to introduce inline assembly code that waited for more than 5 minutes before executing the hostile code. We found this technique in use as well; it was a simplistic approach that could nonetheless sidestep most advanced sandboxes of the time. Sandboxes are now much more capable, armed with code instrumentation and full system emulation capabilities to identify and report stalling code.
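None of the malware code is reproduced here; the following is a rough Python re-creation of the GetTickCount acceleration check, useful for understanding what an instrumented sandbox has to emulate convincingly. The 10 percent slack threshold is an arbitrary illustration, not a value taken from any sample.

```python
import ctypes
import time

def sleep_was_accelerated(seconds: int = 120) -> bool:
    # Sleep, then check how many milliseconds GetTickCount says elapsed.
    # A sandbox that fast-forwards Sleep() shows far fewer ticks than requested.
    kernel32 = ctypes.windll.kernel32          # Windows only
    before = kernel32.GetTickCount()
    time.sleep(seconds)
    elapsed_ms = kernel32.GetTickCount() - before
    return elapsed_ms < seconds * 1000 * 0.9   # 10% slack; arbitrary threshold

if sleep_was_accelerated():
    print("Timer skew detected - likely an instrumented sandbox")
```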
In our observation, timing-based evasion techniques grew steadily in popularity over the past few years. (The original post charted this growth; the figure is not reproduced here.)

Another category of evasion tactic widely adopted by malware was fingerprinting the hardware, specifically checking the total physical memory size, the available hard disk size and type, and the number of CPU cores. These methods became prominent in malware families like Win32/Phorpiex, Win32/Comrerop, Win32/Simda and multiple other prevalent ones. Based on our tracking of their variants, we noticed the Windows API DeviceIoControl() was primarily used with specific control codes to retrieve information on storage type and storage size. Ransomware and cryptocurrency-mining malware were found checking total available physical memory using the known GlobalMemoryStatusEx() trick. API interception code implemented in the sandbox can manipulate the returned storage size to defeat such checks. Subsequently, a Windows Management Instrumentation (WMI) based approach became more favored, since those calls could not be easily intercepted by existing sandboxes.

CPU Temperature Check

Malware authors are always adding new and interesting methods to bypass sandbox systems. One quite interesting check, seen in the wild, involves reading the temperature of the processor. The check is executed through a WMI call; this is interesting because virtual machines will never return a result for this call, giving the malware a reliable VM indicator.

Popular malware families like Win32/Dyreza were seen using the CPU core count as an evasion strategy. Several malware families were initially found using a trivial API-based route, as outlined earlier; however, most later resorted to WMI and stealthier PEB-access-based methods. Any evasion code that does not rely on APIs is challenging to identify in the sandboxing environment, so malware authors use it more often. There are a number of ways to get the CPU core count, though the stealthier way is to access the PEB, which can be achieved by introducing inline assembly code or by using intrinsic functions. One of the relatively newer techniques to get the CPU core count has been outlined in a separate blog post.

Another class of infamous techniques malware authors used extensively to circumvent the sandboxing environment was to exploit the fact that automated analysis systems are never manually interacted with by humans. Conventional sandboxes were never designed to emulate user behavior, and malware was coded with the ability to determine the discrepancy between automated and real systems. Initially, multiple malware families were found monitoring for Windows events and halting execution until they were generated. A Win32/Gataka variant, for example, calls GetForegroundWindow and checks whether a subsequent call to the same API returns a different window handle. The same technique was found in Locky ransomware variants.
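The original screenshots are not reproduced here, but the Gataka-style foreground-window check can be sketched in a few lines. This is a re-creation of the idea for defensive study, not the malware's actual code, and the check counts and intervals are invented for the example.

```python
import ctypes
import time

user32 = ctypes.windll.user32  # Windows only

def human_present(checks: int = 10, interval: float = 5.0) -> bool:
    # A real user eventually switches windows, so the foreground window
    # handle changes over time; in an idle sandbox it typically never does.
    first = user32.GetForegroundWindow()
    for _ in range(checks):
        time.sleep(interval)
        if user32.GetForegroundWindow() != first:
            return True
    return False

print("interactive session" if human_present() else "no user activity observed")
```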
Win32/Sazoora malware similarly checks for mouse movements, a technique that became widely used by several other families. Malware campaigns were also found deploying a range of techniques to check for historical interaction with the infected system. One such campaign, delivering the Dridex malware, extensively used an auto-execution macro that triggered only when the document was closed. The same campaign was also found introducing registry key checks for MRU (Most Recently Used) files to validate historical interactions with the infected machine, for example:

HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Word\User MRU

Variations of this approach performed the same check programmatically.

Another technique used by malware is to fingerprint the target environment, exploiting misconfiguration of the sandbox. In the beginning, tricks such as the Red Pill technique were enough to detect the virtual environment, until sandboxes started to harden their architecture. Malware authors then turned to new techniques, such as checking the hostname against common sandbox names, or checking the registry to verify the programs installed (a very small number of installed programs might indicate a fake machine). Other implemented techniques include checking the sample's filename for a hash or a keyword such as "malware", enumerating running processes to spot potential monitoring tools, and checking the network address against blacklisted ranges such as those of AV vendors. Locky and Dridex both used network-detection tricks of this kind.

Using Evasion Techniques in the Delivery Process

In the past few years we have observed how sandbox detection and evasion techniques are increasingly implemented in the delivery mechanism itself to make detection and analysis harder. Attackers are increasingly likely to add a layer of protection to their infection vectors to avoid burning their payloads. Thus, it is common to find evasion techniques in malicious Word documents and other weaponized documents.

McAfee Advanced Threat Defense

McAfee Advanced Threat Defense (ATD) is a sandboxing solution that detonates the sample under analysis in a controlled environment, performing malware detection through advanced static and dynamic behavioral analysis. As a sandboxing solution, it defeats the evasion techniques seen in many adversaries. McAfee's sandboxing technology is armed with multiple advanced capabilities that complement each other to bypass evasion techniques that check for the presence of virtualized infrastructure, and it mimics sandbox environments to behave like real physical machines. The evasion techniques described in this post, widely employed by adversaries to evade detection, are bypassed by the McAfee Advanced Threat Defense sandbox, whose countermeasures cover:

- Usage of Windows APIs to delay execution of the sample, and checks of hard disk size, CPU core count and other environment information.
- Methods to identify human interaction through mouse clicks, keyboard strokes and interactive message boxes.
- Retrieval of hardware information such as hard disk size, CPU count, and hardware vendor checks through registry artifacts.
- System uptime checks that identify how long the system has been running.
- Checks of the Windows color depth and screen resolution.
- Checks of recently used documents and files.

In addition to this, McAfee Advanced Threat Defense is equipped with smart static-analysis engines as well as machine-learning-based algorithms that play a significant detection role when samples detect the virtualized environment and exit without exhibiting malicious behavior. One of McAfee's flagship capabilities, the Family Classification Engine, works at the assembly level and provides significant traces once a sample is loaded in memory, even if sandbox detonation does not complete, resulting in enhanced detection for our customers.

Traditional sandboxing environments were built by running virtual machines over one of the available virtualization solutions (VMware, VirtualBox, KVM, Xen), which leaves huge gaps for evasive malware to exploit. Malware authors continue to improve their creations by adding new techniques to bypass security solutions, and evasion techniques remain a powerful means of detecting a sandbox. As technologies improve, so do malware techniques. Sandboxing systems are now equipped with advanced instrumentation and emulation capabilities that can detect most of these techniques. However, we believe the next step in sandboxing technology is the bare-metal analysis environment, which can defeat virtually any form of evasive behavior, although common configuration weaknesses will still be easy to detect.
Radiation therapy is often used to treat cancer patients. Now, doctors at Washington University School of Medicine in St. Louis have shown that radiation therapy, aimed directly at the heart, can be used to treat patients with a life-threatening heart rhythm.

They treated five patients at the School of Medicine who had irregular heart rhythms called ventricular tachycardia. The patients had not responded to standard treatments and collectively experienced more than 6,500 episodes of ventricular tachycardia in the three months before they were treated with radiation therapy. In ventricular tachycardia, the heart beats exceedingly fast and its chambers often fall out of sync, interfering with blood flow and placing patients at risk of sudden cardiac death. When delivered directly to problematic areas of the heart muscle, the radiation therapy resulted in a dramatic reduction in the number of ventricular arrhythmia events in these patients, as measured by their implanted defibrillators. An analysis of the patients' experiences is reported Dec. 14 in The New England Journal of Medicine. Two previous cases of treating ventricular tachycardia with radiation therapy have been reported, but this is the first to do so in an entirely noninvasive process, from imaging to treatment.

"As a radiation oncologist who specializes in treating lung cancer, I've spent most of my career trying to avoid irradiating the heart," said senior author Clifford G. Robinson, MD, an associate professor of radiation oncology. "But I also have been exploring new uses for stereotactic body radiation therapy that we use almost exclusively for cancer."

At the same time, cardiologist and first author Phillip S. Cuculich, MD, an associate professor of medicine, was looking for new ways to treat ventricular tachycardia in patients who did not respond to conventional treatments. Ventricular tachycardia is estimated to cause 300,000 deaths per year in the U.S. and is the leading cause of sudden cardiac death. Standard therapy includes medication and invasive procedures that involve threading a catheter through a vein into the heart and selectively burning the tissue that causes the electrical circuits of the heart to misfire.

"These patients have defibrillators implanted to act like a paramedic and save their lives if a bad heart rhythm starts up," Cuculich said. "The device recognizes a dangerous arrhythmia and can deliver a life-saving electrical shock. While it's wonderful that we can stop people from dying in that situation, the shock can be a traumatic event. Patients understand that they have just avoided death. And when this happens repetitively, often without warning, it can be devastating for patients."

Ventricular tachycardia often develops after injury to the heart, commonly following a heart attack. As the heart muscle attempts to heal, the resulting scars interrupt the proper flow of electrical impulses. Traditional catheter ablation essentially kills off the tissue that triggers the electrical misfires. But the procedure is too risky for many patients with additional medical problems, and the arrhythmia often returns after a period of time. The five patients in the study had undergone catheter ablation procedures and their ventricular tachycardia returned, or they were unable to go through the procedure because of other high-risk medical conditions. One patient was on the waiting list for a heart transplant. Four of the patients were in their 60s; one patient was over age 80.
In the three months before treatment with noninvasive radiation therapy, the five patients together experienced more than 6,500 ventricular tachycardia events. The average number of events per patient during this time was 1,315, with a range of five to 4,312. During the first six weeks following radiation therapy, as the patients were recovering, they experienced a total of 680 episodes. In the year the patients continued to be followed, they collectively had four events. Two patients didn't experience any episodes at all.

The investigators are cautious, saying they are still monitoring for long-term side effects of radiation therapy, such as lung scarring and further damage to the heart itself. They emphasized that their use of external radiation to the heart only included very ill patients in end-stage disease who had run out of options. More research is required before doctors might consider this approach for younger, healthier patients or as a possible addition to standard therapies.

"A lot of my work is focused on reducing toxicity of radiation therapy using modern technology," Robinson said. "These patients have done quite well in the first 12 months after therapy, which is enough time to see the early toxicities. But we're continuing to monitor patients for long-term side effects."

The single dose of radiation these patients received is on par with what might be given to a patient with an early-stage lung tumor. Doctors can target such tumors with a large dose of radiation given once or up to five times. The preparation and mapping of the anatomy and electrical circuits of the heart is time-intensive, but the treatment itself takes 10-15 minutes, the researchers said.

"A traditional catheter ablation procedure can take six hours or more and requires general anesthesia," Cuculich said. "This new process is entirely noninvasive. We take pictures of the heart with various imaging methods: MRI, CT and PET scans. But the unique piece is the noninvasive electrical mapping called electrocardiographic imaging. This allows us to pinpoint where the arrhythmias are coming from. When we overlay the scar mapping with the electrical mapping, we get a beautiful model of heart function that lets us see not only where the arrhythmia comes from, but where it might progress.

"Based on these maps, Dr. Robinson is then able to deliver the energy entirely noninvasively," Cuculich added. "It's simply amazing to see a ventricular tachycardia patient get an ablation therapy for a few minutes and then get up off the table and walk out the door."

Electrocardiographic imaging (ECGI) was developed by co-author and Washington University biomedical engineer Yoram Rudy, PhD, the Fred Saigh Distinguished Professor of Engineering.

The radiation therapy does not take effect immediately. The number of arrhythmia events went down but did not disappear in the first six weeks after treatment, which the doctors characterize as a recovery period. After that six-week period, however, the number of events dropped to almost zero. Patients were able to slowly come off medications used to control the arrhythmia. Of the five patients, one died in the first month after treatment of causes unlikely to be related to the treatment. This patient, who was over age 80, had other heart conditions in addition to ventricular tachycardia. The remaining four, who are all in their 60s, are alive two years after radiation therapy. The patient on the transplant list went on to receive a new heart.
One patient, whose arrhythmia continues to be controlled, is also dealing with gradual heart failure, meaning the heart muscle is weakening over time, and has received a left ventricular assist device. Two patients continue to live unassisted without ventricular tachycardia. The researchers currently are enrolling patients in a clinical trial to further evaluate this approach and, to date, have performed the procedure on 23 patients.
A Cracker is an individual who breaks into computer accounts, systems, or networks and intentionally causes harm. A cracker may do this for profit, for malicious reasons, for a social cause, or just because the challenge is there. Once inside a system, crackers typically operate without regard to the safety and security of your data, often acting in ways that cause unnecessary harm or destruction.

Some security researchers divide crackers into two groups. A very small portion of self-described crackers are capable of researching, testing, and building proof-of-concept exploit code that "cracks" or compromises a commercial product, computing device, or network. These individuals are few and far between. The majority of crackers are simply "script kiddies" who purchase other hackers' exploit toolkits and run them to gain access to the networks, devices, and accounts they seek to exploit. This is by far the larger group.

Additional Reading: The Top Ten Password-Cracking Techniques Used By Hackers
Unknown attackers stole customer data from the online investing firm Robinhood. On November 8, Robinhood published a statement informing the public that a data breach had occurred on November 3, exposing the email addresses of some 5 million customers as well as the full names associated with another 2 million accounts. More extensive information for 310 individuals was exposed in the attack as well. The breach was the result of a social engineering attack in which an attacker called a customer support employee and was able to obtain access to customer support systems. Robinhood also confirmed that an extortion demand was received shortly after the attack, but did not elaborate further; the demand likely involved the threat of releasing the stolen data.

Robinhood released an updated statement a week later, disclosing that 4,400 phone numbers had been compromised as well, along with other text data entries that were still being analyzed. Phone numbers are increasingly coveted by hackers because so many multifactor authentication (MFA) systems rely on mobile phones. Hackers have learned to steal phone numbers and port them over to burner phones so they can impersonate the victim and seize control of their online accounts.

IDENTIFY INDICATORS OF COMPROMISE (IOC)

Social engineering is all about psychologically manipulating a victim to either perform a desired action or disclose information that can be useful in the attack. The most prevalent example of social engineering is the phishing attack, in which users are encouraged to initiate a wire transfer or asked to log on to what they think is a legitimate site that will in turn expose their credentials. Social engineering can also involve other forms of communication such as text, social media, and the telephone. The attackers knew their corporate victim well, as Robinhood had just recently expanded its customer service department in order to provide 24/7 customer support. They preyed upon newly trained employees still navigating proper protocols and exploited that vulnerability.

CONTAINMENT (If IOCs are identified)

Robinhood is asking all customers to scrutinize their emails for phishing attacks, particularly ones that appear to be coming from Robinhood. If possible, customers are encouraged to enable multifactor authentication for their accounts and check for messages using the Robinhood app rather than depending on email. Customers who want to call Robinhood should refer to the phone numbers listed within the app and should never call a Robinhood phone number delivered by email. Customers should only use the Robinhood app to interact with Robinhood representatives. In addition to its communication efforts, Robinhood contacted law enforcement about the incident and engaged an outside security firm to address the attack.

PREVENTION

The Robinhood breach is a classic example of how important it is to have a multilayer security strategy in place. In this case, tools such as endpoint security, email filtering and a perimeter firewall could not have thwarted the phone call made by the threat actor, which is what made the attack possible. Hackers use multiple attack venues to breach your network. A thorough risk assessment can shine light on the many risk exposures of your enterprise. To stop an attacker, you must think like an attacker.
The incident also underscores the importance of cybersecurity training across your company, educating employees on the signs to look for when confronted with a given situation. Strict adherence to the principle of least privilege and the practice of promoting a zero-trust environment both play important roles in combatting social engineering threats such as this. The principle of least privilege requires that users be allotted the minimum level of access needed to perform their job functions. Zero-trust networking states that trust is established only through continuous authentication and monitoring of every access attempt, regardless of whether it was initiated within the network itself.

Prepare for cyber threats through an Incident Response Readiness program.

HALOCK Breach Bulletins: Recent data breaches to understand common threats and attacks that may impact you – featuring description, indicators of compromise (IoC), containment, and prevention.
COBOL: 10 Reasons the Old Language Is Still Kicking, by Darryl K. Taft

1. COBOL Is Easy: Learning COBOL isn't like learning a completely new language, because its syntax reads like English. It consists of English-like structural components such as verbs, clauses and sentences.
2. COBOL Runs Everywhere: COBOL has been ported to virtually every hardware platform.
3. Here Today, Here Tomorrow: COBOL will work tomorrow as well as it does today. Businesses already using COBOL are likely to continue to use it rather than replace it.
4. COBOL Gets the Numbers Right: COBOL's numeric processing functions make it a good choice for applications where the tiniest fractional rounding error can make a crucial difference.
5. COBOL Supports Popular IDEs: Developers can use COBOL with their favorite IDE, so there's no need to learn a new toolset. You can develop COBOL applications using Visual Studio or Eclipse.
6. COBOL Systems Process Data Quickly: COBOL systems use indexed data files that maintain internal B-tree structures, providing rapid access to data even when data stores run into terabytes.
7. COBOL Is Self-Documenting: COBOL does not require the same level of commenting as other languages, which helps make maintaining someone else's COBOL code easier.
8. COBOL Is Fast: COBOL has more than 50 years of optimizations under its belt, so it delivers good performance.
9. COBOL Integrates With Everything: By combining COBOL skills with systems in use today, you can extend existing COBOL applications to the web, mobile and cloud.
10. COBOL Is Everywhere: We are surrounded by COBOL; it runs over 70 percent of the world's business transactions. It makes sense to replenish the supply of COBOL programmers by training new ones.
The debt to assets ratio refers to the percentage of a company's overall assets that is financed by debt instead of equity. The ratio is typically used to gauge the economic risk of an organization: a high ratio suggests that a majority of the organization's assets are financed by debt.

Assets include tangible and intangible items, such as capital assets, inventory, and goodwill, as well as the company's working capital. Assets can be either free-floating or fixed. Free-floating assets carry no interest rate risk, whereas fixed assets do. Fixed assets include real estate, fixed investments, and other property owned by the company; because they do not fluctuate in value, they are less risky than free-floating assets. Tangible fixed assets include buildings, machinery, and vehicles; intangible ones include software and information. All types of assets provide future income if there are future profits. An organization should consider its fixed assets as one unit, and when determining its debt-to-asset ratio it should include all of them.

Assets are more expensive than liabilities because they are fixed and not subject to fluctuation. A company must use cash or borrow money to fund its fixed assets, and the money used to purchase assets should match the cash flow expected from them. If a large increase in cash flow is expected from the assets, the debt-to-asset ratio will increase because more cash will be required. In addition to purchasing assets, a corporation must also pay interest on the debt that funds them; as a result, the debt-to-asset ratio tends to increase when interest rates are falling.

The assets an organization utilizes for its business operations, like buildings, machinery, and equipment, usually make up the largest portion of the assets it owns; these are the fixed assets. The remainder consists of working capital and finance-related assets. Finance-related assets are those a firm has available to make payments on debts but holds no direct interest in, including receivables, accounts receivable, loans, credit cards, vendor and supplier balances, inventories, and accounts payable.

Debt to assets ratios are used by banks and other financial institutions to determine the equity and risk of a company, taking into account the size and liquidity of the organization's assets. Other ratios used by companies include the credit-to-debt ratio and the net-debt-to-assets ratio; these look at the ratio of total debt to the company's assets, including cash, investments, accounts receivable, and accounts payable, as well as the ratio of assets to fixed assets. The debt to assets ratio also takes into account the amount of debt owed on the organization's equity. This is a useful measure of financial strength because it represents the total amount of money the company would need to pay back to each owner of its assets, rather than all debt balances.
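As a quick worked example, with made-up balance-sheet numbers, the debt-to-assets ratio and the related debt-to-equity measure discussed next can be computed as follows:

```python
def debt_to_assets(total_debt: float, total_assets: float) -> float:
    # Share of the asset base financed by debt rather than equity.
    return total_debt / total_assets

def debt_to_equity(total_debt: float, total_equity: float) -> float:
    # Debt burden relative to the owners' stake.
    return total_debt / total_equity

# Hypothetical balance sheet: $1M in assets funded by $400k of debt.
assets, debt = 1_000_000, 400_000
equity = assets - debt
print(f"debt-to-assets: {debt_to_assets(debt, assets):.0%}")   # 40%
print(f"debt-to-equity: {debt_to_equity(debt, equity):.2f}")   # 0.67
```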
Debt to equity is often considered a better measure of a company's financial health because it more directly represents the current debt burden and the amount of equity that would have to be consumed to pay off the debt. Because a company's debt load reflects its ability to pay back what it owes, the ratio tells investors how much equity the company has available to cover its debt. As a result, debt to equity is often a better measure of a company's ability to make future payments than the balance sheet or book value alone.
Several time synchronization mechanisms can be used in a network. The most common standards are Network Time Protocol (NTP) and Precision Time Protocol (PTP). NTP, the older and better-known protocol, is currently in its fourth version. NTP was primarily developed to achieve accuracy in the submillisecond range and is widely implemented for network timekeeping. Because NTP is based on software timestamping (explained later in this article), it can be less accurate for certain industrial applications that require tight synchronization. PTP is more accurate than NTP because it uses hardware timestamping, and it also accounts for device latency while synchronizing time. In short, NTP synchronizes clocks with millisecond accuracy; PTP aims far lower.

What is PTP?

Precision Time Protocol (PTP) is a network-based time synchronization standard designed for distributing precise time and frequency from a clock source over packet-based networks. PTP enables switches and routers to deliver synchronization with a higher level of accuracy than NTP, suitable for today's cloud networks and data center infrastructures. PTP is capable of synchronizing multiple clocks to better than 100 nanoseconds on a network specifically designed for IEEE 1588.

PTP Clock Types

The types of clocks are as follows:

- Grandmaster Clock (GMC): the reference time source, derived from an accurate clock such as a GNSS-driven clock (GPS, GLONASS, GALILEO).
- Boundary Clock (BC): a network device that acts as slave to its master and as master to its slaves.
- Ordinary Clock (OC): a clock that operates either as a master or a slave; in the slave case, it is the endpoint whose clock is being synced (normally a host or server).
- Master Clock (MC): a clock that operates as a master and derives its timing from the clock chain up to the GMC. It typically serves as a port on a BC connected to a host running as a slave.
- Transparent Clock (TC): a device that calculates the residence time of a PTP event message and updates the correction field (CF) of the message before forwarding it. Its ports are not in any specific state.

So the GMC usually has a precise time source, such as GPS, that functions as the reference clock. A Boundary Clock acts as a secondary clock on the port that connects to the primary and distributes time to all other downstream devices; its secondary port synchronizes time from the upstream PTP device. A Transparent Clock forwards a PTP event message after updating its residence time.

The master clock provides synchronization messages that the slaves use to correct their local clocks. Precise timestamps are captured at the master and slave clocks; these timestamps are used to determine the network latency, which is required to synchronize the slave to the master. A sync message is typically transmitted every two seconds from the master, and a delay request message is sent from the slave less frequently, about once per minute. Four timestamps, commonly referred to as T1, T2, T3, and T4, are captured between the master and slave clocks and are required for the slave's offset calculation. Two delay paths must be calculated: master to slave, and slave to master.

First, find the master-to-slave difference. The first timestamp, T1, is the precise time at which the sync message leaves the master.
T1 is carried in a follow-up message, since it is sampled at the moment the sync message is transmitted on the Ethernet port. The second timestamp, T2, is the precise time at which the sync message is received at the slave. Once T1 and T2 are available at the slave, the master-to-slave difference can be calculated:

Master-to-slave difference = T2 - T1

Second, find the slave-to-master difference. The third timestamp, T3, is the precise time at which the delay request message leaves the slave. The fourth timestamp, T4, is the precise time at which the delay request message is received at the master. Once T3 and T4 are available at the slave:

Slave-to-master difference = T4 - T3

The one-way delay can be calculated once both differences are available at the slave:

One-way delay = (master-to-slave difference + slave-to-master difference) / 2

The offset used to correct the slave clock is then:

Offset = master-to-slave difference - one-way delay = ((T2 - T1) - (T4 - T3)) / 2
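A direct transcription of these formulas, with hypothetical timestamps chosen so that the slave runs 150 microseconds ahead of the master over a symmetric 50-microsecond path:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    # Straight transcription of the formulas above; assumes the path
    # delay is symmetric, which is the protocol's own assumption.
    master_to_slave = t2 - t1
    slave_to_master = t4 - t3
    one_way_delay = (master_to_slave + slave_to_master) / 2
    offset = master_to_slave - one_way_delay   # == ((t2 - t1) - (t4 - t3)) / 2
    return offset, one_way_delay

# Hypothetical timestamps in seconds.
offset, delay = ptp_offset_and_delay(10.000000, 10.000200, 10.000300, 10.000200)
print(f"offset = {offset * 1e6:.0f} us, one-way delay = {delay * 1e6:.0f} us")
# -> offset = 150 us, one-way delay = 50 us
```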
Hardware Timestamping vs. Software Timestamping

The IEEE 1588 protocol does not define how to implement PTP in a master or a slave. Two methods have been adopted for PTP over Ethernet, software timestamping and hardware timestamping, described in the following paragraphs.

Grandmaster with Hardware Timestamping

While locked to GPS, the grandmaster clock can provide precise nanosecond timestamp resolution and accuracy better than 30 nanoseconds referenced to GPS. A grandmaster clock incorporates a local reference oscillator that is disciplined to GPS. This oscillator is the reference clock used, together with dedicated hardware, to precisely timestamp incoming delay request packets and outgoing sync packets. Using an oscilloscope, the 1PPS (one pulse-per-second) output from the grandmaster can be compared to a 1PPS output from a hardware slave to measure synchronization accuracy. The dedicated hardware approach is unaffected by operating system or network traffic latency.

Slave with Hardware Timestamping

Hardware timestamps with a PTP software daemon provide precise nanosecond timestamp resolution with dedicated hardware, typically in a PCIe form factor. The hardware slave solution has many advantages over the software slave, such as an improved oscillator, a 1PPS output for measurements against the master, and dedicated hardware that is unaffected by operating system latency. Synchronization better than 100 nanoseconds is achievable using either a crossover cable or a 1588-capable Ethernet switch.

Slave with Software Timestamping

Software-only implementations utilize existing computer hardware and a PTP daemon. The software slave must compensate for the internal oscillator on the computer motherboard using software timestamping. The local oscillator on the motherboard is typically of poor quality, and software timestamping is affected by operating system latency. Measuring a software slave against the master is limited to log-file statistics, as there is no 1PPS output to compare with the master. Synchronization of around 10 microseconds is achievable with a software slave, with typical results between 10 and 100 microseconds.

PTP defines event and general messages. Event messages are timed messages for which an accurate timestamp is generated at both transmit and receive time; general messages do not require accurate timestamps.

- Sync: sent by the primary to distribute the time of day.
- Delay_Req: sent by the secondary to the primary for end-to-end delay measurement (request-response delay mechanism).
- Pdelay_Req: sent by link node A for peer-to-peer delay measurement.
- Pdelay_Resp: sent by link node B for peer-to-peer delay measurement.
- Follow_Up: sent by a two-step primary clock following the sync message.
- Delay_Resp: sent by the primary for end-to-end delay measurement.
- Pdelay_Resp_Follow_Up: sent by link node B for peer-to-peer delay measurement.
- Announce: sent by the primary to establish a synchronization hierarchy. The best master clock algorithm decides the best primary clock based on the clock properties in the announce message.
- Management: sent by a management node to a clock to update and query the PTP datasets maintained by the clock.
- Signaling: sent by clock A to clock B for purposes such as unicast negotiation.

The clocks managed by PTP follow a master-slave hierarchy, in which the slaves are synchronized to their masters. The hierarchy is updated by the best master clock (BMC) algorithm, which runs on every clock. A clock with only one port can be either master or slave; such a clock is called an ordinary clock (OC). A clock with multiple ports can be master on one port and slave on another; such a clock is called a boundary clock (BC). The top-level master is called the grandmaster clock. The grandmaster clock can be synchronized to a Global Positioning System (GPS) receiver, allowing disparate networks to be synchronized with a high degree of accuracy.

Hardware support is the main advantage of PTP. It is supported by various network switches and network interface controllers (NICs). While it is possible to use non-PTP-enabled hardware within the network, the best possible accuracy is achieved when every network component between the PTP clocks is PTP hardware-enabled.
Individuals and organizations connect computers in a network for many reasons, including sharing hardware accessories, software, data and information, and facilitating communication between different departments. Basically there are three different types of network, each of which is further classified; I will discuss that in the next post.

Types of network:
- LAN (Local Area Network)
- MAN (Metropolitan Area Network)
- WAN (Wide Area Network)

MAN: Computers (nodes) are spread over a metropolitan area such as a city and its suburbs. A MAN is a large network that covers an entire city, connected within a single city or metropolitan area. It is effective, and the speed of data communication is high. A MAN can be operated by a single organisation or shared by several organisations in the same city.

WAN: A Wide Area Network is a communication system that establishes a network in which the computers (nodes) are spread over a large geographical area, such as a country or a continent.
Though aimed at being a convenient alternative, telehealth practices must do their best to replicate in-person visits. All data shared between patient and provider travels over the internet, meaning that extra precautions should be taken to protect sensitive personal health information (PHI). Experiencing a data breach of any kind can be damaging to both an individual and an organization. Whether you are new to telehealth or have been running your practice for some time, you should know that protecting PHI is the law. A solid defense against outside (and sometimes inside) threats will help reassure your patients that a telehealth system is a viable option for their healthcare needs.

Implement a multi-factor authentication system

Implementing two-step verification, otherwise known as multi-factor authentication (MFA), to ensure the right individuals are accessing your platform for appointments and data is a great start. According to Microsoft, there are over 300 million fraudulent sign-in attempts made against their cloud services every day. MFA blocks 99.9 percent of these automated cyberattack attempts on Microsoft platforms, websites, and other online services. Whether you are using a Microsoft system or another program, your results may vary, though they should produce similar outcomes. MFA can come in the form of a security question, a key code (usually received by text message or email), or a similar protocol. Having an MFA process at multiple access points of your platform is recommended. Usually, and as a way to avoid inconveniencing your patients, most systems will remember logins from recent devices, meaning that patients won't have to undergo the same process every time they access the platform.

Data encryption is a must

No matter what platform you decide to host your telehealth practice on, be sure that data encryption is included. This is important to the safety of PHI when it is collected, stored, or moved. In their work on telehealth security practices, Joseph L. Hall and Deven McGraw describe encrypted data as electronically "locked" material that uses complex mathematics and encryption "keys" to ensure that if an attacker does gain access to the raw data, it is rendered useless.

Telehealth and HIPAA compliance (United States)

HIPAA (the Health Insurance Portability and Accountability Act of 1996) is legislation that provides data privacy and security guidelines for protecting PHI. When it comes to telehealth, HIPAA's security section sets out the following guidelines:

- Only authorized users should have access to ePHI.
- A system of secure communication should be implemented to protect the integrity of ePHI.
- A system of monitoring communications containing ePHI should be implemented to prevent accidental or malicious breaches.

When looking to store ePHI using a third-party cloud system, be sure that the provider can give you a Business Associate Agreement (BAA). Any individual or entity (third party) that performs functions on behalf of a covered entity, where PHI is accessed by the third party, is considered a business associate. When entering into a BAA with a third party, be sure the agreement outlines the methods the third party will use to protect the data, as well as regular maintenance procedures. You can learn more about HIPAA requirements for telehealth at the U.S. Department of Health & Human Services website.
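As a small illustration of the encryption requirement discussed above, the sketch below encrypts a record at rest using the widely used Python cryptography library. The record contents are invented, and real deployments would manage the key in a KMS or HSM rather than generating it inline.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Key management is the hard part in practice (KMS/HSM, rotation, BAA terms);
# generating the key inline here is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient_id": "12345", "note": "hypothetical PHI payload"}'
token = cipher.encrypt(record)          # ciphertext, safe to store or transmit
assert cipher.decrypt(token) == record  # only the key holder can read it back
```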
Telehealth and HIPAA compliance (United States)

HIPAA (the Health Insurance Portability and Accountability Act of 1996) is legislation that provides data privacy and security guidelines for protecting PHI. For telehealth, HIPAA sets out its guidelines in its Security Rule, which states the following:
- Only authorized users should have access to ePHI.
- A system of secure communication should be implemented to protect the integrity of ePHI.
- A system of monitoring communications containing ePHI should be implemented to prevent accidental or malicious breaches.

When looking to store ePHI using a third-party cloud system, you will want to be sure the provider can give you a Business Associate Agreement (BAA). Any individual or entity (third party) that performs functions on behalf of a covered entity, where PHI is accessed by the third party, is considered a business associate. When entering into a BAA with a third party, be sure the agreement outlines the methods the third party will use to protect the data, as well as its regular maintenance procedures. You can learn more about HIPAA requirements for telehealth at the U.S. Department of Health & Human Services website.

When looking to welcome new telehealth patients, providers must promote their ability to protect PHI. Many who are new to this type of service, or to technology in general, may be reluctant to share PHI over the internet, so it is important to provide reassurance backed by a solid system. Giva offers a telehealth and remote patient monitoring support model that is compliant with the standards and structures of the IT Infrastructure Library (ITIL) framework, maximizing the return on investment of your help desk while maintaining data security best practices. Furthermore, HIPAA compliance, quick deployment, and leading customer service will have you ready to support patients in no time. Visit our website to learn more about Giva's HIPAA-compliant Telehealth Help Desk program.
One of the initial tasks artificial intelligence (AI) failed miserably at was facial recognition. It was so bad that it created a significant grassroots effort to block all facial recognition, and IBM, which pioneered facial recognition, exited that part of the AI market. At the core of the problem were biased data sets that had unacceptable error rates for minorities and women. We've learned from companies like NVIDIA, which aggressively use simulation to train self-driving cars and robots, that simulation and related training at machine speeds can significantly increase the accuracy of autonomous machine training. I met recently with a company called Datagen that uses synthetic people to create unbiased facial recognition programs and potentially make metaverse-based collaboration systems more effective. Let's explore the use of synthetic people to improve AI accuracy and create the next generation of collaboration platforms:

AIs and biased data

We now know that biased data sets lead to embarrassingly inaccurate AIs, which suggests that market researchers, who are trained to identify and eliminate bias as a matter of practice, should have been brought in to create practices that would lead to less biased data sets. Using live data, it is virtually impossible to eliminate all bias without making the data sets so large that they become unmanageable. To correct this, creating synthetic humans that highlight the unique differences and don't over-emphasize the similarities becomes an interesting way to increase the accuracy of computer vision-based efforts. These synthetic humans can be used for both training and testing, although it is inadvisable to use the same data set for both: that would simply confirm the data set was implemented without errors and would not catch errors in the data set itself. You can also run the synthetic data set against real data, both to look for bias in the real data set and to catch any unintended bias in the synthetic training set.
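As a concrete (and entirely hypothetical) illustration of that cross-check, the sketch below compares how often each demographic group appears in a real data set versus a synthetic one; the group labels and counts are invented, and real audits would use richer statistics than a simple proportion gap.

```python
# Compare group representation in a real vs. a synthetic data set.
# A large gap in either direction is a prompt to investigate bias.
from collections import Counter

real_labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
synthetic_labels = ["group_a"] * 340 + ["group_b"] * 330 + ["group_c"] * 330

def proportions(labels):
    counts = Counter(labels)
    return {group: n / len(labels) for group, n in counts.items()}

real_p, synth_p = proportions(real_labels), proportions(synthetic_labels)
for group in sorted(set(real_labels) | set(synthetic_labels)):
    r, s = real_p.get(group, 0.0), synth_p.get(group, 0.0)
    print(f"{group}: real={r:.2f} synthetic={s:.2f} gap={abs(r - s):.2f}")
```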
With synthetic data, you can also use the result, without privacy violations, for a variety of other functions. These include broader metaverse efforts where realistic artificial people will enhance the apparent reality of the related simulation. Say, for instance, you wanted to showcase the light coverage in the interior of a building once it was occupied. Using real images would create licensing and privacy issues, whereas using synthetic images derived from a variety of people should not.

It's not just people

This synthetic data doesn't have to apply just to people. It can be used in security systems in stores to identify shoplifting or help with automated checkout, improve hand tracking for virtual reality (VR) solutions, simulate uses for planned buildings to remove inefficiencies long before construction starts, and improve body-tracking accuracy for everything from protecting drivers to improving marketing programs. It would also be very handy for home security, in terms of identifying packages and better alerting to porch pirates. It can even help with facial reconstruction after an accident, but one of the most interesting applications is in collaboration products.

Meta is aggressively pursuing metaverse-based collaboration where you are represented by an avatar. Avatars, though, can look more like cartoons than people. You can't use a video image of someone because, in this implementation, most participants are wearing VR glasses, which are off-putting to everyone in the conversation. What you need is a level of accuracy more like deep fakes, where you look like you, and your body and facial expressions appear realistically on your avatar. Datagen demonstrated a far more realistic avatar technology using its computer vision algorithms coupled with eye, face, and body tracking, with a particular focus on hand tracking. With Datagen's technology, you shouldn't need to use a controller, as your hand is your controller. Instead of floating around legless, your entire body is rendered in a more photorealistic way. While Datagen's current capability is far better than some alternatives, it is still on the wrong side of the uncanny valley in my opinion. But it should improve sharply over time, to the point where you can't tell the difference between an avatar and a real person. This would allow you to freeze your appearance at your favorite age, dress digitally, and attend meetings in your pajamas while still looking professionally dressed on a remote video call.

Turning a video feed into actionable data that can be accurately interpreted by an AI is critical to the advancement of everything from security reporting and access technology, including facial biometrics, to autonomous machines. Our future automation efforts will depend on getting this right and correcting the current lack of trust in facial recognition solutions. Datagen has a set of tools that could massively increase this accuracy and benefit efforts that include far more viable metaverse-based collaboration and communications. While young, Datagen appears to be at the forefront of improving computer vision substantially and building future tools that will help us create stronger AIs and a far more accurate metaverse.
At the end of April, scientists at IBM announced two critical advances that could eventually lead to the development of a quantum computer. For the first time, they demonstrated the ability to detect and measure both kinds of quantum errors simultaneously, and they demonstrated a new square quantum bit circuit design that could successfully scale to larger dimensions. Baseline caught up with Jay Gambetta, manager of the Theory Quantum Information Group at IBM, and tapped his thinking about how these developments could affect the technology and business worlds.

Baseline: What is quantum computing, and why should business and IT executives pay attention to this technology?

Jay Gambetta: Quantum computing is an alternative to classical computing, and, as the name implies, it uses quantum mechanics to compute. Unlike a classical bit, which must be either a zero or a one at any given moment, a quantum bit can be in both the zero and the one state at the same time. This is known as a "superposition." Quantum computing uses these superpositions—as well as another concept called entanglement—to compute in an entirely different way. It uses qubits [quantum bits] that can compute over multiple paths simultaneously. In a practical sense, a quantum computer would deliver answers at far greater speeds than today's digital computers, including supercomputers.

Baseline: Is a quantum computer simply faster, or will it address new and different tasks?

Gambetta: It would be much faster, but there's a difference between what you can do classically and what you can do with quantum. Historically, quantum computers have been explored as a cryptography tool. But nature is fundamentally quantum, and these computers could possibly be used to explore chemical reactions and things in nature. This could, for example, result in new drugs. But anything that requires an understanding of quantum physics could advance through this approach.

Baseline: What does the IBM qubit circuit look like?

Gambetta: It is based on a square lattice of four superconducting qubits on a chip that's roughly one-quarter-inch square. This enables both types of quantum errors to be detected at the same time. The square-shaped design avoids the problems associated with a linear array, which cannot detect both kinds of quantum errors simultaneously. The square has allowed us to go to the next stage: it allows us to detect both bit-flip and phase errors. This is all necessary for a quantum computer. We are now at the starting point for more complicated systems.

Baseline: What are the biggest remaining challenges?

Gambetta: Quantum information is very fragile. Qubit technologies lose their information when interacting with matter and electromagnetic radiation. However, we are now at the point where qubits can be designed and manufactured using standard silicon fabrication techniques. Once a handful of superconducting qubits can be manufactured reliably and repeatedly, and controlled with low error rates, there will be no fundamental obstacle to demonstrating error correction in larger lattices of qubits.

Baseline: What is the takeaway from this breakthrough, and how far away are researchers from achieving a quantum computer that could produce real-world results?

Gambetta: This experiment has helped move us from the theoretical to an ability to conduct more complicated experiments. Quantum computing appears to be about 10 to 20 years away.
We are still in the early stages, but we are beginning to see significant advancements in this space. The technology would help solve problems that we cannot solve today due to limitations in computing power. Photo courtesy of IBM.
Routers select the best routes based on the following criteria:
- Longest prefix match: Routers select the route with the longest match to the destination address of the forwarded packet. For example, if a packet is destined for 192.168.12.1 and the router has both 192.168.0.0/16 and 192.168.12.0/24 in its routing table, it will forward the packet using the 192.168.12.0/24 route.
- Administrative distance: If a router receives the same route from multiple routing protocols, it installs the route with the lowest administrative distance (AD) in the routing table. For example, if the router receives 192.168.12.0/24 from both OSPF (AD 110) and RIP (AD 120), the OSPF route is selected. The following table lists default administrative distances from lowest to highest:

Routing protocol | Default AD
Connected interface | 0
Static route | 1
Enhanced Interior Gateway Routing Protocol (EIGRP) summary | 5
External Border Gateway Protocol (BGP) | 20
Internal EIGRP | 90
IGRP | 100
OSPF | 110
Intermediate System-to-Intermediate System (IS-IS) | 115
Routing Information Protocol (RIP) | 120
Exterior Gateway Protocol (EGP) | 140
On Demand Routing (ODR) | 160
External EIGRP | 170
Internal BGP | 200
Unknown | 255

- Metric: If the router receives the same route multiple times from the same routing protocol, it consults the metric value for its selection; the lower the metric, the better. If routes have the same metric, both are installed in the routing table and the router load-balances packets over them. Cisco routers install up to 4 equal-metric routes (IGP) in the routing table by default, and you can change that number with the maximum-paths command under the protocol configuration mode.
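To illustrate that selection order, here is a short Python sketch. It is a toy model, not how router software is actually implemented; the sample routes, administrative distances and next hops are invented.

```python
# Toy route selection: longest prefix match, then lowest administrative
# distance (AD), then lowest metric.
import ipaddress

# Each entry: (prefix, AD, metric, next hop) -- sample data for illustration.
routes = [
    ("192.168.0.0/16", 120, 1, "10.0.0.1"),   # RIP
    ("192.168.12.0/24", 120, 1, "10.0.0.2"),  # RIP
    ("192.168.12.0/24", 110, 20, "10.0.0.3"), # OSPF: same prefix, lower AD
]

def select_route(destination, routes):
    dest = ipaddress.ip_address(destination)
    # Only routes whose prefix contains the destination are candidates.
    candidates = [r for r in routes if dest in ipaddress.ip_network(r[0])]
    if not candidates:
        return None  # no matching route; the packet would be dropped
    # Longest prefix wins; ties are broken by lowest AD, then lowest metric.
    return min(candidates,
               key=lambda r: (-ipaddress.ip_network(r[0]).prefixlen, r[1], r[2]))

print(select_route("192.168.12.1", routes))
# ('192.168.12.0/24', 110, 20, '10.0.0.3'): /24 beats /16, and AD 110 beats 120
```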
Lights and sensors monitoring the structure and environmental conditions around New Hampshire's Memorial Bridge are powered by underwater turbines. According to Erin Bell, the project to transform New Hampshire's Memorial Bridge over the Piscataqua River into a self-diagnosing, self-reporting "smart bridge" resulted from a fortuitous sequence of events. Bell, an associate professor of civil and environmental engineering at the University of New Hampshire, said a colleague in the School of Marine Science and Ocean Engineering came to her in 2013 with the idea of conducting a three-month test using the bridge before it was torn down to make way for a new structure. The plan was to attach an underwater turbine to the old Memorial Bridge that would generate electricity from the tidal motion of the water in the river. As she began making inquiries with the state, Bell was surprised when the contractor building the replacement bridge asked whether the turbine could instead be sited on the new vertical-lift bridge for a long-term deployment. "So that's where the project started," she said.

One part of Bell's work involves using sensors for structural monitoring. "We would put sensors on the girders to make sure we are not overstressing [the bridge]," she said. Then it occurred to the team that they could power the sensors with electricity generated by the tidal turbine. Bell decided to apply for a grant from the National Science Foundation's Partnerships for Innovation program, and that led to further expansion of the project. The program, Bell explained, encourages interdisciplinary research because it requires grant recipients to work with a state agency and with at least two industry partners. After the team put in a proposal, reviewers said "they wanted us to have more of a social science presence," Bell said. "So that's how we added our sociologist, who is doing public surveys and looking at how people receive scientific information, how they feel about renewable energy and how they feel about smart infrastructure."

According to Bell, however, there was a problem. Adding a social scientist meant there wasn't enough funding to build the turbine and deploy the sensors. Fortunately, she said, the state was willing to step up, and the team was awarded $400,000 by the New Hampshire Department of Transportation to expand the sensor network on the bridge. Funding in hand, the team is preparing to install a suite of sensors on the bridge, including accelerometers, tilt meters and strain gauges that measure pressure. All the sensors are wired to a node that transmits the data via Bluetooth to a hub. The team is also installing environmental sensors to monitor water quality, including salinity, turbidity and temperature, and an acoustic Doppler current profiler will be used to collect data about water flows. Finally, the researchers will install a weather station on the bridge to monitor temperature and humidity. That data is important because engineers have found that high winds affect structures like the Memorial Bridge differently depending on the temperature and humidity. "There are a lot of things we will learn about the future of bridge design," Bell said. And what the team learns may also help bridge managers make better decisions about maintenance and daily operations, such as raising a span to let shipping traffic sail underneath.
With the data, she said, bridge operators will know "when they shouldn't lift rather than just saying, 'I think it's too windy so we're not going to do a lift today.'" The three-meter-wide tidal turbine will power not only the sensors but also the bridge lighting. The idea, Bell said, is to make the infrastructure independent of the electricity grid. That idea was spawned by Hurricane Sandy, when there was no fuel or electricity, she said. "How would we lift this bridge if that happened here?" While the single turbine doesn't generate enough power to lift the span on the Memorial Bridge, this project is only a demonstration of the concept. "When new bridges are built, perhaps a larger array of these turbines can be integrated so the bridge will no longer be grid dependent," she said. Bell sees the Living Bridge project as part of a paradigm shift in her field toward integrating technologies and disciplines to make infrastructure smarter. "The idea was to make the bridge a living laboratory, something that students from kindergarten to PhD candidates could learn from."
What is SQL Injection and Can it Happen in an Oracle Database?

Unfortunately, the quick answer is a resounding YES – Oracle databases are by no means immune to these attacks. SQL injection is one of the most prominent and dangerous attack types, a staple inclusion in the OWASP Top 10. It is a code injection technique used to exploit vulnerabilities in the application layer to retrieve or corrupt the data it holds. A typical example occurs when required user input is incorrectly filtered or not strongly typed, allowing a user to supply an SQL statement that gets unexpectedly executed. While SQL injection is known predominantly as a website attack vector, it can be used to attack any SQL database. Securing the applications that use your Oracle databases is imperative to protect your data and reputation, especially as attackers have multiple automated tools at their fingertips to facilitate an SQL attack. Below are some tips on how to protect your web apps.

Examples of SQL Injection in Oracle

Attackers carrying out SQL injection attacks in Oracle will generally try to minimize the number of database calls in order to maximize their chances of success. One of the most popular tools used to carry out SQL injection in Oracle is an open-source tool called BSQL Hacker, used to discover exploits in the target web application. Some of the things that BSQL Hacker does include:
- fingerprinting the database version, user details, and permissions
- escalating the attacker's permissions to database admin
- obtaining available data from the database

One of the safest ways to defend against SQL injection is to never, ever concatenate user input into your SQL queries. These inputs should always – and by always we mean without exception – be bound into the statement. As soon as you allow the end user to input code into your SQL statements, it's as if you gave them the key to your apps. To carry out an SQL injection attack, an attacker must first find vulnerable user inputs within the web page or application and then input content, namely malicious SQL commands, which are in turn executed in the database. Successful attacks can gain total control over the affected database. Despite their devastating repercussions and widespread awareness, SQL injections remain commonplace, and many web applications remain vulnerable in production. It is crucial to understand how to prevent SQL injection attacks and keep hackers from breaching your databases.

How to Prevent SQL Injection in Oracle

SQL injection can be prevented using proper programming techniques and robust testing as part of your development pipeline. Here are some tips that could help you prevent SQL injection in Oracle and keep your application protected:

1. Input Validation

You must take precautionary measures to ensure that an attacker cannot inject malicious code through forms that feed directly into the database. This is the most common issue, as developers are often unaware that loose input validation can have catastrophic consequences. At a minimum, limit the number of characters a user can send through a form field; a "first name" field, for example, should never contain more than 32 characters.

2. Minimum Permissions

Another useful tip is to grant only the minimum permissions possible to the end user. This means limiting their ability to edit content on your website as much as you can, because it ultimately protects you from an attacker potentially taking over as an admin on your website.

3. Static Statements

When writing queries, try to always use static statements, so that an attacker cannot inject dynamic content that changes your statement. As an additional step in securing your web application, make sure that you bind variables whenever you can, given that this provides an extra layer of security.
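To make the bind-variable advice concrete, here is a minimal sketch using the python-oracledb driver. The table, column and connection details are placeholders invented for illustration; any driver that supports bind variables works the same way.

```python
# Bound parameters keep user input as data; it can never rewrite the SQL.
import oracledb  # the python-oracledb driver, assumed installed

conn = oracledb.connect(user="app_user", password="app_secret",
                        dsn="dbhost/orclpdb1")  # placeholder credentials
cur = conn.cursor()

user_input = "O'Brien"  # imagine this arrived from a web form

# UNSAFE (never do this): concatenation lets input become part of the SQL.
# cur.execute("SELECT id FROM customers WHERE last_name = '" + user_input + "'")

# SAFE: the value travels separately from the statement via a bind variable.
cur.execute("SELECT id FROM customers WHERE last_name = :name",
            name=user_input)
print(cur.fetchall())
```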
4. Encrypt Confidential Data

Encrypting the most important and confidential data adds an extra layer of protection that might just make a key difference for your web app. Do this whenever possible; it has been one of the most popular protection methods ever since security took its place in modern IT.

5. Blacklist Dangerous Characters

This one is pretty simple, yet extremely efficient in preventing malicious code from entering your web apps. You can create a list of characters that a user cannot send via input forms, such as "<>/?*()&", or even malicious keywords like "SELECT", automatically limiting the scope of vulnerability on your website.

Detecting SQL Injection in Oracle

Securing your applications is crucial nowadays. Data has never been more important or sensitive, which also means that attackers are getting smarter by the day. This is why Bright Security's developer-focused approach allows non-security-minded developers to scan their applications and learn all about potential vulnerabilities. In fact, you can sign up for a free account now to scan your applications and ensure that you avoid SQL injection and all other sorts of attacks on your apps.

Hopefully this article sheds some light on the threat of SQL injection in your Oracle database. The security of your applications should never be taken for granted, which is why checking and testing for security is now part of our everyday work as developers.
The National Center for Atmospheric Research (NCAR) has launched Cheyenne, a 5.34-petaflop supercomputer dedicated to supporting research in the Earth system sciences. With more than triple the performance of the previous NCAR supercomputer, Yellowstone, it is ranked the 20th most powerful system in the world by Top500. The supercomputer was built by HPE's recent acquisition, SGI, and features Intel's 18-core, 2.3GHz Broadwell Xeon E5-2697v4 processors and Mellanox EDR InfiniBand. It has more than four thousand dual-socket nodes, 20 percent of which have 128GB of memory while the rest have 64GB, giving it a total memory capacity of 313 terabytes. Storage comes from 20 petabytes of DataDirect Networks' SFA14KX systems, with the option to expand to 40 petabytes; DDN's system gives the computer a data transfer rate of 220 gigabytes per second. Cheyenne has a peak computation rate of more than 3 billion calculations per second for every watt of energy consumed. The supercomputer is housed in the NCAR-Wyoming Supercomputing Center (NWSC) in Wyoming and is scheduled to be in operation through 2021.

A cloud computer

The new supercomputer will be used to research various Earth system sciences, including climate change, weather, wildfires, seismic activity and solar disturbances, and to predict how much power wind farms will generate. "Cheyenne will help us advance the knowledge needed for saving lives, protecting property, and enabling US businesses to better compete in the global marketplace," said Antonio J Busalacchi, president of the University Corporation for Atmospheric Research (UCAR). "This system is turbocharging our science." UCAR manages NCAR on behalf of the National Science Foundation (NSF). Currently, however, the future of the NSF under the new Trump administration is less than clear. Obama appointee France Córdova has been told that she will remain in charge of the NSF, but it is not known whether that is a temporary arrangement until a replacement is found. It is believed that Trump's transition team has had limited contact with officials at the National Science Foundation. But scientists have expressed concern at what is unfolding at two better-known agencies - the Department of Energy and the Environmental Protection Agency. The Hill reports that the Trump administration is looking to cut DoE spending aggressively under expected new head Rick Perry, with the cuts modeled closely on proposals created by the conservative think tank the Heritage Foundation. The cuts would impact advanced scientific computing research and the nation's exascale efforts. Over at the EPA, Myron Ebell - who led Trump's transition team for the EPA - has said that he believes the President will gut the agency. "President Trump said during the campaign that he would like to abolish the EPA or 'leave a little bit'. It is a goal he has and sometimes it takes a long time to achieve goals. You can't abolish the EPA by waving a magic wand," Ebell told The Guardian. Trump's nominee to run the agency, Scott Pruitt, has been seen as a controversial pick due to his history of suing the EPA and the fact that he has received more than $318,000 from fossil fuel companies since 2002 to fund election campaigns. Kevin Trenberth, a senior scientist at NCAR who was speaking on behalf of himself, not the government body, told Times-Call: "There are major concerns.
The concerns stem from a number of statements that Trump himself has made, the appointments that he has nominated at several agencies that are very important to climate and related research, and the threats that have been bandied around in the process. There have also been threats to NASA, to do more space and planet research, instead of Earth-science research." It is possible that, even if funding to the supercomputer itself and the NSF as a whole remains unaffected, a decrease in the number of grants available to climate scientists from other agencies will reduce the number of researchers able to use Cheyenne. Speaking about the new supercomputer, NCAR Director James W. Hurrell said: "Providing next-generation supercomputing is vital to better understanding the Earth system that affects us all. We're delighted that this powerful resource is now available to the nation's scientists, and we're looking forward to new discoveries in climate, weather, space weather, renewable energy, and other critical areas of research."
"I do not fear computers. I fear lack of them." — Isaac Asimov.

It's hard to believe, but not so long ago the ability to send a text message seemed unfathomable. After decades of researching, testing, and implementing, mobile phones aren't just a part of everyday life now—they're integral. All around the world, smartphones are used to interact with people, schedule appointments, watch movies, play games, look up information, and so much more. Mobile phones are even replacing personal computers both at home and in the workplace. Unfortunately, though these devices are relatively easy to use, they are just as easily hacked into. Whether it's a high school kid spending 100% of his or her time on the phone or a CEO sending out an important work email, everyone needs to remain vigilant to keep their sensitive information out of the wrong digital hands.

First, let's define malware: software that is specifically designed to disrupt, damage, or gain unauthorized access to a computer system. Mobile phones are computer systems — and they are not as secure. Without a doubt, malware is written to cause harm and can be versatile: specific strains can do significant damage to your device's hardware and software, and to your bank account, and attacks tend to announce themselves through telltale warning signs. Noticing one or two such signs doesn't always mean your device is under attack, but it's still important to pay attention and investigate carefully to limit — and possibly reverse — hardware and software damage. A cell phone's battery life and performance will degrade as the years go by, and that's perfectly normal. But when combined with unexplained charges, pop-up ads, and new apps you did not download, you could be facing a serious malware-related problem. That's when it's time to act.

"We are giving away too much biometric data. If a bad guy wants your biometric data, remember this: he doesn't need your actual fingerprint, just the data that represents your fingerprint," said Mike Muscatel, Senior Information Security Manager at Snyder's-Lance. "That will be unique, one of a kind."

Thankfully, there are things you can do to protect your personal, professional, and financial information from mobile malware attacks and keep your phone malware free. Mobile phones aren't going anywhere, and as cyber threats continue to evolve, it's imperative that our cell phones evolve as well. Users need to do everything they can to prevent malware attacks and regularly monitor both their own devices and the devices used across their commercial network. And though organizations are wary of cybersecurity regulations, if regulation is done correctly, the digital landscape could be much more secure and harmonious down the line.

The next generation of smartphones will likely be context-aware, meaning they'll take advantage of the growing availability of embedded physical sensors and data exchange abilities. Our mobile phones will soon be able not only to better keep track of our sensitive information but to adapt to and anticipate incoming threats. The future of mobile usage can be intimidating, yes, but it is also exciting.

"This is why I loved technology: if you used it right, it could give you power and privacy." — Cory Efram Doctorow.
Embedded device security is a topic that many will dismiss in favor of more popular security concerns. I can understand this, to a certain extent, because the mainstream press and information outlets rarely cover embedded security. They are focused on the more common threats, such as the latest Microsoft patches, network worms, anti-virus software, and social networking sites that have been hacked. While these are important stories to note, pay attention to, and react to defensively, they do not cover all of the threats and associated risk for your organization. Embedded systems are in everyone's network, doing things like routing traffic and blocking packets, and in the control systems world, if Ethernet enabled, controlling critical infrastructure. Many believe that because a device is embedded, no one would bother to attack it or take the time and effort to understand how it works. This is not always the case; in fact, hackers are curious by nature, and sometimes motivated by sheer curiosity (if we're lucky). New research has been published on attacking embedded devices that has the potential to be much worse than the latest network worm or XSS vulnerability (in my opinion anyhow).

The first example comes from Graeme Neilson of Aura Software. Graeme gave a presentation at Ruxcon, a security conference held in Sydney, Australia, titled "Netscreen of the Dead: Developing A Trojaned Firmware for Juniper Netscreen Appliances". He was able to reverse engineer firmware for the Juniper Netscreen firewalls and insert programs of his own choosing, manipulating the behavior of the firewall. So what, right? If we put our "evil hats" on for a few minutes and explore the evil things an attacker can do with this functionality, we might come up with a list like this:

1) Stealthily Modify The Configuration – If you have control of the firmware, you have the ability to load what Graeme refers to as a "shadow configuration". This configuration tells the device to behave in a certain way, but leaves no evidence that it's happening. In the context of a firewall, this means you can allow an external IP address full access through the firewall and/or mirror traffic from an interface to an outgoing data stream.

2) Persistent Infection – Embedded systems have a BIOS that acts much like the one found in most PCs and servers. If you can infect the bootloader, you can then infect any subsequent firmware with your malicious programs. To put this into context, most will recommend that if you get infected with malware or viruses you should format your hard drive, re-install the operating system from clean media, and re-install all programs (verifying that they are not trojaned). By infecting the bootloader, an attacker survives all of that, because the bootloader remains the same across OS and firmware upgrades (unless you are upgrading the bootloader itself).

An additional benefit of compromising an embedded device is collecting authentication credentials. These come in two forms:

1) Usernames & Passwords – If there is one constant I have found in many different networks, it's that users re-use passwords. The sheer number of different passwords we have to keep memorized is frightening, so people often give in and use the same one for the firewall as they do for their domain login (for example). Collecting the usernames and passwords used to access devices could grant access to even more critical information.

2) "Secret" Keys – Many protocols require the use of a PSK, or Pre-Shared Key, as an authentication mechanism or encryption key. These include SNMP, IPSec, RADIUS, and many others. Often many different devices share the same key; for example, the read/write SNMP community string may be the same for all 10,000 devices in the network. Gaining access to this key then grants you control of every networking device in the environment.
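A quick audit can reveal exactly this kind of exposure. The sketch below is hypothetical: it flags SNMP community strings reused across device configurations, with device names and strings invented for illustration.

```python
# Flag SNMP community strings shared by many devices: one captured key
# would unlock every device that reuses it. Sample data only.
from collections import defaultdict

device_communities = {
    "router-01": "public",
    "switch-07": "s3cr3t-rw",
    "fw-edge": "s3cr3t-rw",
    "ap-lobby": "s3cr3t-rw",
}

devices_by_community = defaultdict(list)
for device, community in device_communities.items():
    devices_by_community[community].append(device)

for community, devices in devices_by_community.items():
    if len(devices) > 1:
        print(f"community reused on {len(devices)} devices: {', '.join(devices)}")
```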
The next embedded device security research I would like to highlight comes from none other than Felix "FX" Lindner, a security researcher with quite a bit of experience hacking embedded systems, and specifically Cisco devices. FX gave a presentation at the recent 25th Chaos Communication Congress titled "Cisco IOS attack and defense: The State of the Art". He has devised a way to exploit Cisco 1700 and 2600 series routers, based on PowerPC chips, in a reliable manner. He used techniques similar to Graeme Neilson's, reverse engineering the firmware and crash dumps to create exploits. Of course, you will need a vulnerability to exploit to make this work. FX brought up several good points about the security of embedded network devices:

1) There Is No Reverse NAC – Network Access Control (NAC) attempts to verify that the client is fit to connect to the network. There is no technology, to my knowledge, that can verify the network to the client. This means that clients will typically inherently trust the network and the devices providing them access. For example, when you connect to a wireless router, do you have a way to verify that the firmware has not been modified to steal all of your credentials?

2) Embedded Systems Have Few OS Security Controls – Most embedded systems do not have a host-based intrusion prevention system, or any way to detect an attack against them. They are usually a shared-memory environment with the concept of threads rather than processes. This means that taking over one thread gives you access to the entire operating system.

As Windows and UNIX computers become more difficult to attack due to advances in defensive technologies, attackers will continue to shift focus to embedded systems. As technology becomes smaller and just as critical as most servers in your environment, this will be a continuing concern for organizations. While the Zune failure was due to programmer error, it's a good example of how catastrophic the failure or compromise of an embedded system could be. So, what can we do to defend our embedded systems? Stay tuned for next week's blog post 🙂
With the constantly evolving Internet security threatscape, getting a grasp on the latest threats, let alone arming oneself against them, can seem overwhelming. While there are seemingly limitless best practices in regard to cybersecurity, below are several that should help reduce the likelihood of becoming a victim of cybercrime.

The OODA Loop

As stated in previous entries in this series, cybercriminals have typically held the inside edge in the race between cybercrime and cybersecurity. One of the strategies with the potential to change this losing streak is called "OODA" — Observe, Orient, Decide and Act. This acronym was a revolutionary concept created by U.S. Air Force Colonel John Boyd in the early 1960s. Colonel Boyd observed that when two adversarial forces are maneuvering, there is a tendency for one side to be constantly outmaneuvered: one side is deciding and acting before the other side can make a move. When one party gets locked into only Observe/Orient and is unable to Decide and Act, they are at the complete and utter mercy of the other party.

OODA's roots go back to the Vietnam War. The challenge was that too many American pilots were becoming casualties of poor air-to-air tactics against the smaller, more agile, and significantly less costly Russian MiG aircraft. When the U.S. Navy instituted TOPGUN to combat the MiG exchange ratio, its educational effort showed dramatic results: the exchange ratio more than tripled, from just under 4:1 to 13:1, according to Benjamin Lambeth's The Transformation of American Airpower. It's not a stretch to take these lessons learned in the air and transfer them to a different kind of battle — the one against cybercrime.

First, let's compare cybercrime and its victims to the scenario Colonel Boyd faced. Cybercriminals are inside corporate OODA loops every time they steal data. They are inside consumers' OODA loops every time an online scam or phishing attempt works. Cybercriminals are global and often well-organized, though their organizations tend to be smaller and more maneuverable than most corporations. Additionally, some criminals are sheltered by certain countries' policies and laws, or lack thereof. Their thefts fuel their home country's economy, and they aren't prosecuted if the crime is beyond the border. Combined, all of these factors allow cybercriminals to gain an advantage and outmaneuver their victims. Just as TOPGUN's education provided better decision-making skills for Navy pilots, you increase your resistance by becoming more aware of the real-world threats we face. Successful businesses employ OODA loop tactics against their competition: they are quicker off the start and are constantly crushing the market. With cybercrime, that's where we all want to be, and hopefully some of you are there right now.

If you look at where antivirus technology was versus where it is today, you can see that the industry has grown and changed tremendously. In the past, there were static signatures, which were somewhat easy to defeat over time, and they opened a "window of vulnerability" — the time from when an exploit was discovered to when a signature was created and globally distributed. Following static signatures came the heuristic analysis of applications. In the past, this method was plagued with a high number of false positives (which can be as time-consuming and disruptive as having real malware on a system).
Fast-forward to today: leveraging active/passive heuristics and static signatures for exceedingly high performance and detection with very low false-positive rates has proven to be a very successful combination. This is the best of both worlds and is able to scale with the ever-increasing prevalence of malware creation and distribution. Even with a technology such as whitelisting, there are pros and cons, and its implementation will have to be evaluated against a particular organization's model. Whitelisting, while requiring fewer updates than traditional antivirus signatures, requires constant maintenance and querying of an ever-growing database of "allowed" applications, as well as their patches, updates and hotfixes, transferring the burden of analysis from antivirus companies' malware researchers to system administrators. Once an application is determined to be legitimate, it is allowed to run on the host system. If the application in question is instead malicious, then effective (active) heuristic analysis will be able to determine the application's intentions and flag it as malicious.

The Future of Antivirus

What we are seeing today is the convergence of several solutions into comprehensive security packages that address multiple security issues — including malware. Security and antivirus have historically been afterthoughts in the development of applications and operating systems. Today, application and operating system vendors are taking a more active role in securing their products — but we still have quite a distance to travel. With the number of mergers and acquisitions among antivirus vendors over the last few years, one can clearly watch the antivirus landscape morph into different models and meta-solutions. I see antivirus not as dead or dying, but as changing to meet threats from vectors that were not viable at the beginning of the antivirus industry.

While none are a panacea for every cybercrime woe, there are some easy rules to follow to help ensure a good layer of online protection:
- Use strong passwords. It's a lot harder for a criminal to steal your information if they can't get through the front door.
- Keep systems updated and patched. This pertains to applications as well as operating systems and security software.
- Accept that the risk from Internet-connected machines will never be 0%. The realistic goal is to reduce the risk to an acceptable level.
- If you are sent a link or attachment (via email, instant message and so forth), verify it with the sending party. It takes a moment to check — but it may take hours or days to clean an infected system.
- Use a residential broadband gateway router between your computer and your broadband provider's modem to break the direct link the Internet has to your home computer.
- Periodically test your backups by restoring them.

While most of the above practices can also be applied to business computing, the larger number of people involved (and therefore decreased security) calls for additional guidelines for businesses:
- Simplify security for the end users. The more complex it is, the less inclined users are to use it.
- Keep systems updated (patched). This includes applications as well as operating systems.
- Partner with the government and academia.
- Educate end users, and make this an ongoing process.
- Inventory assets. Know what's on your network.
- Use business assets for business only. By doing this in conjunction with an effective policy (and enforcement), the risk level can be reduced dramatically.
- Run network audits regularly (log files, anomalous traffic, etc.).
- Hire a security firm to help secure your business.

With this basic outline in place, next week's piece, the final one of the series, will look at the resources available to guide you along the path to a safer online existence.

Jeff Debrosse is the North American research director at ESET.
Artificial Intelligence (AI) is redefining industries by offering personalization, automating processes, and disrupting how we work. In modern times, AI is embraced by every industry from healthcare to government. Here are the 10 industries where AI has caused a disruption.

copyright by www.analyticsinsight.net

1. Agriculture
The most popular applications of AI in agriculture range from robotics to crop and soil monitoring to predictive analytics. Agriculture majors are developing autonomous robots programmed to handle routine agricultural tasks, such as crop harvesting, at a higher volume than human labourers. In crop and soil monitoring, AI deploys computer vision and deep learning algorithms to process data captured by drones and/or software-based technology to monitor crop and soil health. Predictive analytics driven by machine learning models is being developed to track and predict the impact of erratic weather changes on crop yield.

2. Call Centres
Call centres are witnessing a revolutionary change with the development of bots and automated messaging; this is often thought to be one of the industries most at risk of becoming obsolete in a world of AI. Call centres are an important link between businesses and customers for customer service and product offerings. AI software has been developed to listen to calls and analyze their impact on customers' buying behavior and shopping experience. Automated calls driven by bots and chatbots may lead to an increase in customer loyalty in the future, and they may even be programmed to smooth the situation if a customer gets upset.

3. Energy & Mining
AI can be deployed in smart electric grids to make them more efficient at delivering energy and to predict when batteries and other equipment will fail. AI implementation will make energy exploration an easier and more economical task. AI and its subset, machine learning, are trends that will disrupt the energy industry, leaving massive opportunities for savings. Business majors like General Electric are looking to use AI to optimize how electricity flows out of batteries and into points of consumption. According to Bloomberg News reports, AI in energy and mining could eventually save $200 billion globally.

4. Healthcare
Healthcare is a sector where AI has endless possibilities, from user-friendly bots and chatbots assisting patients in their health diagnosis to robots performing operations with precision. AI is currently used in the healthcare industry to identify high-risk patient groups, predict diseases, increase the speed and accuracy of treatment, and automate diagnostic tests. AI has huge potential to improve drug formulations, predictive care, and DNA analysis, all of which can positively impact the quality of healthcare and human lives.

5. Intellectual Property
In the global innovation economy driven by AI and allied technologies, demand for intellectual property (IP) titles (patents, trademarks, industrial designs, copyright) is rapidly increasing and becoming more complex. AI, big data analytics and new technologies such as blockchain have huge potential to address the growing challenges facing IP offices. Copyright is an important IP asset for AI: it protects the technology product (code and data) from unauthorized use and reproduction, alongside digital locks.
An IP strategy for AI systems will layer IP rights to protect different aspects of information, and business enterprises can clearly define and protect their IP with registrations and digital documentation.
Security experts estimate that Conficker, a particularly malicious worm targeting MS Windows, has already infected more than 7 million computers around the world. Conficker was, without doubt, the most significant piece of malware active throughout 2009, not just because of the media attention it attracted or the number of computers infected worldwide, but also because it represented a leap back in time to the era of massive virus epidemics. More than a year has passed since Conficker first appeared, yet it is still making the news. The patch for the vulnerability exploited by Conficker was published by Microsoft in October 2008, yet more than a year later, Conficker continues to infect computers, using many advanced malware techniques and exploiting the Windows MS08-067 service vulnerability.

The spread of Conficker impacted all types of institutions and organisations. Victims included the British military; Ealing Council, whose entire IT network was disabled for four days; and the Sheffield NHS Trust, where 800 computers were infected; as well as numerous other companies and organisations worldwide. Microsoft even offered a reward of $250,000 to anyone providing information that led to the arrest and conviction of the creators of this malware.

The Conficker worm, by nature a particularly damaging strain of malware, launches brute-force attacks to extract passwords from computers and corporate networks. The easier the password, the easier it is for Conficker to decipher it. Once the passwords are discovered, cyber criminals can then access computers and use them for their own ends.

So why is this still happening? Principally, because of the worm's ability to propagate through USB devices. Removable drives have become a major channel for the spread of malicious code, due to the increasing use of memory sticks and portable hard drives to share information in households and businesses. After an infected USB drive is inserted into an unpatched machine, Conficker can bypass the computer's security and, by impersonating an administration account, drop a file on the system; it will also try to add a scheduled task to run that file. Another reason for the longevity of this worm is that many people use pirated copies of Windows and, for fear of being detected, avoid applying the security patches published periodically by Microsoft. In fact, Microsoft allows unrestricted application of critical updates, even on non-legitimate copies of its operating system.

Nowadays, most companies have perimeter protection (firewalls, etc.), but this does not prevent employees from bringing their memory sticks to work, connecting them to a workstation and spreading the malicious code across the network. As this worm can affect all types of USB devices, MP3 players, mobile phones, cameras and other removable devices are also at risk.

So what can users do to mitigate this threat? Users should first apply the patch that fixes the security issue that lets the Conficker worm spread through the Internet (MS08-067); they then need other solutions, such as a USB vaccine that protects not just the computer but also the USB device itself.
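Because weak passwords are exactly what Conficker's dictionary attack exploits, a simple internal password audit is a useful extra step. The sketch below is purely illustrative: the word list is a tiny stand-in for the kind of dictionary such worms carry, and the account data is invented (a real audit should never handle plaintext passwords this way).

```python
# Flag accounts whose passwords a worm-style dictionary attack would crack.
WEAK_PASSWORDS = {"password", "123456", "admin", "letmein", "qwerty"}

accounts = {  # invented sample data
    "jsmith": "Summer2010",
    "backup": "admin",
    "svc_print": "123456",
}

for user, pwd in accounts.items():
    if pwd.lower() in WEAK_PASSWORDS or len(pwd) < 8:
        print(f"{user}: weak password - rotate it before a worm finds it")
```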
A security solution that is regularly updated and active should be enough to protect against Conficker and its variants, but organisations should also habitually scan for vulnerable machines, disinfect infected machines using updated and active antivirus on both networked and stand-alone PCs, and make sure their antivirus and security solutions are on the latest version and signature database. It is important to note that simply asking people to use a security solution will not put a halt to the problem. Making users aware of the threats, teaching children at school how to use technology safely and responsibly, and ensuring they keep privacy in mind are equally important. Many users are unaware of the dangers and live under the perception that the digital world is secure; as we know, that is not the case. Preventative measures must also come from the top down: legislating against, pursuing and punishing those that benefit from cybercrime, and protecting critical infrastructure.

Panda Security is exhibiting at Infosecurity Europe 2010, the No. 1 industry event in Europe, held on 27th – 29th April in its new venue, Earl's Court, London. The event provides an unrivalled free education programme, with exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk
Imaging studies reveal overactivation of subcortical brain structures in response to direct gaze.

Individuals with autism spectrum disorder (ASD) often find it difficult to look others in the eyes. This avoidance has typically been interpreted as a sign of social and personal indifference, but reports from people with autism suggest otherwise. Many say that looking others in the eye is uncomfortable or stressful for them – some will even say that "it burns" – all of which points to a neurological cause. Now, a team of investigators based at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital has shed light on the brain mechanisms involved in this behavior. They reported their findings in a Scientific Reports paper published online this month.

"The findings demonstrate that, contrary to what has been thought, the apparent lack of interpersonal interest among people with autism is not due to a lack of concern," says Nouchine Hadjikhani, MD, PhD, director of neurolimbic research in the Martinos Center and corresponding author of the new study. "Rather, our results show that this behavior is a way to decrease an unpleasant excessive arousal stemming from overactivation in a particular part of the brain."

The key to this research lies in the brain's subcortical system, which is responsible for the natural orientation toward faces seen in newborns and is important later for emotion perception. The subcortical system can be specifically activated by eye contact, and previous work by Hadjikhani and colleagues revealed that, among those with autism, it is oversensitive to effects elicited by direct gaze and emotional expression. In the present study, she took that observation further, asking what happens when those with autism are compelled to look in the eyes of faces conveying different emotions. Using functional magnetic resonance imaging (fMRI), Hadjikhani and colleagues measured differences in activation within the face-processing components of the subcortical system in people with autism and in control participants as they viewed faces either freely or when constrained to viewing the eye region. While activation of these structures was similar for both groups during free viewing, overactivation was observed in participants with autism when they concentrated on the eye region. This was especially true with fearful faces, though similar effects were observed when viewing happy, angry and neutral faces.

The findings of the study support the hypothesis of an imbalance between the brain's excitatory and inhibitory signaling networks in autism – excitatory refers to neurotransmitters that stimulate the brain, while inhibitory refers to those that calm it and provide equilibrium. Such an imbalance, likely the result of diverse genetic and environmental causes, can strengthen excitatory signaling in the subcortical circuitry involved in face perception. This in turn can result in an abnormal reaction to eye contact, an aversion to direct gaze and, consequently, abnormal development of the social brain. In revealing the underlying reasons for eye avoidance, the study also suggests more effective ways of engaging individuals with autism. "The findings indicate that forcing children with autism to look into someone's eyes in behavioral therapy may create a lot of anxiety for them," says Hadjikhani, an associate professor of Radiology at Harvard Medical School.
"An approach involving slow habituation to eye contact may help them overcome this overreaction and be able to handle eye contact in the long run, thereby avoiding the cascading effects that this eye avoidance has on the development of the social brain."

The researchers are already planning to follow up on this work. Hadjikhani is now seeking funding for a study that will use magnetoencephalography (MEG) together with eye-tracking and other behavioral tests to probe more deeply the relationship between the subcortical system and eye contact avoidance in autism.

Source: Terri Ogan – Mass General

Original Research: Full open access research for "Look me in the eyes: constraining gaze in the eye-region provokes abnormally high subcortical activation in autism" by Nouchine Hadjikhani, Jakob Åsberg Johnels, Nicole R. Zürcher, Amandine Lassalle, Quentin Guillon, Loyse Hippolyte, Eva Billstedt, Noreen Ward, Eric Lemonnier & Christopher Gillberg in Scientific Reports. Published online June 9, 2017. doi:10.1038/s41598-017-03378-5
A team of researchers affiliated with several institutions in the U.S. has identified circadian rhythm patterns in human skin based on genetic biomarkers. In their paper published in Proceedings of the National Academy of Sciences, the group describes how they obtained skin samples from multiple volunteers over time, and what they discovered after conducting a genetic analysis.

Most people have heard of the circadian rhythm – the internal clock that regulates sleepiness and wakefulness. It generally oscillates over an approximately 24-hour period. Researchers have been studying the circadian clock because certain medicines work better or worse during different parts of the cycle. It has also been found that there are optimal times during the cycle for carrying out surgical procedures. In this new effort, the researchers found a way to track the circadian rhythm in people by taking skin samples every few hours and looking at gene expression markers.

To find out if the skin could be used to map the circadian rhythm of a given individual, the researchers collected skin samples every six hours from 19 volunteers over a 24-hour period. Each of the samples was tested for gene expression markers. They found 110 genes whose expression varied in rhythmic patterns throughout the day, and noted that the rhythms followed a bimodal distribution, with peaks in the morning and the evening.

The researchers then collected skin samples from 219 volunteers just once, at random points throughout a given day. They compared gene expression in those samples to the samples they had studied earlier and found a correlation between the two groups. Next, they used a tool called CYCLOPS to rebuild the temporal order of the new samples, identifying 188 genes that were expressed rhythmically in the skin. A similar experiment on mice showed patterns resembling those in the human volunteers.

The researchers suggest their findings indicate that skin sampling could be used to develop biomarkers for mapping out the circadian clock in individuals – a much less cumbersome process than the current standard, the dim-light melatonin-onset assay.

More information: Gang Wu et al. Population-level rhythms in human skin with implications for circadian medicine, Proceedings of the National Academy of Sciences (2018). DOI: 10.1073/pnas.1809442115, https://www.biorxiv.org/content/early/2018/04/16/301820

Journal reference: Proceedings of the National Academy of Sciences

Provided by: Science X Network
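The study's temporal reconstruction relied on CYCLOPS, but the more basic step of testing whether a gene's expression is rhythmic can be illustrated with a simple cosinor fit. The sketch below is a toy illustration, not the paper's pipeline: it fits a 24-hour cosine to values sampled every six hours (matching the study's sampling design), and the expression values themselves are invented.

```python
import numpy as np

def cosinor_fit(t_hours, expression, period=24.0):
    """Least-squares fit of y = mesor + a*cos(wt) + b*sin(wt)."""
    w = 2 * np.pi * np.asarray(t_hours, dtype=float) / period
    design = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(design, np.asarray(expression, dtype=float),
                               rcond=None)
    mesor, a, b = coef
    return mesor, float(np.hypot(a, b))  # baseline level, rhythm amplitude

t = np.array([0, 6, 12, 18, 24, 30])           # sampled every 6 h
y = np.array([5.0, 7.8, 5.1, 2.3, 5.2, 7.7])   # invented expression values
mesor, amplitude = cosinor_fit(t, y)
print(f"mesor {mesor:.2f}, 24-hour amplitude {amplitude:.2f}")
```

A large fitted amplitude relative to the residual noise is what would flag a gene as a candidate rhythmic biomarker.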
Researchers at the University of Virginia School of Medicine have identified an unexpected contributor to rheumatoid arthritis that may help explain the painful flare-ups associated with the disease. The discovery points to a potential new treatment for the autoimmune disorder and may also allow the use of a simple blood test to detect people at elevated risk for developing the condition.

The promising discovery is among the first to emerge from the School of Medicine's new affiliation with Inova Health, a collaboration that aims to make medical breakthroughs and advance the battle against disease. In this case, the arthritis discovery originated in the lab of UVA School of Medicine's Kodi Ravichandran and was facilitated by combining his team's resources and expertise with those of Inova researcher Thomas Conrads through a THRIV UVA-Inova seed grant.

Understanding Rheumatoid Arthritis

The new findings about rheumatoid arthritis came in an unexpected fashion. Sanja Arandjelovic, a research scientist in the Ravichandran group, was seeking to better understand what causes the inflammation associated with inflammatory arthritis when she noted that deleting a gene called ELMO1 alleviated arthritis symptoms in mice. This was particularly surprising because Arandjelovic and Ravichandran initially thought that loss of ELMO1 would result in increased inflammation.

"This was a complete surprise to us initially," recalled Ravichandran, chairman of UVA's Department of Microbiology, Immunology and Cancer Biology. "I love those kinds of results, because they tell us that, first, we did not fully comprehend the scientific problem when we began exploring it, and, second, such unexpected results challenge us to think in a different way. Given that rheumatoid arthritis affects millions of people worldwide, we felt the need to understand this observation better."

Digging deeper into the unusual outcome, the researchers determined that ELMO1 promotes inflammation via its function in white blood cells called neutrophils. Ravichandran described neutrophils as the body's "first line of defense" because they sense and respond to potential threats. "Normally they are good for us, against many bacterial infections," he said. "But also, there are many times when they produce a lot of friendly fire that is quite damaging to the tissues – when they hang around too long or there are too many neutrophils coming in – in this case, infiltrating the joints during arthritis."

The researchers also discovered that there is a natural variation in the ELMO1 gene that can prompt neutrophils to become more mobile, with the potential to invade the joints in greater numbers and induce inflammation. (The potential blood test would detect this variation.)

Here things take a particularly cool turn: normally, doctors are reluctant to try to block the effect of genes like ELMO1 in people, because such genes can play diverse roles in the body. But Ravichandran believes that ELMO1 is different. "ELMO1 partners with a very specific set of proteins only in the neutrophils but not in other cell types we tested," he said. "So presumably, you may be able to affect only a select cell type."

This latter result came from a collaborative study in which Conrads' group at Inova performed sophisticated analysis of ELMO1 proteomic partners in neutrophils, many of which also have previously known links to human arthritis. This provided further validation for the role of ELMO1 in rheumatoid arthritis.
Encouragingly, blocking ELMO1 in lab mice alleviated arthritis inflammation without causing other problems, Ravichandran noted. His laboratory is now seeking to identify drugs that could inhibit the function of ELMO1 and is also designing a test for the variation (also called a polymorphism) in the ELMO1 gene.

"This is another example of how fundamental basic research can lead to novel discoveries on clinically relevant problems that affect a large number of people," Ravichandran said.

The researchers have published their findings in the scientific journal Nature Immunology.

More information: Sanja Arandjelovic et al. A noncanonical role for the engulfment gene ELMO1 in neutrophils that promotes inflammatory arthritis, Nature Immunology (2019). DOI: 10.1038/s41590-018-0293-x

Provided by University of Virginia
Computers are one of the greatest inventions of our time, but they are also one of the most frustrating when they aren't working the way you need them to. One of the most common reasons computers start to misbehave is a virus. So how do you know if your computer has a virus? These are four of the most common signs. If you experience one of these, you should contact a certified Nerd to remove it, and our computer repair shop sure is a great place to find one!

Computer Takes a Long Time to Start and/or Run Applications

Most viruses make your computer run slowly. If you are completing basic tasks and you have to wait for what feels like forever, it is a good sign your computer has a virus. A healthy computer shouldn't make you wait.

Random Ads or Messages on the Screen

Spyware is a popular type of virus that is designed to steal your sensitive data without you even knowing about it. It does this by way of pop-ups that appear to be ads or important messages, but when you click on them they will ask for more information from you. The worst part is, this type of virus rarely works alone: if you are seeing this, there are probably even more destructive side effects lurking in the background.

You Keep Being Told You Are Running Out of Disk Space

The message seems simple enough, and you do a lot on your computer, so running out of space doesn't seem that out of the ordinary. However, it is important to know that many viruses work by filling up your disk space, which causes your computer to crash. If you see the message "You are running out of disk space" and you haven't done anything unusual lately, you might want to get your computer checked.

There Are Changes on Your Computer You Didn't Make

Whether it's a new toolbar showing up, an updated home screen, or extra additions on your desktop, if anything changes on your computer and you didn't make the changes, there is a good probability you have a virus.

While viruses are certainly annoying, the good news is that if they are caught early enough, they can usually be rectified without serious repercussions. Now that we've given you some Nerdy tips that indicate your computer may be infected, you can get the professional help you need. If you recognize one of the signs your computer has a virus, or you just aren't sure what's going on, contact NerdsToGo today.
What does single pair Ethernet bring to the IIoT?

Networking in the IIoT

The IEEE standard for single pair Ethernet (SPE) was born in the automotive sector to reduce the weight of cables in a vehicle. It is now becoming hugely popular in the industrial sector, where cable weight is far from the most important feature. According to George Zimmerman, chair of the Ethernet task force, describing single pair Ethernet isn't difficult. Explaining why it is important is a little more complex.

The name really says it all: SPE is Ethernet over a single pair of twisted conductors. All other Ethernet connections have multiple pairs of conductors, and the reason for that is largely legacy. Running Ethernet over a single pair isn't technically new, but using cables with just one pair is new, and it is significant for many reasons.

It really comes down to sending and receiving simultaneously. In the very early days, it was difficult to do that over a single pair, so Ethernet used two pairs, one to transmit and one to receive. Very high-speed Ethernet still makes use of those multiple pairs to split the bandwidth between them, not to transmit and receive separately.

Echo cancellation was the signal processing breakthrough that enabled simultaneous transmit/receive over the same conductors. Moore's Law enabled echo cancellation to be integrated in a smaller area using lower power, and that is what really makes it viable. It isn't that the task has become more complicated; it is just that we can now fit the processing power needed to do it into a smaller area. In fact, according to Zimmerman, echo cancellation is probably simpler now than in those early days, because the conductors, connectors and other electromechanical elements are of higher quality and the signal-to-noise ratio has improved. That makes the task of echo cancellation a little simpler, but that is perhaps supplementary to the real point.

The point is that SPE isn't significant because it is enabled by a new technology breakthrough; it is significant because it breaks the mold. It means networks can no longer assume there will be multiple pairs available. In the industrial domain, it is significant because OEMs already prefer single pair connections. Often, they will be running other, sometimes proprietary protocols. Now, they can run Ethernet over those same connections, and that really is a game changer.

Time sensitive networking

Ethernet is also used for industrial control in time sensitive networking (TSN). Zimmerman is also chair of an effort looking at the toolset for TSN that is part of IEEE 802.3. This includes the new 10BASE-T1 PHYs for TSN and looks at the future of long-reach, point-to-point single pair Ethernet. The driver here is the Industrial IoT (IIoT), closely tied in with time sensitivity. This is specifically relevant when looking at aspects such as latency in servo motor control.

While TSN is also important in automotive applications, the weight savings made by moving to SPE aren't the biggest draw in the industrial world. In fact, in a vehicle, the length of the SPE cable – or reach – is relatively short, around 15 meters using a thin gauge wire. In an operational technology (OT) application, the reach could be a kilometer and the gauge much thicker – as much as a millimeter in cross section. This cross section is not that different from a multiple pair cable using a smaller gauge. The practical gains come from only having to connect a single pair, rather than multiple pairs.
This makes installation and maintenance simpler. The performance gains come from being able to run anything that is designed for Ethernet over that single pair. This could remove the need for industrial gateways designed to take other protocols and convert them into a frame that can be sent over Ethernet for the backhaul. It simplifies the entire network by removing the complexity of handling multiple protocols.

For OEMs, this means they can standardize on a technology that is familiar to many more engineers: Ethernet. By reducing the complexity of the underlying transport layer, those engineers can focus on the aspects that differentiate and add value, such as implementing TSN.

Networking in the IIoT

Another important element driving the adoption of SPE is its ability to support shared access through multidrop technology. This is a departure from traditional point-to-point connections and, as Zimmerman pointed out, something Ethernet was originally intended to support. It does this through a technology called carrier sense multiple access with collision detection (CSMA/CD). "10 Mbit single pair Ethernet has a version we call multidrop, which is a return to shared media connectivity," Zimmerman said.

The technology was developed to replace controller area networks in vehicles. It is now being used in the industrial world as a backplane technology to connect equipment. It simplifies the implementation of networking in an OT environment, making it both easier and cheaper thanks to the large ecosystem that already exists for Ethernet.

Power over SPE

Another important development involves delivering power with data over the same conductors. The IEEE 802.3cg specification supports 10 Mb/s for OT, while single pair power over Ethernet (SPoE) provides between 1.23 W and 52 W over distances of up to 1 km. "Power over wired Ethernet is hugely important. That's not just SPoE, or single pair power over Ethernet, but also PoDL or power over data lines. It's an enabler," Zimmerman said.

With so much focus on low-power applications and the use of battery or even alternative renewable sources of energy, adding power to data is really what will keep wired connectivity in the engineer's toolbox. Zimmerman, who is an independent consultant on high-performance communications technology and solutions, specializing in wireline communications, added: "In the past five to 10 years, around 40% of my work has been on power technologies and power over Ethernet."

In this respect, the standard offers a lot of flexibility to ensure enough power can be delivered over the single pair. "The standard specifies the maximum loop resistance," Zimmerman said. This means the wire gauge can be selected to overcome any voltage drop incurred because of resistance, in relation to the power being delivered. "The improvement we're seeing in energy efficiency for end devices is driving an explosion in the number of things you can power over these network wires, all at the same time."

Zimmerman went on to explain that we are unlikely to see single pair and multipair Ethernet in the same transceiver, at least for a while. "They are designed to be two parallel ecosystems." It is unlikely that users would want to move between them physically too often, but you will be able to transfer data over everything, because it's all Ethernet. End users are unlikely to replace multipair Ethernet cabling with single pair without good reason.
But the Industrial IoT is still developing, making SPE a strong contender for the position of dominant wired standard going forward. Zimmerman explained one of the reasons for this: in the OT world, there are already networks using slower protocols to carry Ethernet data, but they are not really Ethernet networks. They often have specific requirements, such as safety in process control environments. SPE can meet these requirements. "We did a lot of work designing 10BASE-T1L so that it would be compatible with the voltage levels required for intrinsic safety in process control environments," Zimmerman said.

While SPE is not likely to displace any existing technologies without good reason, it will be the only technology suitable for some emerging applications, particularly in the IIoT. That, alone, will guarantee its success.
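To put a number on the loop-resistance point Zimmerman raises, here is a rough back-of-the-envelope sketch. This is not the 802.3cg power-classification math: the wire resistance is an assumed typical value for roughly 18 AWG copper, and the load current is invented.

```python
def loop_drop_volts(current_a, length_m, ohms_per_km_per_conductor):
    """Voltage lost in the cable. Current flows out on one conductor and
    back on the other, so the resistive path is twice the cable length."""
    r_loop = 2 * (length_m / 1000.0) * ohms_per_km_per_conductor
    return current_a * r_loop

# Assumed figures: ~18 AWG copper (~21 ohm/km per conductor),
# a 1 km run, and 0.3 A of load current.
drop = loop_drop_volts(0.3, 1000, 21.0)
print(f"voltage lost in the pair:  {drop:.1f} V")        # about 12.6 V
print(f"power burned in the cable: {0.3 * drop:.1f} W")  # about 3.8 W
```

The thicker the gauge (fewer ohms per kilometer), the smaller the drop, which is exactly why the standard leaves wire gauge as the installer's lever for long runs.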
Active learning focuses on how students learn, rather than just what they learn. By engaging students in the learning process, universities can reduce failure rates across STEM subjects, as well as upskilling our future workforces. Here we will look at how integrating active learning strategies into course structures can offer a range of benefits, from encouraging student engagement to arming students with transferable skills that will be vital to their future careers.

The benefits of active learning

Studies have shown that active learning in classes can improve exam results by around 6%. Furthermore, students who learn through traditional lectures are 1.5 times more likely to fail compared to those who have participated in active learning. That said, expectations of the next generation entering the workplace focus not only on educational results, but also on the transferable skills acquired during their time at university.

Utilising technology is key to encouraging active learning and preparing students for the workplace. Here we look at five active learning strategies which can develop core skills such as effective communication, collaborative working, problem solving, data analysis and research skills.

Investing in virtual laboratories

Virtual laboratories give students the opportunity to apply course concepts to new situations and contexts, as well as develop data analysis skills. Research has found that combining the use of a virtual laboratory with teacher-led learning increased learning effectiveness by 101%. Virtual laboratories are in use at world-class universities: Harvard, MIT, the University of Hong Kong and Stanford are already using lab simulators to support open-ended investigations.

The technology gives students the opportunity to perform experiments that may be too dangerous or impossible to perform in a real lab. It can also save an institution money, as students can practice experiments before using real resources. Opening up more opportunities for all students to experiment with fewer limitations in turn improves experimental and analytical skills.

Using interactive projectors effectively

Interactive projectors and interactive flat panel displays (IFPDs) have been found to have a positive impact on students' motivation, engagement and self-esteem. By using an IFPD in an interactive way, rather than just as a way of transmitting information to students, institutions will achieve the best results.

An IFPD is an ideal tool for research, or it can be used as a central hub for brainstorming. Asking students to work collaboratively during seminars to complete games or tasks related to their subject of study will facilitate deep learning and promote knowledge retention. Students can also put together their own lessons or presentations and lead a lecture. Embedding video, images, and links to further reading or tools enhances peer-to-peer learning. This approach encourages real-life research skills and prepares students for the challenges of professional life.

Employing the Think-Pair-Share approach

Think-Pair-Share is used at the University of California, Berkeley to activate students' prior knowledge and encourage them to share what they know with their peers. Working in this way helps students organise their ideas in their own minds first, before sharing with others in a group.
How it works: ask students to Think individually about the question or idea that they'd like to propose. Then, Pair them up with someone to discuss their thinking. Finally, give them the opportunity to Share their conversation and debate within small groups before presenting to the wider group.

When facilitating a whole-group discussion, ask students to expand their thinking by supporting their thoughts with evidence or further explanation. Ask questions that encourage students to think about the wider context of the subject, such as: what makes you think that? Can you give me an example? Not only does this strategy encourage collaborative working and analytic thinking – both essential transferable skills for a range of careers – it also gives students confidence in presenting.

Using case studies

Setting students a real-life, current issue to tackle encourages them to explore case studies and relevant sources to find a solution. The research skills they will develop are essential in a range of career roles. What's more, by using a real-life scenario, students get into the headspace of an employee in that field. Ask students to work in small research groups before presenting their findings to the rest of the group. Working in this way bridges the gap between theoretical concepts and practice, as well as promoting active learning. Students will also develop key skills such as effective communication, group working, presentation skills and problem-solving. Research has shown that learning through case studies increases student motivation and the desire to expand their knowledge in their subject area.

Making lectures interactive

Reflect back on your time at university and you will likely remember lectures as dull hours spent being talked at. By breaking up lectures with interactive elements, lecturers can drastically improve the learning experience. Best of all, this can be achieved with minimal technology. Many universities have clickers installed in their lecture theatres; lecturers can make best use of these by posing multiple-choice questions to the class. University College London (UCL) is one institution making good use of interactive technology in lectures through free apps such as Socrative, which lets students record their answers on smartphones or tablets. The results can then be shared with the class.

A competitive advantage

For recent graduates, job competition is strong, so arming them with strong transferable skills will give them a competitive advantage. Graduates fresh from university and armed with the relevant skills on the job spec checklist is a win-win for both parties. For an entry-level role, the high number of candidates applying for an opening is a great opportunity for businesses to secure the best possible graduates.

Katy Crouch, Marketing Executive at Selesti
A particular location where a vulnerability could be found, such as an IP address, a web server, or a source code file.

A container for related data and projects. A business unit can represent a company, a department or business unit, or something as specific as an individual application or network.

A list of items that must be followed throughout the course of a project.

The association of findings belonging to a specific vulnerability with a Resolve master finding.

Common Platform Enumerations (CPE). For more information, see https://nvd.nist.gov/products/cpe.

Common Vulnerabilities and Exposures (CVE). For more information, see https://cve.mitre.org/.

Common Vulnerability Scoring System (CVSS). For more information, see https://www.first.org/cvss/.

Common Weakness Enumeration (CWE). For more information, see https://cwe.mitre.org/.

A container for data imported from a scanning or testing tool.

A file related to a project, such as a report or scope information.

The act of taking advantage of a vulnerability.

A single occurrence of a detected vulnerability on a particular asset.

Global instance: the first published instance in a set of duplicates.

Duplicate instance: an instance that has already been discovered before, paired with a global instance.

An area in a Resolve workspace that contains an organized list of findings.

A construct used by Resolve to link a finding to a master finding.

A container for instances belonging to a particular combination of asset and master finding.

An instance created manually instead of automatically imported from scan data.

A generic vulnerability write-up that crosses all workspaces, projects, and organizations. A master finding contains all of the relevant information about a vulnerability without being specific to any asset or environment.

Master finding variation: a component of a master finding that determines the information associated with a finding, such as the vulnerability description, business impact, instructions, and references.

National Institute of Standards and Technology (NIST). For more information, see https://www.nist.gov/.

National Vulnerability Database (NVD). For more information, see https://nvd.nist.gov/.

A container for data and information related to penetration tests and vulnerability scans. This includes data sources, assets, checklists, documents, and workspaces.

Open Web Application Security Project (OWASP). For more information, see https://www.owasp.org.

A list of questions used to identify key information about the project, such as what needs to be scanned or tested.

The potential loss or damage resulting from a vulnerability being exploited.

The intent to cause harm or damage to an asset.

A confirmation of a vulnerability fix.

See Master finding variation.

Evidence that a vulnerability exists on an asset as described by a reported instance.

A security flaw found on an asset.

A data container used to review, manage, and update findings.
It was a long journey: the IETF had been analyzing proposals for TLS 1.3 since April 2014, and the final release is the result of work on 28 drafts. The TLS protocol was designed to allow client/server applications to communicate over the Internet in a secure way, preventing message forgery, eavesdropping, and tampering.

One of the most important changes introduced with TLS 1.3 is that it deprecates old cryptographic algorithms entirely. This is the best way to prevent the exploitation of vulnerabilities that affect the protocol and that can otherwise be mitigated only when users implement a correct configuration. In the last few years, researchers have discovered several critical issues in the protocol that have been exploited in attacks.

In February, the OpenSSL Project announced support for TLS 1.3 when it unveiled OpenSSL 1.1.1, which is currently in alpha.

One of the most debated problems when dealing with TLS is the role of so-called middleboxes: many companies need to inspect traffic for security purposes, and TLS 1.3 makes that very hard.

"The reductive answer to why TLS 1.3 hasn't been deployed yet is middleboxes: network appliances designed to monitor and sometimes intercept HTTPS traffic inside corporate environments and mobile networks. Some of these middleboxes implemented TLS 1.2 incorrectly and now that's blocking browsers from releasing TLS 1.3. However, simply blaming network appliance vendors would be disingenuous." reads a blog post published by Cloudflare in December that explained the difficulties of mass deploying TLS 1.3.

According to tests conducted by the IETF working group in December 2017, there was around a 3.25 percent failure rate for TLS 1.3 client connections.

(Security Affairs – TLS 1.3, hacking)
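For readers who want to see which protocol version their own servers negotiate, the small sketch below uses Python's standard ssl module; TLS 1.3 support requires a Python build linked against OpenSSL 1.1.1 or later. The hostname is a placeholder.

```python
import socket
import ssl

def negotiated_tls_version(host, port=443):
    """Open a TLS connection and report the version the server negotiated."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.3' or 'TLSv1.2'

print(negotiated_tls_version("www.example.com"))  # placeholder host
```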
IBM plans to spend $300 million this year to build 13 "cloud computing" data centers where businesses can store information for quick retrieval in case their computer systems are destroyed in a disaster. Cloud computing refers to services accessed via the Web that seem to exist in a cloud over the Internet.

The computing giant, which will unveil the plan on Wednesday, is building the sites in 10 countries. IBM has so far rolled out the cloud-computing data recovery technology to fewer than five of its 154 existing data centers, the oldest of which was built more than 40 years ago.

The technology encrypts data on customers' computers and automatically sends it to IBM's cloud computing center over the Internet. If a customer's computer breaks down or a data center is destroyed, lost data can be restored via the Web in two to six hours, IBM Vice President Mike Riegel said in an interview. Older technology known as "data mirroring" is far more expensive than cloud-computing technology and relies on two sets of data in two locations, but it also allows systems to be restored in less than an hour, he said.

IBM's rivals in this area include HP and privately held SunGard.

(Reporting by Jim Finkle. Editing by Braden Reddall.)
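The encrypt-then-transmit pattern Riegel describes can be sketched in a few lines. The snippet below is a generic illustration, not IBM's product code: it uses the third-party cryptography package, and the local file write stands in for the vendor's replication service.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()   # in practice this would live in a key vault
cipher = Fernet(key)

plaintext = b"critical business records"   # stand-in for real customer data
ciphertext = cipher.encrypt(plaintext)

# The write below stands in for shipping ciphertext to a remote recovery
# site over the Internet; only encrypted bytes ever leave the machine.
with open("offsite_copy.bin", "wb") as f:
    f.write(ciphertext)

# Recovery: fetch the ciphertext back and decrypt with the same key.
with open("offsite_copy.bin", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == plaintext
```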
Crypto-malware is a malware infection that enables an attacker to carry out a cryptojacking campaign, using someone else's server or computer to mine cryptocurrency. CryptoLocker is often cited as an example of crypto-malware, although it is ransomware spread by email attachments: it searches for and encrypts essential files and data on the infected computer.

Cryptomining, Cryptojacking, and Other Crypto-malware Terms Explained

Crypto mining is the process of producing new units of a cryptocurrency. It is more than just creating coins: mining also validates transactions on the cryptocurrency's network. This activity is legal and is rewarded with payment in cryptocurrency.

Cryptocurrency, often known as crypto, is any currency that exists digitally or virtually and uses cryptography to secure transactions. Cryptocurrency is decentralized and protected by encryption, meaning no central authority manages it. The most popular cryptocurrency is bitcoin.

Mining cryptocurrency through the unauthorized use of an organization's computing resources is a form of cybercrime. Its objective is also profit, but it is carried out entirely in secret, hidden from the victim.

Cryptocurrencies operate on a distributed database called a 'blockchain', which is routinely updated with information about all the transactions that have occurred since the last update. Each set of recent transactions is bundled into a 'block' using a complex mathematical procedure.

The importance of cryptocurrency is growing, and some major companies now accept digital coins. Crypto-malware attacks are rising in step, most industries are at risk, and it is one of the fastest-growing cyber security threats in recent history. If cryptocurrencies keep gaining value, we are likely to see crypto-malware attackers multiply in the future.

In its ransomware form, crypto-malware is a harmful program that encrypts files saved on a computer or device to extort money. The encryption scrambles the data in the files so that it is unreadable; attackers hold the decryption keys and demand payment to unscramble them.

Once downloaded onto a system, crypto-malware settles into different applications and files. When the victim opens an infected file, the malicious code runs in the background and mines for currency. The easiest way of spreading this malware is through ads and websites: the victim visits a website carrying a crypto-malware infection, and the code runs on the victim's device. Detecting the code is difficult because it does not settle on the computer but runs in the browser.

Crypto-Malware Attacks vs. Ransomware Attacks

Cybercriminals design both attacks for the same purpose, to extort money from victims, but the methods are different.

Ransomware:
- It encrypts the data on your computer and holds it for ransom.
- The victim's data stays encrypted until the victim pays the attacker.
- Attackers demand money directly.

Related: What Is Ransomware?

Crypto-malware:
- It is a harmful program that secretly uses a victim's computer or mobile device to mine cryptocurrency for the attacker.
- It is a covert crime that works in the background of the user's system.
- The attackers continue to mine cryptocurrency using the victim's device for as long as they can.

Crypto-malware does not overtly steal data; instead, it drains the victim's computing power and crucially slows down the system, so the victim cannot handle multiple tasks at once. Crypto-malware attacks have had a tremendous social impact, both in direct financial damages paid to cybercriminals and in lost profit from recovery costs and production downtime.

Crypto-malware attacks are increasing day by day, and detecting the malware is difficult. Defending against it is harder still. Useful measures include:

- Avoid clicking on unfamiliar links.
- Use a spam filter to keep infected emails from reaching your inbox.
- Only access URLs that start with HTTPS.
- Install cybersecurity software that can find threats and stop viruses before they attack your device.
- Always keep a backup of everything. This ensures that you can wipe the compromised data store and work from the backup whenever ransomware compromises your essential data.

Steps Organizations Should Take to Prevent an Attack

- Use machine learning together with anomaly detection to spot patterns associated with attacks, such as reduced processing speeds, and improve the security posture.
- Ensure that multifactor authentication solutions, VPNs, and remote services are fully patched, correctly configured, and properly segmented, and monitor for malicious activity, including DMARC (domain-based message authentication, reporting and conformance), DKIM (DomainKeys Identified Mail), and SPF (sender policy framework) failures.
- Check for malware indicators when sending and receiving messages and email.
- Teach employees about malware attacks so that they are aware of the dangers and risks.

Finally, we have covered what crypto-malware is. Everyone should know about crypto-malware attacks, and organizations should run a robust campaign to teach people about these harmful infections.
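As a rough illustration of the anomaly-detection point above (an assumption, not a product): one common cryptojacking symptom is sustained, unexplained CPU load, which can be watched with the third-party psutil package. The threshold and window are invented and would need tuning per machine.

```python
import psutil  # third-party package for system metrics

SPIKE_PCT = 70.0   # invented threshold: sustained load above this is suspicious
WINDOW = 12        # consecutive samples (one per 5 s) that must stay high

samples = []
for _ in range(60):                               # ~5 minutes of monitoring
    samples.append(psutil.cpu_percent(interval=5))
    samples = samples[-WINDOW:]
    if len(samples) == WINDOW and min(samples) > SPIKE_PCT:
        print("Sustained high CPU load with no obvious cause: "
              "inspect running processes for covert mining.")
```

A real monitoring agent would correlate such alerts with process lists and network flows before raising an alarm.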
Cyberattackers gain access to their victims' networks by exploiting initial vectors – entry points that enable them to drop malicious software (malware). Securing the most common cyberattack initial vectors is important in protecting your organization's network. Here are the most common cyberattack initial vectors and the corresponding cybersecurity best practices for securing them.

RDP, short for Remote Desktop Protocol, is one of the most popular application-level protocols for accessing Windows workstations or Windows servers. With the spread of the coronavirus disease 2019 (COVID-19) and the resulting government-mandated stay-at-home measures, remote working has become a new normal. This new normal, however, directly impacts cybersecurity. The ransomware called "Phobos", for instance, typically leverages compromised RDP connections as an initial vector.

Kaspersky Lab reported that since the beginning of March of this year, the number of RDP brute force attacks has skyrocketed across almost the entire planet. In a brute force attack, an attacker uses the trial-and-error method of guessing the correct username and password combination. Attackers are able to launch RDP brute force attacks because this protocol is often left exposed to the internet with a username and password combination as the only means of protection. A successful RDP brute force attack gives an attacker access to an entire network, which can be used for malicious activities such as stealing data or spreading malware.

McAfee Labs reported that the number of internet-exposed RDP systems jumped from nearly three million in January 2020 to more than four and a half million in March 2020. According to McAfee Labs, weak passwords remain one of the common points of entry for accessing internet-exposed RDP. "What is most shocking is the large number of vulnerable RDP systems that did not even have a password," McAfee Labs said.

Cybersecurity Best Practices in Securing RDP

Use strong usernames and passwords, enable multi-factor authentication, close port 3389, use Network Level Authentication (NLA), and make RDP available only via a corporate VPN.

VPN, short for virtual private network, when configured correctly and patched in a timely manner, offers a secure way to allow remote workers access to your organization's network. As mentioned above, one of the best practices in securing RDP is making this protocol available only via a corporate VPN. Like RDP, VPN adoption has seen a big leap since the start of the COVID-19 pandemic. Making RDP available only through a corporate VPN prevents brute force attacks, as guessing the correct username and password combination isn't enough.

Like any other software, however, VPN products aren't perfect. Last year, security researchers discovered vulnerabilities in VPN products from several vendors, including Fortinet, Palo Alto and Pulse Secure. Even though VPN vendors have released security updates fixing the discovered vulnerabilities, many VPN users still fail to apply them, leaving their corporate VPNs vulnerable to exploitation.

As early as August 2019, the Canadian Centre for Cyber Security warned about the active exploitation of VPN vulnerabilities. "Due to the fact that VPN devices are typically Internet-facing, it is of the utmost importance that they be kept up to date with the latest patches," the Canadian Centre for Cyber Security said.

Cybersecurity Best Practice in Securing VPN

Apply the latest security update.
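Stepping back to the RDP brute force problem for a moment: a hedged sketch of one way defenders spot it (not a product, and the event feed below is hypothetical) is to count failed logons per source IP in a sliding window and alert when a threshold is crossed.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # assumed detection window
THRESHOLD = 20         # assumed failure count that triggers an alert

failures = defaultdict(deque)  # source IP -> timestamps of failed logons

def record_failed_logon(source_ip, timestamp):
    q = failures[source_ip]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_SECONDS:   # drop stale entries
        q.popleft()
    if len(q) >= THRESHOLD:
        print(f"possible brute force from {source_ip}: "
              f"{len(q)} failures in {WINDOW_SECONDS}s")

# Hypothetical usage with (ip, unix_time) events parsed from a logon log:
for ip, ts in [("203.0.113.7", t) for t in range(0, 100, 5)]:
    record_failed_logon(ip, ts)
```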
The first email was sent nearly 50 years ago. To date, email is the primary form of digital communication, relied upon by billions of users worldwide. While other forms of digital communication are available, people prefer email much as people once relied on snail mail. Through the years, cybercriminals have learned that email is a powerful initial vector for gaining access to victims' networks.

Twenty years ago, an email was sent with the subject "ILOVEYOU". The email's body contained these few words: "kindly check the attached LOVELETTER coming from me". The email came with an attachment named "LOVE-LETTER-FOR-YOU.TXT". Clicking on the attached document resulted in the following: unauthorized copying and transfer of all cached Windows passwords; overwriting of the recipients' computer files, denying victims access to them; and mass emailing of the message to everyone in the recipients' Outlook address books, leading to the overloading of many mail systems around the world.

BBC reporter Geoff White recently tracked down the creator of the email, who was working in a mobile phone repair shop inside a shopping mall in Manila. Onel de Guzman, now 44, admitted to White that he alone created the email containing the "ILOVEYOU" virus, sometimes referred to as the "Love Bug" or "Love Letter" virus. The email caused mayhem on May 4, 2000, and in a span of just 24 hours, the ILOVEYOU virus infected an estimated 45 million computers worldwide, causing an estimated US$10 billion in damages.

Many of today's malware programs, such as ransomware, gain access to their victims' networks by weaponizing emails via spearphishing campaigns – a type of cyberattack that specifically targets victims, crafting malicious emails to suit the target's profile and tricking the recipient into clicking a link found in the email body or downloading an attachment. Clicking the link or downloading the attachment leads to malware being dropped on the recipient's computer.

A recent report from Cisco Talos showed that email remained the top infection vector. Cisco Talos also observed increased compromises of remote desktop services (RDS) as well as compromises of Pulse VPN.

Cybersecurity Best Practices in Securing Emails

Avoid clicking on links in unsolicited emails and be cautious of email attachments.

Your business and IT have many moving components that should help your business operate and grow. Our staff helps you discover all vulnerable points and protect them using the right processes, tools and technologies, including VPN and RDP. Call us today (416) 920-3000 to schedule a free evaluation of your environment, or email us at email@example.com
The little lock icon that appears on a Web browser window frame when a secure connection exists between a browser and a Web server may be lulling users into a false sense of security. The reality is that secure connections, in which data is encrypted using secure sockets layer (SSL) technology before being transmitted over the Web, are increasingly being used to hide and spread malicious code, according to a report from security vendor Kaspersky Labs.

The issue is certainly not new. Security analysts have long warned about the possibility of hackers exploiting encrypted SSL connections to sneak viruses and other malicious code past firewalls, antivirus software and intrusion detection systems. But what lends greater urgency to the issue now is the widespread use of SSL communications by banks, retailers, e-commerce sites and e-mail providers on the Internet, said Shane Coursen, a senior technical consultant at Kaspersky.

"A lot of people, when they go to a Web site and see the picture of the lock on their browsers, assume the connection they have with the server is secure" and pay little attention to the data being exchanged, he said.

All that a secure connection is designed to do is verify the identity of the party with whom information is being exchanged, and then use encryption to protect the information from being viewed or modified by a third party. There is usually little validation of the content being transmitted during such sessions. As a result, rogue hackers can use the connections as a way to transmit and spread malicious code, including Trojan horse programs and e-mail worms, on client systems and Web servers, Coursen said.

"There are misconceptions that technologies such as SSL indicate that a Web site is safe when, in fact, it is not," said John Weinschenk, CEO and president of security firm Cenzic Inc. "A secure sockets layer function certifies that the server the browser is talking to is the genuine site and provides encryption of data being transmitted." While the technology does have a valid use and does provide some level of security, it still allows hackers to exploit underlying applications, he said. "While large companies have taken significant measures to secure their sites, the fact remains that there are holes hackers can exploit, and personal information can be compromised unless proactive measures are taken."

Traditional antivirus tools and intrusion detection systems are inadequate because they are not designed to detect malware in an encrypted connection, so malicious data within secure channels can cause a significant amount of damage, Coursen said. But options are becoming available to deal with the issue, said Pete Lindstrom, an analyst with Midvale, Utah-based Burton Group. Vendors of intrusion detection systems, for instance, offer tools that can intercept an encrypted data stream, scan the contents for malware and then pass the data along to its destination in encrypted form, Lindstrom said.
One of the problems with relying entirely on one security solution is that the cyber threat landscape changes rapidly. Antivirus software identifies threats by matching a particular piece of software's code against programs it has identified as "malicious" in its database. But what happens when an unidentified virus infects a victim's computer? Antivirus programs can only protect against threats they already know, and in today's world of evolving malware, there are likely many threats antivirus doesn't know about.

The solution, according to a recent Data Center Journal article, is application control. Instead of compiling a list of threats, this technique looks at what is already on a computer and identifies programs as safe, blocking software that doesn't match. This allows application control solutions to block unknown threats. Software developers can also use code-signing certificates to digitally sign their software, which means any code with that signature becomes a trusted source.

Utilizing multiple security solutions may become even more important as the level of sophistication in malware grows. "Targeted attacks can be engineered to seek out a very specific machine, infrastructure or geography," the article stated. "They can target a single company, maybe with the intention of stealing trade secrets or discrediting that company. If you want a good example, just look at the infection map for Flame: it is tightly grouped around the Gulf States. The other development is the apparent involvement of the nation state."

How protected are you?

Another problem is that organizations are likely already affected by malware. According to a recent V3.co.uk article, 95 percent of companies have already fallen victim to attacks from advanced malware and suffer from an average of 643 successful infections per week. Perhaps even more troubling is the statistic that there has been a 400 percent increase in the number of infections since last year.

The article highlighted the increasing popularity of targeted attacks. Imagine you got an email that looked like it was from a friend. Maybe the text in the email jokes about the trip you took last week and how you came back sunburnt. Would you click on the links in the email, even if it came from an address you didn't recognize? If you have old vacation pictures on Facebook, a determined hacker could use them to write such an email, and cyber criminals are starting to use that kind of information to craft attacks specifically for their victims.

"The attacks reportedly use social engineering to create Trojan email campaigns custom-designed for their victims," the article stated. "The campaigns contain malicious web links and attachments that infect users' machines with malware when opened."

Have you ever been targeted by a social engineering attack? Was it through email or a social networking website?
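As a minimal sketch of the allowlisting idea (an illustration, not any vendor's implementation): hash each executable and permit only digests that appear on a known-good list. The allowlist entry below is a made-up value.

```python
import hashlib

ALLOWLIST = {
    # SHA-256 digests of approved binaries (made-up value below)
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path):
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    return sha256_of(path) in ALLOWLIST

# Demo: hash this very script; it is not on the list, so it is blocked.
print(may_execute(__file__))
```

Signature-based trust, mentioned above, extends the same idea: instead of listing individual hashes, anything signed by an approved certificate is allowed to run.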
Software-defined data center (SDDC) is one of the newest yet most discussed terms among enterprises today. Coined in 2012, SDDC positions software, rather than hardware, at the forefront of the data center. Futuristic as it may still sound, SDDC has become, alongside cloud, IoT and virtualization, more and more of a reality as enterprises continue to look to scalability, security and self-sufficiency as guiding principles for their IT strategy and implementations.

What is a Software Defined Data Center (SDDC)?

According to Gartner, a software-defined data center (SDDC) is a data center in which all the data center infrastructure is virtualized and delivered "as a service." The provisioning and operation of the entire infrastructure is thus automated by software. This enables increased levels of automation and flexibility that will enhance business agility through the rising adoption of cloud services, and enable modern IT approaches such as DevOps.

Why is a Software Defined Data Center (SDDC) Important?

Gartner Research predicts that by 2018, more than 80% of hardware and software infrastructure vendors will have changed their software development processes, moving from CLI and GUI to API-driven functionality, up from less than 20% today. Simultaneously, by 2020, the programmatic capabilities of an SDDC will be required by 75% of Global 2000 enterprises to successfully implement a complete hybrid cloud deployment and/or DevOps development model, up from 10% today.

The analyst firm Research & Markets estimated that the SDDC market will achieve a compound annual growth rate of 22% over the next five years, reaching a total market size of US$80 billion by the end of 2021. Combined, this means SDDC is here to stay and will likely impact most, if not all, aspects of IT planning and the delivery of data center needs.
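The shift from CLI and GUI to API-driven functionality that Gartner describes can be pictured with a single REST call. The sketch below is purely illustrative: the endpoint, token and payload are hypothetical, not any specific vendor's API.

```python
import json
import urllib.request

# Hypothetical request body describing the capacity we want provisioned.
payload = {
    "name": "web-tier-01",
    "cpus": 4,
    "memory_gb": 16,
    "network": "prod-segment",
}

req = urllib.request.Request(
    "https://sddc.example.internal/api/v1/vms",   # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer <token>",   # placeholder credential
             "Content-Type": "application/json"},
    method="POST",
)

# This call only succeeds inside an environment that actually exposes
# such an API; it is shown to contrast with clicking through a GUI.
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```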
Randomness is simulated by the use of probabilities for sequence flows and token routing, and also by using statistical distributions to reflect variability in the processing times of activities. To make sure results are valid, the simulation needs to be run for long enough that outcomes reflect the underlying random behavior rather than chance (consider the difference between tossing a coin or rolling a die a few times versus many times). Provision should be made to compare results from the same scenario, but with different run lengths or replications.

The run length required to yield usable outcomes depends on the process model structure, the amount of variability and the objective; consequently, a single recommended run length cannot be provided. A replication shares the same scenario configuration and runs for the same length of time, but uses an alternative random stream.

Simulation is well known for providing what-if analysis capabilities; a single simulation run can provide valuable insight into the performance of a particular scenario. The simulation of multiple scenarios, and the possibility of comparing key outcomes, adds further value and support for decision makers.
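A minimal sketch of replications in code (an illustration, not any particular simulation engine): each replication keeps the same scenario and run length but seeds its own random stream, and the spread across replications shows how much chance is still in play.

```python
import random
import statistics

def run_once(seed, n_tokens=1000, mean_service=4.0):
    """One replication: sample activity times from an assumed exponential
    distribution and return an output measure (the mean activity time)."""
    rng = random.Random(seed)  # each replication gets its own random stream
    times = [rng.expovariate(1.0 / mean_service) for _ in range(n_tokens)]
    return statistics.mean(times)

results = [run_once(seed) for seed in range(10)]  # 10 replications
print(f"mean over replications:     {statistics.mean(results):.3f}")
print(f"spread across replications: {statistics.stdev(results):.3f}")
```

A small spread relative to the mean suggests the run length is adequate; a large spread is a signal to run longer or add replications before drawing conclusions.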
When looking at wireless access points, I would always find myself curious about one thing: how many concurrent users can use this thing at the same time before it falls over? It's a question for which the answer isn't plastered on the side of the box. More often than not, asking one of the technical teams at the manufacturer of your choice will also reward you with vague uncertainty or a general disinclination to answer the question. On the odd occasion where your question is answered, it could come in a variety of quantities: "10!" … "50!" … "100!" … "200!" And then you get to the caveats… "…depending on usage…"

To understand the proper answer, you have to understand the nature of the question itself, and furthermore, the logic behind the technology.

Let's roll back a few years, to when 802.11n was still an idea on a chalkboard and 802.11g was the cream of the crop. Routers on every shelf in every computer store boasted of "super fast" 54Mbps speeds. The way these speeds are measured means that in actual fact the bandwidth a single user would get will be barely half of that. Practical demonstrations of 802.11g routers delivered throughput, at full signal, of around 19-22Mbps. This is when tested with a laptop or wireless device with a similar 802.11g card in it.

So, 22Mbps we'll use as the example for our pie. For simplicity's sake, we'll say every user connects using 802.11g. Your first user gets the whole pie in one piece; if a second user comes along and connects, that pie is split equally between the two. If a third arrives, it splits into thirds, and so on. Going by that method, a base starting bandwidth of 22Mbps could theoretically support approximately 20 users who each want to use a service at a (by today's standards) slow speed of approximately 1Mbps. And if you add any more on, they only get a smaller slice of pie each.

Now we fast forward to the age of 802.11n (and soon to come, 802.11ac!). While the speeds are greater, the logic behind them stays the same. It's just a bigger starting pie for everyone to tuck into. And it isn't really something that can be beaten by throwing money at it. You could have a store-bought £50 54Mbps wireless router and a £300 54Mbps enterprise router, and have the same throughput (trust me, I've tested such things, and was surprised by that revelation myself!). All you get is the extra bells and whistles (additional monitoring, intrusion detection, multi-SSID, 3/4G backup, etc.). While this might be good in some instances, it doesn't really bolster the number of concurrent connections. Only bandwidth and radios can achieve that, really.

So now you know about the type of numbers your chosen access point can support, and you find that one on its own just won't cut the mustard. But if you put two access points on a shelf next to each other, you won't be much better off. In comes a new player to our game: cross-channel interference.

For 2.4 GHz Wi-Fi operation, the wireless spectrum comes neatly packaged in 14 "channel" blocks. These are 5 MHz apart from each other. However, wireless transmissions themselves occupy 22 MHz of channel bandwidth. If we were to use channel 1 (2412 MHz) for one access point and channel 2 (2417 MHz) for another (within the same area), we would find degraded signal quality and throughput as both battle for airspace in that cross-over region. For that reason, there are really only three channels which play nicely with each other.
These are channels 1 (2412 MHz), 6 (2437 MHz), and 11 (2462 MHz). So, while you have up to 14 channels (in some countries) to choose from, you only really have three that are of any practical use in a properly planned and deployed wireless network.

This is where brick walls and thick concrete floors and ceilings can be your best friend. The dBm attenuation (signal loss) caused by these thick solid obstacles means that you can "isolate" floors and rooms, using the environmental challenges to your own advantage. Ironically, these are the very challenges you would have been trying to get around in previous network plans.

However, it isn't all doom and gloom – there is a better long-term solution to your crowded wireless networks. It's called 5 GHz. While 2.4 GHz wireless channels ended up with a rather frustrating level of channel overlap, 5 GHz channels provide a much more convenient channel distribution and consequently eradicate the issue of channel overlap between your access points. This suddenly presents a network planner with the advantageous route of deploying more APs on 5 GHz in a much more tightly packed space. This provides more access for more users, while also avoiding the interference which plagues older 2.4 GHz networks.

The only drawback with 5 GHz is that it isn't as widely adopted in wireless devices. As time goes by, more devices are coming out with "dual radio" wireless cards, allowing users to switch between 2.4 and 5 GHz networks as they please. In a public Wi-Fi hotspot setting, you need to look towards accommodating both network types, as you cannot guarantee that all your visitors will be compatible with 5 GHz frequencies. However, if you are planning for a more office-centric environment, then you could look towards ensuring any wireless devices (and their subsequent compatibility) are 5 GHz compatible to make a far more future-proofed network. Concepts such as BYOD (Bring Your Own Device) could be implemented in accordance with this, so that you define your own operating policy within your company (enforcing 5 GHz preference, eradicating use of 802.11b on the network, etc.).

So in conclusion, always remain mindful of how many connections you want to handle for a wireless implementation, and the environment you're deploying in, to ensure you make the most informed choice that you can for optimal results.
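To close, the two calculations running through this piece (the shrinking pie and the channel overlap rule) fit in a few lines of code. The figures are the approximations used above, not lab measurements.

```python
def per_user_mbps(effective_throughput_mbps, users):
    """Airtime is shared, so each active user gets a slice of the pie."""
    return effective_throughput_mbps / users

def channels_overlap(ch_a, ch_b, spacing_mhz=5, signal_width_mhz=22):
    """2.4 GHz channel centers sit 5 MHz apart, but a transmission is
    ~22 MHz wide, so nearby channels collide with each other."""
    return abs(ch_a - ch_b) * spacing_mhz < signal_width_mhz

print(per_user_mbps(22, 20))    # ~1.1 Mbps each for 20 users on one AP
print(channels_overlap(1, 6))   # False: centers 25 MHz apart, in the clear
print(channels_overlap(1, 2))   # True: centers only 5 MHz apart
```

Running the overlap check across all channel pairs reproduces the rule of thumb above: 1, 6 and 11 are the only mutually non-overlapping trio.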