“It’s an ongoing spy-versus-spy problem,” Randall Palm, director of IT services for CompTIA, told internetnews.com. “The better we get at stopping one attack, the better they get at exploiting other vulnerabilities.”

Of 900 organizations surveyed, 36.8% said they were victimized by one or more browser-based attacks, up from 25% last year. A browser-based attack is essentially malicious code contained within a Web page that appears harmless. The attacker uses the browser and the user's system permissions to sabotage or disrupt computer functions.

A number of browser-based vulnerabilities have been exposed, many of them affecting Microsoft’s Internet Explorer. Just last week, CERT flagged a yet-unpatched flaw that makes use of Compiled Help Files (CHM). In February, a frame exploit was discovered that grabs keystrokes. Microsoft last patched Internet Explorer in February against the URL-spoofing exploit.

Ken Dunham, director of malicious code at iDefense, was not surprised by CompTIA’s finding; his firm has also noted a dramatic increase in malicious code delivered via Web browsers. “This should not be a surprise to anyone in the computer security world, but may surprise some home users,” Dunham said. “With the number of successful exploits against various IE vulnerabilities in recent months it’s a huge problem.”

See the complete story on internetnews.com.
Source: https://cioupdate.com/beware-the-browser-based-attack/
As the world advances in technology, people and organizations enjoy the fruits of being connected online. However, attackers can also take advantage of the online platform, so anyone who is active online should know how to prevent cyberattacks. So how can you use IP geolocation to ward them off?

IP geolocation allows you to determine the physical location of someone or something based on their Internet Protocol (IP) address. This technique can be used to identify where a cyberattack is coming from, help prevent many types of attacks, and make it more difficult for hackers and bots to access your website or other resources.

How Does IP Geolocation Work?

IP geolocation is the process of identifying the geographical location of a specific IP address. In simple terms, an IP address is a unique set of numbers that identifies each device connected to the internet. This number can be used to locate your computer or mobile device in real time, which means that your location can be tracked at any given moment.

Many devices and applications rely on location-based services, such as maps and weather reports, which ping your real-time position. Location data is also used by social media and e-commerce platforms: Facebook uses it to suggest friends nearby, and Amazon recommends products based on your past purchases in a given city. IP geolocation is also used for advertising; some companies target ads based on where you are located so they can price their products according to local supply and demand.

Avoid Cyberattacks Using IP Geolocation

IP geolocation works by taking the IP address of a device and pinning down the location associated with that address.
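The range-based lookup behind these services can be sketched with Python's standard ipaddress module. The table below uses reserved documentation IP ranges with made-up country assignments; a real system would query a full geolocation database such as MaxMind's GeoLite2:

```python
import ipaddress

# Tiny illustrative lookup table. Production geolocation databases
# contain millions of ranges; these entries are invented for the sketch.
GEO_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "AU"),
    (ipaddress.ip_network("198.51.100.0/24"), "US"),
    (ipaddress.ip_network("192.0.2.0/24"), "DE"),
]

def country_for_ip(ip_str):
    """Return the country code for an IP address, or None if unknown."""
    ip = ipaddress.ip_address(ip_str)
    for network, country in GEO_TABLE:
        if ip in network:
            return country
    return None

print(country_for_ip("203.0.113.9"))  # -> AU
print(country_for_ip("8.8.8.8"))      # -> None (not in our tiny table)
```

The same membership test is what a geolocation database performs at scale, just against a far larger and regularly updated set of ranges.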
This is usually done by looking up the location of the ISP that provides that particular IP address. This is especially helpful for security issues like detecting fraud or identifying where a cyberattack might originate. IP geolocation has also been used as a tool to track down hackers and cybercriminals involved in illegal activities online, and you can use it to block users in a specific location from accessing your page altogether.

Using IP Geolocation Against Cyberattacks

The internet is an incredible tool for communication and information sharing, but it also poses a cybersecurity threat: hackers can steal your personal information or attack your website. By using IP geolocation as a tool against cyberattacks, you can protect your business from these threats.

1) Limit Your Access

IP geolocation determines the location of an internet user by examining their IP address. This information can then be used to decide whether the user should be allowed access to your website or system. For example, if you want to keep your website open only to users in your country, you can use IP geolocation to block all IP addresses that are not from within your country's borders. An attacker could still get around this type of protection by using an IP address from another country, but it at least makes the attack more difficult for the threat actor.

2) Prevent Bots From Attacking Your Website

Generally, "bot" is used as a derogatory term for automated, software-enabled processes that perform tasks over the internet. Though they can be used for malicious purposes, there are also many good uses for bots, for example:

- Website optimization
- Security testing

Many people don't realize that their IP address reveals where they live and how far their computer is from the server that hosts your website.
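The country-based blocking described in point 1 above can be sketched as follows. The allowlist, the IP ranges, and the fail-closed policy for unknown locations are all illustrative assumptions, not a complete implementation:

```python
import ipaddress

# Hypothetical policy: only serve visitors geolocated to these countries.
ALLOWED_COUNTRIES = {"US", "CA"}

# Illustrative range-to-country table; a real deployment would query a
# geolocation database instead of hard-coding ranges.
GEO_TABLE = [
    (ipaddress.ip_network("198.51.100.0/24"), "US"),
    (ipaddress.ip_network("203.0.113.0/24"), "AU"),
]

def lookup_country(ip_str):
    ip = ipaddress.ip_address(ip_str)
    for network, country in GEO_TABLE:
        if ip in network:
            return country
    return None

def access_decision(ip_str):
    """Return (allowed, reason) for a visitor IP; unknown locations fail closed."""
    country = lookup_country(ip_str)
    if country is None:
        return False, "unknown location"
    if country not in ALLOWED_COUNTRIES:
        return False, "blocked country: " + country
    return True, "ok"
```

As noted above, a determined attacker can evade this with a proxy or VPN exit in an allowed country, so the check raises the bar rather than guaranteeing protection.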
An IP address also reveals whether or not a visitor is using a proxy server to conceal their true location.

3) Detect Fraudulent Transactions

IP geolocation can also be used to detect suspicious transactions. When you're watching for fraud, it's important to determine whether a transaction is legitimate. A transaction might be suspicious because of where it originated, who initiated it, and how the initiator connected to the internet at the time of the activity. Tracking IP addresses around the world helps you identify fraudulent purchases by spotting users who connect from places that do not match where they say they live.

Moreover, IP geolocation can protect your personal information from fraudsters. When someone tries to access your account or make fraudulent transactions on your behalf, they usually do so through fake IP addresses that appear to come from another country or city. IP geolocation services let you see where these fake IP addresses are really coming from and block those locations altogether, preventing access to sensitive information like credit card numbers or bank accounts.

Other Ways to Prevent Cyberattacks

While IP geolocation is an effective tool for preventing cyberattacks, it is not the only way to protect yourself. You can also train your organization, stay up to date, and improve your security protection.

1) Train Your Organization

To prevent cyberattacks, you need to be prepared with a plan of action. Train your organization on how to handle cyberattacks by explaining what they look like and how they work. This will help employees recognize when something is happening so they can take steps toward prevention.
Train your employees about cyberthreats and what they can do to protect themselves (for example, using strong passwords and two-factor authentication). The first step in creating a plan is getting educated on the different types of cyberattacks and how they can affect your organization. This will help you create a strategy for prevention, detection, and recovery that fits your organization's needs and resources.

2) Stay Up to Date

Stay up to date with the latest cyberattack trends by reading news articles and blogs about them. This will give you insight into what's happening in the cyber world so that you know how to safeguard yourself when these threats arise.

3) Improve Your Security Protection

This includes installing security updates and patches, using antivirus software, and running a firewall. These steps should be taken before you even think about adding IP geolocation to your security system. Update software regularly and install new hardware where necessary (such as firewalls). Even if everything seems fine right now, new updates are always coming out that can help protect against future attacks.

Protect Yourself and Your Organization With Abacus

IP geolocation is a powerful cybersecurity tool for any organization. It can reduce the risk of cyberattacks and identify fraudulent transactions. If you would like to learn more about how IP geolocation works and how it can be applied in your organization's environment, Abacus is here to help. You can count on Abacus to provide the IT services and products you need at an affordable price. We help businesses and organizations run more smoothly, more efficiently, and more securely. Contact us today and let's discuss how we can help you.
Source: https://goabacus.com/ip-geolocation-as-a-tool-against-cyberattacks/
John McCarthy coined the term Artificial Intelligence in the 1950s, being one of the founding fathers of Artificial Intelligence along with Marvin Minsky. In 1958, Frank Rosenblatt built a prototype neural network, which he called the Perceptron. In addition, the key ideas of Deep Learning neural networks for computer vision were already known in 1989, and the fundamental Deep Learning algorithms for time series, such as LSTM, were already developed in 1997, to give some examples. So, why this Artificial Intelligence boom now? Undoubtedly, the available computing power has been the main trigger, as we have already presented in a previous post. However, other factors have contributed to unleashing the potential of Artificial Intelligence and related technologies. Next, we are going to talk about the most important of these factors.

The data, the fuel for Artificial Intelligence

(Image source: Júlia Torres — Barcelona)

Artificial Intelligence requires large datasets to train its models but, fortunately, the creation and availability of data has grown exponentially thanks to the enormous decrease in the cost, and increase in the reliability, of data generation: digital photos, cheaper and more precise sensors, etc. Furthermore, the improvements in storage hardware of recent years, together with the spectacular advances in NoSQL database technology for managing it, have made it possible to assemble enormous datasets for training Artificial Intelligence models.

Beyond the general increase in data availability that the advent of the Internet brought, specialized data resources have catalyzed the progress of the area. Many open databases have supported the rapid development of Artificial Intelligence algorithms. An example is the ImageNet database, of which we have already spoken, freely available with more than 10 million hand-tagged images.
But what makes ImageNet special is not its size, but the competition that was run annually with it, which was an excellent way to motivate researchers and engineers. While in the early years the proposals were based on traditional computer vision algorithms, in 2012 Alex Krizhevsky used a Deep Learning neural network, now known as AlexNet, which reduced the error rate to less than half of what the winner of the previous edition of the competition achieved. By 2015 the winning algorithm rivalled human capabilities, and today Deep Learning algorithms far surpass the human error rate in this competition.

But ImageNet is only one of the available databases that have been used to train Deep Learning networks lately; many others have been popular, such as: MNIST, STL, COCO, Open Images, Visual Question Answering, SVHN, CIFAR-10/100, Fashion-MNIST, IMDB Reviews, Twenty Newsgroups, Reuters-21578, WordNet, Yelp Reviews, Wikipedia Corpus, Blog Authorship Corpus, Machine Translation of European Languages, Free Spoken Digit Dataset, Free Music Archive, Ballroom, The Million Song, LibriSpeech, VoxCeleb, The Boston Housing, Pascal, CVPPP Plant Leaf Segmentation, and Cityscapes. It is also important to mention Kaggle, a platform that hosts data analysis competitions where companies and researchers contribute and share their data while data engineers from around the world compete to create the best prediction or classification models.

Entering an era of computation democratization

However, what happens if you do not have this computing capacity in your company? Artificial Intelligence has until now mainly been the toy of big technology companies like Amazon, Baidu, Google or Microsoft, as well as some startups that had these capabilities. For many other businesses and parts of the economy, artificial intelligence systems have so far been too expensive, and the required hardware and software too difficult to fully implement.
But now we are entering another era of democratization of computing, in which companies can have access to large data processing centers of more than 28,000 square meters (four times the pitch of Barcelona football club (Barça)), with hundreds of thousands of servers inside. We are talking about Cloud Computing.

Cloud Computing has revolutionized the industry through the democratization of computing and has completely changed the way business operates. Now it is changing the landscape of Artificial Intelligence and Deep Learning as well, offering a great opportunity for small and medium enterprises that cannot build this type of infrastructure themselves; Cloud Computing gives them access to computing capacity that was previously available only to large organizations or governments.

Besides, Cloud providers now offer what is known as Artificial Intelligence as a Service (AI-as-a-Service): Artificial Intelligence services in the Cloud that can be intertwined with companies' internal applications through simple protocols based on REST APIs. This puts them within reach of almost everyone, since the service is paid for only for the time used. This is disruptive, because right now it allows software developers to put virtually any artificial intelligence algorithm into production in a heartbeat.

Amazon, Microsoft, Google and IBM are leading this wave of AIaaS services, which can be put into production quickly, starting from the initial training stages. At the time of writing this book, Amazon AIaaS was available at two levels: predictive analytics with Amazon Machine Learning, and the SageMaker tool for rapid model building and deployment. Microsoft offers its services through Azure Machine Learning, which can also be divided into two main categories: Azure Machine Learning Studio and Azure Intelligence Gallery. Google offers the Prediction API and the Google ML Engine.
IBM offers AIaaS services through its Watson Analytics. And let's not forget solutions from startups such as PredicSis or BigML. Undoubtedly, Artificial Intelligence will lead the next revolution. Its success will depend to a large extent on the creativity of companies and not so much on hardware technology, in part thanks to Cloud Computing.

An open-source world for the Deep Learning community

Deep Learning Frameworks (source: https://aws.amazon.com/ko/machine-learning/amis/)

Some years ago, Deep Learning required experience in languages such as C++ and CUDA; nowadays, basic Python skills are enough. This has been possible thanks to the large number of open-source software frameworks that have been appearing, such as Keras, central to our book. These frameworks greatly facilitate the creation and training of models and let the algorithm designer abstract away the peculiarities of the hardware, accelerating the training processes.

The most popular at the moment are TensorFlow, Keras and PyTorch, because they are the most dynamic at this time if we go by the contributors, commits and stars of these projects on GitHub. In particular, TensorFlow has recently gained a lot of momentum and is undoubtedly the dominant one. It was originally developed by researchers and engineers from the Google Brain group at Google. The system was designed to facilitate Machine Learning research and to make the transition from a research prototype to a production system faster. If we look at the GitHub page of the project we will see that, at the time of writing this book, it has more than 35,000 commits, more than 1,500 contributors and more than 100,000 stars. Not bad at all.

TensorFlow is followed by Keras, a high-level API for neural networks, which makes it the perfect environment to get started on the subject.
The code is specified in Python, and at the moment it is able to run on top of three outstanding environments: TensorFlow, CNTK or Theano. Keras has more than 4,500 commits, more than 700 contributors and more than 30,000 stars.

PyTorch and Torch are two Machine Learning environments implemented in C, using OpenMP and CUDA to take advantage of highly parallel infrastructures. PyTorch is the version most focused on Deep Learning and based on Python, developed by Facebook. It is a popular environment in this field of research since it allows a lot of flexibility in the construction of neural networks and has dynamic tensors, among other things. At the time of writing this book, PyTorch has more than 12,000 commits, around 650 contributors and more than 17,000 stars.

Finally, although it is not exclusively a Deep Learning environment, it is important to mention Scikit-learn, which is used very often in the Deep Learning community for data preprocessing. Scikit-learn has more than 22,500 commits, more than 1,000 contributors and nearly 30,000 stars.

But as we have already indicated, there are many other frameworks oriented to Deep Learning. Those that we would highlight are Theano (Montreal Institute for Learning Algorithms), Caffe (University of California, Berkeley), Caffe2 (Facebook Research), CNTK (Microsoft), MXNet (supported by Amazon among others), Deeplearning4j, Chainer, DIGITS (Nvidia), Kaldi, Lasagne, Leaf, MatConvNet, OpenDeep, Minerva and SoooA, among many others.

An open-publication ethic

In the last few years in this area of research, in contrast to other scientific fields, a culture of open publication has developed, in which many researchers publish their results immediately, without waiting for the usual peer-review approval at conferences, in repositories such as arxiv.org of Cornell University (arXiv).
This means that a great deal of open-source software associated with these articles is available, which allows this field of research to move tremendously quickly, since any new discovery is immediately available for the whole community to see and, where appropriate, to build a new proposal on top of. This is a great opportunity for users of these techniques.

The reasons for research groups to openly publish their latest advances can be diverse. For example, articles rejected at the main conferences in the area can still circulate as preprints on arXiv. This was the case for one key paper in the advancement of Deep Learning, written by G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever and R. Salakhutdinov, which introduced the Dropout mechanism; it was rejected from NIPS in 2012. For its part, Google, by publishing its results, consolidates its reputation as a leader in the sector, attracting the next wave of talent, the shortage of which is one of the main obstacles to progress in the field.

Advances in algorithms

Thanks to the hardware improvements we have already presented, and to the greater computing capacity available to the scientists researching in the area, it has been possible to advance dramatically in the design of new algorithms that overcome important limitations of the previous ones. For example, until not many years ago it was very difficult, from an algorithmic point of view, to train multilayer networks. But in the last decade there have been impressive advances, with improvements in activation functions, the use of pre-trained networks, improvements in training optimization algorithms, etc. Today, algorithmically speaking, we can train models of hundreds of layers without any problem.

The data in this section are as available at the time of writing it (the Spanish version of the book), at the beginning of 2018.

References:

- Kaggle [online]. Available at: http://www.kaggle.com [Accessed: 12/03/2018]
- Empresas en la nube: ventajas y retos del Cloud Computing. Jordi Torres. Editorial Libros de Cabecera, 2011.
- Wikipedia, REST [online]. Available at: https://en.wikipedia.org/wiki/Representational_state_transfer [Accessed: 12/03/2018]
- Azure ML Studio [online]. Available at: https://azure.microsoft.com/en-us/services/machine-learning-studio/ [Accessed: 12/03/2018]
- Google ML Engine [online]. Available at: https://cloud.google.com/ml-engine/docs/technical-overview [Accessed: 12/03/2018]
- See https://github.com/laonbud/SoooA/
Source: https://resources.experfy.com/ai-ml/why-now-this-artificial-intelligence-boom/
Over the past several years, as sophisticated nation-state cyberthreats have become a reality and attacks on critical infrastructure have become commonplace, the concept of offensive cybersecurity has gained more mainstream traction. For almost all organizations, public and private, traditional defensive cybersecurity has been and will always be the only approach. The adoption of an offensive approach, much like a boxer cornering their opponent in the ring, carries risks that may be unacceptable, or even illegal, if the punch is not delivered with extreme precision. Actively fighting back is a very risky move in cybersecurity.

By definition, offensive cybersecurity is a proactive approach that involves launching a cyberattack against adversaries to disrupt or cripple their operations and to deter future attacks. This approach is sometimes referred to as “hacking back” and relies on accurately determining who is conducting the attacks against you. In general, the targets of cyber offensives are threat actors that have been identified as launching cyberattacks against you or your organization.

As every security professional should know, hacking back is not a trivial exercise, and the approach can be riddled with flaws. Currently, the practice of hacking back remains illegal, as it would violate the Computer Fraud and Abuse Act (CFAA), first enacted in 1986. However, a bill was recently introduced in the U.S. Congress that would allow organizations to take offensive actions against their IT network’s intruders.

The largest problem with any offensive cybersecurity strategy is the risk, or perceived risk, of an attack being launched by mistake. A full-fledged cyber offensive could inflict devastation comparable in scale to a conventional war or a nuclear bomb. This is not farfetched.
If an attack occurred within critical infrastructure (CI) or departments like the FAA, we could see poisoned water supplies, massive loss of power, and even manipulation of civil aircraft. These are the risks of any offensive attack at scale. For instance, imagine a company accidentally targeting CI because it believes the attack it is experiencing originates there. What if CI was “owned” and used to launch DDoS attacks, such as from the Mirai botnet, and someone decided to hack back against the CI? I think you can see the flaws of this offensive/retaliatory approach. Moreover, the ramifications of an inappropriate “punch” back could provoke an escalation that many organizations are not prepared to deal with, technically or legally.

Next, consider the growing use of artificial intelligence (AI), particularly with regard to IT security automation and orchestration. AI is based on machine learning algorithms: programs that learn from examples and formulate results derived from statistics or other models. While AI lacks a concept of good or bad, it can be programmed with parameters to differentiate between “good” and “bad” behaviors or desired outcomes. The problem is that AI can learn bad behavior, like a young child, and could initiate a very undesirable response, much like a temper tantrum. If AI is allowed to automatically attack back, then a cyberwarfare scenario could quickly escalate beyond control. This is not a doomsday SkyNet scenario, but more like multiple network cards jabbering on a 10BASE2 (yes, old school) network, drowning out all legitimate communications.

As a real-world example, consider the streaming of video. The desired result is clear: multicast packets delivered to all the targets subscribing to the stream. If an in-line network device corrupts those packets due to a hardware/software fault or another attack, the received packets could be malformed.
AI could misconstrue these malformed packets as an attack, or as the potential exploitation of a vulnerability. Web content filtering solutions today can easily make this mistake, even when something as simple as the source of the video stream is not recognized. Think this sounds crazy? This is actually what signature-based intrusion detection system (IDS) solutions do today. If intrusion prevention system (IPS) engines are empowered to take automated actions, the result could be to terminate the stream or, worse, attack back against the source.

The scenario of triggered automated responses outlined above is why even conventional warfare is locked down. Automated threat responses, especially offensive ones, should always be verified and never trusted as-is. Otherwise, the implications could be life-threatening.

While automation in many forms is helping IT and IT security solve issues of scalability and efficiency, due caution should always be given to technologies that offer full automation, especially of an offensive nature. This level of caution should be even higher for automation technologies and platforms governed by AI, where the logic behind a response may not even be explainable. And some highly sensitive areas of decision-making are probably always best left to humans, as imperfect as we are.

In reality, the Internet is fragile; using it for a cyber offensive or for hack-back initiatives is a terrible idea. Actions and reactions can rapidly spiral out of control, and AI with automation could make it substantially worse. In this security professional’s opinion, stick with the best defensive IT security technologies and avoid the hype, legal issues, and potential harm of taking an offensive cybersecurity posture. Leave the offensive approach to your government and its cybersecurity programs.

Morey J. Haber, Chief Security Officer, BeyondTrust

Morey J. Haber is the Chief Security Officer at BeyondTrust.
He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.
Source: https://www.beyondtrust.com/blog/entry/thinking-about-a-cybersecurity-offensive-beware-the-collateral-damage
VARCHAR2 is a variable-length alphanumeric data type. In PL/SQL, it may have a length of up to 32,767 bytes. When you define a VARCHAR2 variable in the DECLARE section, remember to terminate the line with a semicolon (;). A VARCHAR2 variable declaration has the form variable_name VARCHAR2(MAX_LENGTH);, where MAX_LENGTH is a positive integer, as in l_name VARCHAR2(30);. You can also set an initial or default value for the variable on the same line as its declaration in the DECLARE section of your program, using the assignment operator (:=). For example, l_name VARCHAR2(30) := 'ABRAMSON'; sets the value of the variable l_name to ABRAMSON.
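The declarations described above can be exercised in a small anonymous block. The maximum length of 30 is an illustrative choice, not a requirement:

```sql
DECLARE
  -- General form: variable_name VARCHAR2(MAX_LENGTH);
  -- with an optional default assigned on the same line via :=
  l_name VARCHAR2(30) := 'ABRAMSON';
BEGIN
  DBMS_OUTPUT.PUT_LINE(l_name);  -- prints ABRAMSON (with SERVEROUTPUT on)
END;
/
```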
Source: https://logicalread.com/oracle-12c-varchar2-data-type-mc05/
2017 was the year of hackers. From content management systems, e-commerce portals, and data breaches to hacked websites of financial institutions, cybercrime rises with every passing year. 2017 witnessed staggering cyberattacks: the massive financial data theft in the Equifax data breach, the WannaCry attack, 2017's deadliest ransomware, the Petya ransomware attack, and the notorious Yahoo data breach, to name a few.

How Are Websites Generally Hacked?

Google recently noted that it had seen a 32% rise in the number of hacked websites. The most common ways websites get hacked are:

- Missed security updates: Missing security patches and updates leaves software vulnerable to attackers. It is imperative to make sure that the web server software, CMS, plugins, and other essential software are all set to update automatically.
- Weak passwords: Compromised account credentials are one of the most rampant causes of hacked websites. Users often do not follow strong password rules, or they reuse the same password for multiple accounts, leading to compromised accounts. A hacker can guess your password via random guesses or by trying different variations.
- Data leaks: When data is mishandled or improperly uploaded, it can become available as part of a leak. A method called “Dorking” can use common search engines to find the compromised data.
- Insecure themes and plugins: Attackers commonly add malicious code to free versions of paid plugins or themes. While it is important to keep plugins and themes patched, it is just as essential to remove themes or plugins that are no longer maintained by their developers. Moreover, be wary of free plugins, or of plugins available through an unfamiliar website.

What Hackers Can Do With Hacked Websites

But once information is compromised, what can a hacker potentially do with this data?
While there is much speculation about this, getting into an attacker's shoes to understand his post-hack actions can help you minimize damage from potential future cyber attacks. A hacker's post-breach checklist is likely to contain the following actions:

Form a Stolen Data Inventory

Post-breach, hackers form an inventory of financial information like credit card details and authentication credentials, along with personal information like names, addresses, and phone numbers. Personal information of this kind is sold by hackers in bulk. Quartz estimates that a full set of someone's personal information, including identification number, address, birthdate, and possibly credit card info, can cost between $1 and $450 on the black market.

Filter out the good stuff

After creating an inventory of all breached authentication accounts, hackers look for the most lucrative ones, which can be sold for higher values than the rest. Information like government and military addresses is extremely valuable. Other valuable information includes company email addresses and passwords for large corporations. A hacker can sell these credentials to others on the dark web for a much higher price. Moreover, hacks from one website can lead to an information breach on another. For example, when Dropbox was breached in 2012, it was done using credentials stolen during the LinkedIn data breach the same year. Users often re-use the same passwords for multiple accounts, and this is something that a hacker usually exploits.

Sell the cards

Hackers generally sell financial information like credit card numbers in bundles of ten or a hundred to individuals with the right knowledge. This information is usually sold to a "broker" who then sells it to a "carder" without being detected. These "carders" could use stolen credit card information to purchase gift cards to stores or to Amazon.com, and then use those cards to buy physical items.
The gift card, in turn, is used to purchase high-value goods such as electronics, making the fraud difficult for companies to trace. By the time the company figures out the fraud and the cards are blocked, the criminal is in possession of the purchased goods. These packages are then usually shipped via a re-shipping scam: using legitimate channels such as Craigslist job listings, unsuspecting individuals are recruited as mules (re-shippers). The goods are then usually shipped outside the country, or directly to someone who purchases them from an auction site the fraudster has posted them to.

The hacker will also collect authentication credentials and sell them in bulk at a discounted price. Post-breach, the hacked company takes the necessary measures to repair the damage and change credentials, rendering most of the stolen credentials worthless. Hence, the hacker benefits from a quick bulk sale at a discount.

Also, check our video on Simple WordPress Security Tips to Follow in 2018. Download our Secure Coding Practices Checklist for Developers to reduce the chances of getting hacked. Worried about your website's safety in light of rampant online vulnerabilities? Contact Astra's Web Security Suite to learn more about protecting your site.
What is the National Cybersecurity Center of Excellence (NCCoE)?

The National Cybersecurity Center of Excellence (NCCoE), established in 2012, is part of the National Institute of Standards and Technology (NIST). The NCCoE is a collaborative group where industry groups, government agencies, and academic institutions work together on pressing cybersecurity issues. The NCCoE defines Cooperative Research and Development Agreements (CRADAs) among its members to develop standards- and best-practices-based cybersecurity solutions using commercially available technology, which are then documented in the NIST Special Publication 1800 series. The organization's goal is to bring together experts from industry, government, and academia to address the real-world needs of securing complex IT systems and protecting the nation's critical infrastructure.
Cybersecurity continues to be a growing concern as humans rely more and more on technology in their daily lives. Since the start of the COVID-19 pandemic, even more transactions, communications, and exchanges of personal information occur virtually than ever before. Thankfully, the majority of this happens without incident. However, there is a growing number of cyber scams and attack methods out there, so taking steps to protect yourself online should be a priority for all internet users. Some scams target information, such as personal data like addresses, social security numbers, and dates of birth. Others are designed to acquire and use payment information to make unauthorized purchases, often within minutes of a security breach. There are also attacks that transmit harmful or corrupted files or code, causing damage to hardware devices. Cyber scams and attacks are not only frustrating and time-consuming to fix; they often also lead to financial repercussions via fraudulent payments, damaged credit, and the like. Keeping in mind the most common types of cyber attacks, we've gathered a list of 5 tactics everyone should use to help maintain a healthy online presence and prevent cyberattacks and the financial and emotional impact that often comes with them. While this list certainly isn't all-inclusive, it is a great start for those looking to take their cyber security more seriously moving forward.

- Use Two-Factor Authentication Wherever Possible

If you've created any online accounts recently, you likely received an offer to activate two-factor authentication. Rather than just a traditional username and password needed to log in, this requires a second step. Generally, this is a code or link sent to the user via text or email. Entering the code or clicking the verification link then completes the login process. The extra step does make logging in more than a one-click process. For many, that seems like an avoidable nuisance.
However, the added security protects against bots and programs designed to decrypt your login information, as they are unable to complete the second step. When offered by a website, we always encourage users to take advantage of two-factor authentication to better protect their accounts. An extra 30 seconds at each login is nothing compared to a breach of your personal internet security.

- Don't Give Out Your Medicare Number for COVID-19 Test Kits

Because of the high demand for COVID-19 test kits over the past several months, those seeking these kits have been a common target for cyber scams. There are numerous companies offering test kits across the country, so it's sometimes difficult to tell which are legitimate and which are potentially fraudulent. A good rule of thumb is to avoid companies that ask for your confidential or financial details, especially your Medicare information. Also, check local government websites to find legitimate testing locations rather than relying solely on advertisements.

- Be Aware of Phishing Campaigns

Since nearly the beginning of email, phishing scams have existed and wreaked havoc on unknowing internet browsers. Phishing involves sending emails designed to fraudulently pose as reputable companies to get recipients to give out their personal information. The emails look and sound like the original business, so it's hard to tell the difference, except for perhaps small details in the sender address. A good habit is avoiding links sent through email campaigns, especially if you have any doubts about their authenticity. Going directly to the company's website means you know for sure that transactions are legitimate.

- Be Careful Using Public WiFi

This one takes many internet users by surprise, because free WiFi in public places is good, right? Yes, having internet access at your favorite restaurant or your gym is a convenience that most of us utilize at least once daily.
However, these networks are also accessible by online scammers. It’s smart to avoid sending confidential information or completing payment transactions using a public network. To be safe, rely on your own home (and password-protected) network for these. - Partner with a Team of IT Experts for Top Security IT and internet security can seem like overwhelming concepts, with the number of cases of fraud and stolen identity continuously increasing. Keeping up-to-date regarding common cyber scams and how to browse, communicate, and shop safely is a wise place to start. However, cybersecurity needs apply to more than just consumers and internet browsers. Businesses also need to protect themselves. A security breach with them could affect their customers’ security as well. For organizations wanting top protection for their business, employees, and customers – partnering with IT experts is often the best route for IT management. Industry experts can help develop and implement a plan to protect a company’s data – including confidential files, emails, and network communications. Cybersecurity Services From Kustura Technologies We understand the importance of both individuals and businesses protecting themselves against cyber attacks. Because most modern businesses utilize online communications or storage in some form, they are all vulnerable to data breaches – be that of confidential communications, invoices, or other sensitive information. And, it’s not just big corporations at risk. Small and mid-sized businesses are common targets of cyber scams – as are their customers. We love helping our business customers create and implement a cybersecurity plan to protect themselves and the information they share daily. Because each business differs when it comes to its IT and security needs, a tailored approach is often best. Our top priority is giving our customers peace of mind when it comes to managing their IT infrastructure and devices. 
If you’re looking for similar peace of mind at your organization – please contact us today! Contact us today to take advantage of this offer and get your FREE Cybersecurity Assessment.
CPU VS GPU

The basic idea:
- CPU = 1 to 32 cores with a lot of cache.
- GPU = less cache, but way more cores.

CPUs and GPUs are quite similar. They both process thousands of operations per second and have a noticeable impact on the computer's performance. So what is the difference between them?

CPU: The central processing unit of a computer, often referred to as the brain of the computer. It commonly has anywhere from 1 to 32 cores with a large cache, runs at a high clock speed, and is capable of quickly handling a wide variety of tasks that the computer throws at it. It can often have a graphics chip integrated right on, allowing it to also handle light graphical tasks. Some CPU-intensive tasks are compiling programs, data mining, financial/scientific modeling, and video encoding.

GPU: The graphics processing unit, generally a supporting unit for the CPU, optimized for handling image processing. It commonly has thousands of cores but very little cache, and generally runs at a lower clock speed than a CPU. Thanks to the very high core count, the GPU is very good at running thousands of small, simple calculations at once, which is exactly what is needed in graphics. GPUs are built on a Single Instruction Multiple Data (SIMD) architecture, in which the same sequence of operations is performed on a given set of data, so the data can be processed all together as a stream. Some GPU-intensive tasks are 4K/8K video, driving multiple high-resolution monitors, and 3D rendering.

In the end you need both devices, along with correct firmware, working as one to have a properly functioning system.
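The SIMD idea above can be shown with a toy sketch. Plain Python stands in for the hardware here; the function name and pixel values are purely illustrative:

```python
# Toy illustration of SIMD: the *same* instruction sequence (add, then clamp)
# is applied to every element of a data stream, which is why a GPU's thousands
# of simple cores suit graphics work so well.
def brighten(pixels, amount):
    # One operation, broadcast across the whole stream of pixel values.
    return [min(p + amount, 255) for p in pixels]

print(brighten([10, 200, 250, 0], 20))  # [30, 220, 255, 20]
```

A real GPU executes this "one instruction, many data elements" pattern in hardware across thousands of cores simultaneously, rather than looping as Python does.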
Data lineage describes how data transforms and flows as it is transported from source to destination, across its entire data lifecycle. It helps organizations get the full story behind their data so they can use their data to make impactful business decisions. Why is data lineage important? Data lineage is important because it ensures that an organization’s data is accurate and trusted. Without data lineage, business analysts have no visibility into the correctness of their data, and therefore, could be basing important decisions off of inaccurate and incomplete data. Data lineage enables business analysts to see where their data is coming from so that they can be sure they are using the right data to drive business decisions. It also helps IT and data engineers by automating lineage extraction so that they no longer have to manually map data lineage in Excel spreadsheets, therefore freeing up IT’s time for strategic initiatives. With complete data lineage, data engineers can quickly and easily identify the impact of any changes they are looking to make. More specifically, data lineage is important because it results in four key benefits that affect the entire business. Data lineage helps organizations in the following ways: - Comply with regulations - Automate data mapping efforts - Better understand and trust your data - Save time doing manual impact analysis Take a deeper dive into data lineage benefits and see how data lineage can enable your organization to become data driven. What is a data lineage tool? A data lineage tool automatically maps relationships between data points to show how data moves from system to system and how data sets are built, aggregated, sourced and used — providing complete, end-to-end lineage visualization. 
An enterprise-grade data lineage tool should include features such as:

- Automated lineage extraction: discover and extract lineage automatically from source systems for an end-to-end view of your data with visibility into full data context
- Summary business lineage: trace data flows with an interactive data map that shows summary lineage from data source to report
- Detailed technical lineage: view transformations, drill down into table, column, and query-level lineage, and navigate through your data pipelines
- Indirect lineage: view direct data flows across assets as well as participating indirect relationships that influence the movement of data, such as conditional statements and joins
- In-line context of code: easily identify and drill down into relevant table and column-level code within lineage diagrams
- Export lineage diagrams: extract lineage state diagrams in different file formats for reporting and regulatory purposes (PDF, PNG, CSV, etc.)

Data lineage use cases

Data lineage can help the Chief Data Officer comply with regulations. It helps the business analyst make more accurate decisions. And it helps IT spend less time manually mapping data and more time on strategic initiatives. In particular, data lineage can help a large enterprise with three distinct use cases:

Data lineage helps businesses comply with regulations such as BCBS 239, GDPR, and CCPA by providing a complete view into your data. This allows you to quickly create reports on the data so that you can provide a deeper understanding of the data for regulatory purposes. With automated mapping, you can show regulators where your data is across your organization, as well as in third-party data sources. This creates a complete view of your data for compliance purposes. Data lineage enables more accurate analytics and decision-making by providing important context around your data.
Business analysts can see upstream and downstream lineage to discover relevant data context, such as source changes and usage. With more context, business analysts can identify how a data asset was created and where it came from. This ensures that the data you use to make business decisions is accurate, complete and trustworthy. Data lineage also makes it easier to conduct an impact analysis at a granular level. Data lineage diagrams allow you to easily identify the upstream and downstream impacts of any particular change. You can drill down and see the impacts at a table, column or business report level.

Additional use cases

In addition to these three main use cases, data exploration and viability, rationalization and cloud migration, and asset management are three additional use cases where data lineage can help. Data exploration and viability allows you to improve discovery capabilities to ensure more accurate analytics and decision making. Rationalization and cloud migration is another big use case for data lineage: it helps with planning and executing data modernization initiatives (e.g., DWH to cloud) by identifying and documenting the critical data elements for cloud migration. Finally, data lineage can help with asset management by identifying the least and most usable (and certified) data assets across the enterprise. As these six use cases show, data lineage really helps across the entire enterprise. It ensures digital transformation by providing the necessary context to unlock the value of an organization's data.

Types of data lineage

There are two different types of data lineage: business lineage and technical lineage. Rudimentary data lineage solutions only have business lineage; more advanced data lineage tools have both. Business lineage provides only a summary view. It shows an interactive map that traces data flows from source to report.
Business lineage is an important tool for business analysts who want to see where their data is coming from to ensure they are using data from a reliable source, but do not want to be bogged down by every alteration in the data. In contrast, detailed technical lineage allows IT and data architects to view transformations, drill down into table, column, and query-level lineage, and navigate through their data pipelines. Together, business lineage and technical lineage provide a holistic view of an organization's data so that data citizens in all departments and roles can use data to make accurate business decisions.

How to use data lineage in your business

Without automated data lineage, IT must manually maintain lineage in Excel spreadsheets. This means IT must build the mappings and keep them up to date, which takes a massive amount of time, especially for enterprises with large amounts of data scattered across databases and systems. This waste of time can result in financial loss and impede innovation. With a data lineage tool, organizations can avoid this headache by automatically mapping the flow of data from source to destination. This gives the entire business visibility into where the data comes from, how it has been transformed, and its accuracy. As a result, automated data lineage frees up time for IT to focus on more strategic initiatives and helps the business make more informed decisions. Because of the visibility into data relationships provided by data lineage, business analysts will be able to ensure that trustworthy data is used in business analysis, building confidence in and extracting value from data across the organization.
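At its core, the impact analysis a lineage tool performs amounts to walking a directed graph of data flows. The sketch below illustrates the idea; the table and report names are hypothetical, and a real tool would extract the edges automatically from SQL and ETL code rather than registering them by hand:

```python
from collections import defaultdict

class LineageGraph:
    """Minimal directed graph of data flows from source to report."""

    def __init__(self):
        self.downstream = defaultdict(set)  # source -> assets derived from it
        self.upstream = defaultdict(set)    # asset  -> sources it came from

    def add_flow(self, source, target):
        # Record that `target` is derived from `source`.
        self.downstream[source].add(target)
        self.upstream[target].add(source)

    def impact_of(self, node):
        # Everything downstream of `node`: what is affected if it changes.
        seen, stack = set(), [node]
        while stack:
            for nxt in self.downstream[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

g = LineageGraph()
g.add_flow("crm.customers", "dwh.dim_customer")
g.add_flow("dwh.dim_customer", "report.quarterly_revenue")
print(sorted(g.impact_of("crm.customers")))
# ['dwh.dim_customer', 'report.quarterly_revenue']
```

Here, changing the `crm.customers` source would flag both the warehouse table and the report built on it, which is exactly the table-to-report impact view described above.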
Many of us take internet access for granted. But statistics tell a different story for some communities: a 2020 report from Common Sense and Boston Consulting Group found that nearly 30% of all K–12 public school student households are without either an internet connection or an adequate learning device. Many of these effects are felt unevenly. The report revealed that 18% of white households lack broadband, compared to 26% of Latino, 30% of Black and 35% of Native American student households. While the impacts of the digital divide were felt by many low-income students and those living in rural areas long before the pandemic, the abrupt switch to fully remote learning last spring created a new urgency to address a long-standing issue. Whereas in the classroom students had the same access to technology, distance learning eliminated this opportunity for a more equal playing field. In rural areas, the connection often isn't strong enough to stream classes and complete online homework assignments, especially if multiple people in the same household are trying to connect at once. And even if an area does have the bandwidth, many low-income families cannot afford the monthly subscription fees or the hardware devices necessary for remote learning. Students facing these challenges experience significant learning loss as a result. In the 2020 Insight Public Sector EdTech Forum, education IT pros discussed the ways they've worked to bridge the technology divide. Some schools have opened their networks, allowing students to visit campus for connection. Others have prioritized procuring more devices and distributing mobile hotspots. While the solution varies depending on the needs of the individual community, the panelists did agree on one thing: we must do more to address digital inequity. In the short term, many schools have focused on providing access to as many students as possible, a result of the desperate need created by the pandemic.
For long-term solutions, schools and communities intend to view wireless as a necessity, aiming to become their own service providers. This not only supports remote schooling, but also telehealth services and smart city initiatives. Like many others across the country, students in rural Hidalgo County, Texas, found themselves without the bandwidth necessary for successful learning in the switch to online instruction. Determined to reduce digital inequity, lawmakers allocated money from federal funding to a public Wi-Fi solution. This helped give students the connection needed to encourage learning, and the county was able to provide internet access to more than 30,000 students and teleworkers. As wireless connection becomes increasingly important for students and remote workers, community wireless broadband solutions like Hidalgo County’s play a critical role in providing large-scale, secure connection. And with the right system in place, residents will continue to benefit from these solutions long after the pandemic. In the spirit of education, it would be a loss to not evolve from the lessons learned over the past year of distance instruction. With greater attention to this issue, schools are finding new ways to adapt to remote and hybrid learning — and even developing innovative approaches to incorporate technology back in the physical classroom. New funding sources such as the CARES act and the American Rescue Plan Act are enabling school districts and communities to begin addressing the needs of underserved areas. The Federal Communications Commission’s Emergency Connectivity Fund in conjunction with the E-Rate program seeks to ensure that every student has the technology tools they need at home as well as on campus. With the expansive capabilities of today’s technology, the quality of a child’s education should not have to suffer due to a lack of resources. 
While certainly not an overnight fix, focusing on this issue and embracing available IT solutions is a critical step toward democratizing internet access and creating a more powerful education experience. Transforming the future of education for our children starts today with working to bridge the digital divide — one step at a time.
There’s a flood of connected devices making their way into our homes and businesses – a deluge exacerbated by recent holiday gifts and the latest and greatest from CES, where connected devices always take top billing. From mobile, wearables and car technology to advancements in smart homes, TVs and cameras, the tech world is awash with internet-connected devices. By 2020, it’s estimated that there will be more than 30 billion connected devices in the world – more than four times the earth’s population. Hackers are watching Tech-hungry consumers keep their eyes peeled for major device announcements. Also watching are distributed denial of service (DDoS) attackers who have made the Internet of Things (IoT) their weapon of choice. These nefarious actors exploit millions of vulnerable IoT devices to create sophisticated malware-based DDoS botnets they then use to initiate devastating attacks. IoT vulnerabilities give these hackers the ability to scale their attacks across tens of millions of devices and unique IP addresses. These new device announcements add more weapons to an already stocked arsenal of connected gadgets hackers have at their disposal that they can weaponise and leverage to launch DDoS attacks. If we’ve learned anything from the Mirai botnet’s path of destruction in late 2016, during which attackers hijacked more than 500,000 webcams to launch a DDoS attack topping 1 Tbps, and last year’s WireX and Reaper threats, it’s that bad actors will latch onto unsecured devices and use them to do their bidding. “Millions of unsecure, internet-enabled devices provide new threat vectors. 
Given the rapid proliferation of Internet of Things devices in advance of IoT-oriented security standards and configuration practices, expect these devices to be increasingly used as weapons for DDoS and other attacks,” said Adam Isles, principal at The Chertoff Group, a global advisory firm that provides security risk management, business strategy and merchant banking advisory services. IoT threats a growing concern among businesses According to a recent AT&T Cybersecurity Insights report, nearly a third (32 percent) of surveyed organisations said IoT-based DDoS attacks are their biggest future cybersecurity concern. AT&T found that more than a third (35 percent) of all its survey respondents say IoT devices were the primary source of a data breach experienced over the prior year. And the outlook for future IoT attacks remains bleak, with 68 percent of survey respondents saying they expect IoT threats to increase in the coming year. That said, AT&T found that 90 percent of organisations have conducted enterprise-wide cyber risk assessments in the past year, but only half (50 percent) have conducted risk assessments specific to IoT threats. Meanwhile, according to this Application Intelligence Report (AIR), distributed denial of service (DDoS) attacks took the top spot among cyberthreats against businesses, with more than one third (38 percent) of IT decision makers saying their company has suffered an attack at least once over the past 12 months, with another 9 percent noting they’re not aware whether they’ve been attacked or not. Frighteningly, that means that nearly half of IT professionals say their company has either been a victim of a DDoS attack or they don’t know if they’ve been a victim. AIR respondents, however, don’t fear IoT as much as they probably should. 
For example, AIR respondents rank laptops as the most vulnerable type of device, more so than smartphones and even more so than IoT devices, a misperception that, if exploited, could give hackers an inroad into corporate networks. This rash of IoT-based DDoS attacks, paired with the lack of awareness and the growing roster of IoT devices hitting the market, creates a potentially catastrophic cocktail of opportunity for savvy cyberattackers. The consensus: IoT-based DDoS attacks will grow in both bot size and traffic volume, mostly due to their use of vulnerable, poorly secured IoT devices. Contributing to those millions of vulnerable IoT devices will be this year's crop of marquee CES announcements and the myriad gadgets found under the Christmas tree.

Protection from IoT DDoS attacks

The rise of IoT DDoS attacks makes it imperative to rethink DDoS defences to thwart these sophisticated and often devastating threats. Here are key things to look for in an effective DDoS defence solution to ensure that IoT DDoS attacks can't take you down:

- DDoS defence solutions should be capable of detecting, mitigating and reporting on multi-vector DDoS attacks at the network edge and in centralised scrubbing centers, to scale to defend against colossal IoT-fuelled attacks
- DDoS defence solutions must differentiate botnet traffic from legitimate traffic and users, so services stay available when battling an attack
- DDoS defence solutions should include intelligence on known botnets and agents to defend networks against known threats
- DDoS defence solutions must scale yet maintain cost-efficiency
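At its very crudest, the "differentiate botnet traffic from legitimate traffic" requirement starts with per-source rate counting, as the toy sketch below shows. The threshold, log format and IP addresses are illustrative only; real DDoS mitigation layers in botnet reputation feeds, protocol anomaly detection and much more:

```python
from collections import Counter

def flag_suspect_ips(request_log, max_requests=100):
    # request_log: iterable of (source_ip, path) pairs seen in one time window.
    counts = Counter(ip for ip, _ in request_log)
    # Sources exceeding the per-window budget are candidates for mitigation.
    return {ip for ip, n in counts.items() if n > max_requests}

log = [("203.0.113.5", "/")] * 150 + [("198.51.100.9", "/")] * 3
print(flag_suspect_ips(log))  # {'203.0.113.5'}
```

A single chatty source is easy to spot this way; the difficulty with IoT botnets is precisely that the load is spread thinly across millions of compromised devices, which is why the intelligence and multi-vector capabilities listed above matter.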
As enterprises realize that data are among their most valuable assets, they are looking into technologies to protect them. To this end they use novel data management technologies, including data tokens. Data tokenization is a powerful technology for information security and privacy protection. Its uniqueness is that it was designed specifically to protect the highest-value enterprise assets, i.e., to ensure data security for sensitive enterprise data. Specifically, data tokenization is a process by which large amounts of data are broken down into smaller components known as tokens. The process also involves recording the data in an encrypted format, thus protecting them from entity-level risks. Hence, data tokens can be used to control access to databases, automate business functions and streamline compliance processes. Within data tokenization, encryption is used to replace sensitive data with a non-sensitive equivalent while still allowing the original, personally identifiable information (PII) to be accessible at the time of request. Data tokenization falls under threat mitigation strategies: it is often chosen to prevent or mitigate identity theft or insider threats in organizations like financial and healthcare institutions. For instance, tokenizing credit card numbers, driver's licenses and other government identifiers protects against fraud and enables part of the identity to be shared safely with third parties for uses such as marketing efforts, loyalty programs, discount cards or donations. Data tokenization replaces the identifying information in any specific data element with a token that represents it. This means that the original data must be re-created by whoever has access to the encrypted file in order to view and use it. Data tokenization has several benefits. Overall, data has become the prized asset, as business plans shift from simply generating revenue to data generation and usage.
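The token-swap mechanism described above can be sketched as a minimal vault. This is illustrative only: class and method names are invented here, and a production system would additionally encrypt the vault's storage, enforce access control, and audit every detokenization request:

```python
import secrets

class TokenVault:
    """Swap sensitive values for random tokens; only the vault can map back."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value):
        if value in self._value_to_token:      # same value -> same token
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)  # non-sensitive stand-in
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token):
        # In a real deployment this path would be gated by authorization checks.
        return self._token_to_value[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
print(t.startswith("tok_"), vault.detokenize(t) == "4111-1111-1111-1111")
# True True
```

The point of the design is that the token itself carries no sensitive information, so databases, logs and analytics pipelines can hold tokens freely while the original PII stays inside the vault.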
However, the main challenge that enterprises face is achieving proper security measures when operating with large data sets. Data tokenization is a relatively new concept that is gaining traction in the data security industry. Given the growing number of threats associated with big data, it would be prudent for companies to examine its merits and explore technologies for managing tokens effectively. Tokenization is a promising approach to boosting enterprise security, but many companies are still wary of adopting this new technology. They fear that replacing sensitive data with unique tokens will hinder business-critical processes like crafting Artificial Intelligence (AI) algorithms and maintaining databases. However, these fears are overwrought, since tokenization does not hamstring the business. In fact, it enhances security while allowing companies to continue conducting business as usual. Therefore, enterprises should consider the deployment and use of data tokenization solutions as part of their security policies and their business strategies. In this direction, companies should evaluate novel approaches for implementing tokenization mechanisms, including both conventional techniques and emerging approaches like blockchains and Non-Fungible Tokens (NFTs).
<urn:uuid:b3f1803d-83e3-4f39-bfa3-6197e15e2555>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/advantages-of-data-tokenization-for-enterprises/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00513.warc.gz
en
0.927797
845
2.96875
3
Cloud computing is the delivery of hosted services, including software, hardware, and storage, over the Internet. The benefits of rapid deployment, flexibility, low up-front costs, and scalability have made cloud computing virtually universal among organizations of all sizes, often as part of a hybrid/multi-cloud infrastructure architecture. Cloud security refers to the technologies, policies, controls, and services that protect cloud data, applications, and infrastructure from threats. Because the public cloud does not have clear perimeters, it presents a fundamentally different security reality. This becomes even more challenging when adopting modern cloud approaches such as automated Continuous Integration and Continuous Deployment (CI/CD) methods, distributed serverless architectures, and ephemeral assets like Functions as a Service and containers. Some of the advanced cloud-native security challenges and the multiple layers of risk faced by today’s cloud-oriented organizations include: The public cloud environment has become a large and highly attractive attack surface for hackers who exploit poorly secured cloud ingress ports in order to access and disrupt workloads and data in the cloud. Malware, Zero-Day, Account Takeover and many other malicious threats have become a day-to-day reality. In the IaaS model, the cloud providers have full control over the infrastructure layer and do not expose it to their customers. The lack of visibility and control is further extended in the PaaS and SaaS cloud models. Cloud customers often cannot effectively identify and quantify their cloud assets or visualize their cloud environments. Cloud assets are provisioned and decommissioned dynamically—at scale and at velocity. Traditional security tools are simply incapable of enforcing protection policies in such a flexible and dynamic environment with its ever-changing and ephemeral workloads.
Organizations that have embraced the highly automated DevOps CI/CD culture must ensure that appropriate security controls are identified and embedded in code and templates early in the development cycle. Security-related changes implemented after a workload has been deployed in production can undermine the organization’s security posture as well as lengthen time to market. Often cloud user roles are configured very loosely, granting extensive privileges beyond what is intended or required. One common example is giving database delete or write permissions to untrained users or users who have no business need to delete or add database assets. At the application level, improperly configured keys and privileges expose sessions to security risks. Managing security in a consistent way in the hybrid and multicloud environments favored by enterprises these days requires methods and tools that work seamlessly across public cloud providers, private cloud providers, and on-premise deployments—including branch office edge protection for geographically distributed organizations. All the leading cloud providers have aligned themselves with most of the well-known accreditation programs such as PCI 3.2, NIST 800-53, HIPAA and GDPR. However, customers are responsible for ensuring that their workload and data processes are compliant. Given the poor visibility as well as the dynamics of the cloud environment, the compliance audit process becomes close to mission impossible unless tools are used to achieve continuous compliance checks and issue real-time alerts about misconfigurations. The term Zero Trust was first introduced in 2010 by John Kindervag who, at that time, was a senior Forrester Research analyst. The basic principle of Zero Trust in cloud security is not to automatically trust anyone or anything within or outside of the network—and verify (i.e., authorize, inspect and secure) everything. 
Zero Trust, for example, promotes a least privilege governance strategy whereby users are only given access to the resources they need to perform their duties. Similarly, it calls upon developers to ensure that web-facing applications are properly secured. For example, if the developer has not blocked ports consistently or has not implemented permissions on an “as needed” basis, a hacker who takes over the application will have privileges to retrieve and modify data from the database. In addition, Zero Trust networks utilize micro-segmentation to make cloud network security far more granular. Micro-segmentation creates secure zones in data centers and cloud deployments, thereby segmenting workloads from each other, securing everything inside the zone, and applying policies to secure traffic between zones. While cloud providers such as Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) offer many cloud-native security features and services, supplementary third-party solutions are essential to achieve enterprise-grade cloud workload protection from breaches, data leaks, and targeted attacks in the cloud environment. Only an integrated cloud-native/third-party security stack provides the centralized visibility and policy-based granular control necessary to deliver the following industry best practices: Work with groups and roles rather than at the individual IAM level to make it easier to update IAM definitions as business requirements change. Grant only the minimal access privileges to assets and APIs that are essential for a group or role to carry out its tasks. The more extensive the privileges, the higher the level of authentication required. And don’t neglect good IAM hygiene, enforcing strong password policies, permission time-outs, and so on. Deploy business-critical resources and apps in logically isolated sections of the provider’s cloud network, such as Virtual Private Clouds (AWS and Google) or VNet (Azure).
Use subnets to micro-segment workloads from each other, with granular security policies at subnet gateways. Use dedicated WAN links in hybrid architectures, and use static user-defined routing configurations to customize access to virtual devices, virtual networks and their gateways, and public IP addresses. Cloud security vendors provide robust Cloud Security Posture Management, consistently applying governance and compliance rules and templates when provisioning virtual servers, auditing for configuration deviations, and remediating automatically where possible. A web application firewall will granularly inspect and control traffic to and from web application servers, automatically update WAF rules in response to traffic behavior changes, and be deployed closer to microservices that are running workloads. Enhance data protection with encryption at all transport layers, secure file shares and communications, continuous compliance risk management, and good data storage resource hygiene such as detecting misconfigured buckets and terminating orphan resources. Third-party cloud security vendors add context to the large and diverse streams of cloud-native logs by intelligently cross-referencing aggregated log data with internal data such as asset and configuration management systems, vulnerability scanners, etc. and external data such as public threat intelligence feeds, geolocation databases, etc. They also provide tools that help visualize and query the threat landscape and promote quicker incident response times. AI-based anomaly detection algorithms are applied to catch unknown threats, which then undergo forensics analysis to determine their risk profile. Real-time alerts on intrusions and policy violations shorten times to remediation, sometimes even triggering auto-remediation workflows.
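The micro-segmentation idea described above (secure zones with policies governing traffic between them) can be illustrated with a small default-deny policy check. The zone names, hosts, and allowed flows below are hypothetical examples for illustration, not taken from any vendor product.

```python
# Minimal sketch of micro-segmentation: workloads are grouped into zones,
# and an explicit allow-list governs traffic between zones. Anything not
# explicitly allowed is denied (default-deny, in the Zero Trust spirit).

ZONE_OF = {
    "web-01": "dmz",
    "app-01": "app",
    "db-01": "data",
}

# (source zone, destination zone, destination port) tuples that are allowed.
ALLOWED_FLOWS = {
    ("dmz", "app", 8443),   # web tier may call the app tier's API port
    ("app", "data", 5432),  # app tier may reach the database
}

def is_allowed(src_host: str, dst_host: str, dst_port: int) -> bool:
    """Evaluate a flow against the zone policy; unknown flows are denied."""
    flow = (ZONE_OF[src_host], ZONE_OF[dst_host], dst_port)
    return flow in ALLOWED_FLOWS

assert is_allowed("web-01", "app-01", 8443)     # permitted tier-to-tier call
assert not is_allowed("web-01", "db-01", 5432)  # web tier cannot skip to the DB
```

The point of the sketch is the default-deny stance: compromising one workload does not grant lateral movement, because only enumerated flows between zones are permitted.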
Check Point’s unified CloudGuard cloud security platform integrates seamlessly with the providers’ cloud-native security services to ensure that cloud users uphold their part of the Shared Responsibility Model and maintain Zero Trust policies across all the pillars of cloud security: access control, network security, virtual server compliance, workload and data protection, and threat intelligence.
<urn:uuid:3871b17c-8977-4546-b726-66dc45e6474e>
CC-MAIN-2022-40
https://www.checkpoint.com/cyber-hub/cloud-security/what-is-cloud-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00513.warc.gz
en
0.917707
1,476
2.640625
3
This article is essential for network administrators who need to understand the ARP protocol, its types and its usage in the networking field. It is particularly relevant for those working at Layer 2 and Layer 3 of the OSI model, since ARP acts as a key link between these two layers; hence it is imperative to understand the technology and its flavors. Broadly, the ARP flavors are listed below – - ARP - RARP - Proxy ARP - Gratuitous ARP When a computer in a LAN needs to send data to another device (computer or router etc.) it must first find the physical address (also called the MAC address) of the destination device. Generally, the IP address of the destination device is known to the source. This is where ARP comes into play. The ARP protocol makes a broadcast out to the network asking for the MAC address of the destination IP address. The machine with that IP address responds with its MAC address. ARP’s job is to discover and associate IP addresses with physical MAC addresses. ARP is used for mapping a network address (e.g. an IPv4 address) to a physical address like an Ethernet address (also named a MAC address). The ARP Request will contain: - Source IPv4 Address; - Source data-link identifier (MAC Address); - Destination IPv4 Address; - Destination data-link identifier (MAC Address). ARP was defined by RFC 826 in 1982. The ARP packet format contains the following fields – - Hardware Type – this is 1 for Ethernet. - Protocol Type – the protocol used at the network layer. - Hardware Address Length – this is the length in bytes, so it would be 6 for Ethernet. - Protocol Address Length – for TCP/IP the value is 4 bytes. - Operation Code – this code indicates whether the packet is an ARP Request or an ARP Response. - Sender’s Hardware Address – hardware address of the source node. - Sender’s Protocol Address – layer 3 address of the source node. - Target Hardware Address – used in a RARP request; the response carries both the destination’s hardware and layer 3 addresses.
- Target Protocol Address – used in an ARP request; the response carries both the destination’s hardware and layer 3 addresses. RARP is the opposite of ARP: it obtains an IPv4 address for a known MAC address. Hosts like diskless workstations have only their hardware interface addresses, or MAC addresses, but not their IP addresses. They must discover their IP addresses from an external source, usually via the RARP protocol. RARP is defined in RFC 903. RARP uses the same packet format as ARP and uses an Ethertype value of 0x8035 to indicate it being a RARP. A RARP Request will contain: - Source and Destination data-link identifiers (MAC Address in this example), which will be the local host’s MAC Address; - Source and Destination IP Addresses set to 0.0.0.0. Proxy ARP is a technique by which a Layer 3 device can respond to ARP requests for a destination which is not in the same network in which the sender resides. A router configured for Proxy ARP can respond to the ARP request with its own MAC address, mapped to the destination IP address, and fool the sending station into believing it has found its destination. Behind the scenes, the proxy router forwards the packets to the correct destination, since it has the relevant routing information. For example – Host A wants to send data to Host B, which is not on that network. Host A sends an ARP request to get a MAC address for Host B. The router replies to Host A with its own MAC address, addressing itself as the destination; hence, when Host A sends the data, it actually sends it to the gateway (as the destination MAC is given as the gateway’s MAC), which in turn forwards it to Host B. This is called Proxy ARP. RFC 1027 describes Proxy ARP. Gratuitous ARP is an ARP request for the host’s own IP address and is used to check for a duplicate IP address.
If there is a duplicate address then the stack does not complete initialisation. Generally, hosts on a network will send out a Gratuitous ARP when they are booting up their IP stack. Some of the primary use cases of Gratuitous ARP are listed below – - To update other devices’ ARP tables (when a device receives an ARP Request with an IP, the cache will be updated with the new information); - HSRP routers becoming Master or Active will send a Gratuitous ARP out on the network to update the cache tables of other devices; - To check for duplicate addresses, i.e. if the host receives a response, it’ll know that some other device is using the same IP address.
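The ARP packet format described earlier can be packed byte-for-byte with Python's standard library. This is an illustrative sketch: it builds the 28-byte ARP payload for an Ethernet/IPv4 request (operation code 1) but does not put it on the wire, which would additionally require a raw socket and Ethernet framing.

```python
import struct
import socket

def build_arp_request(src_mac: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Pack an ARP request following the RFC 826 field layout.

    Hardware type 1 = Ethernet, protocol type 0x0800 = IPv4,
    operation 1 = request. The target MAC is all zeros because it is
    exactly what the request is trying to discover.
    """
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                          # hardware type: Ethernet
        0x0800,                     # protocol type: IPv4
        6,                          # hardware address length
        4,                          # protocol address length
        1,                          # operation: 1 = request, 2 = reply
        src_mac,                    # sender hardware address
        socket.inet_aton(src_ip),   # sender protocol address
        b"\x00" * 6,                # target hardware address (unknown)
        socket.inet_aton(dst_ip),   # target protocol address
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
assert len(pkt) == 28  # fixed ARP payload size for Ethernet/IPv4
```

A gratuitous ARP would simply set both protocol address fields to the host's own IP; a reply would use operation code 2.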
<urn:uuid:02dc37ab-6a96-4795-9a2b-a64a1d40ab35>
CC-MAIN-2022-40
https://ipwithease.com/types-of-arp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00513.warc.gz
en
0.910094
1,022
3.96875
4
It seems that nearly every week, we hear about another major business that’s just been part of a data breach. Hackers have breached supposedly secure information, often taking sensitive and financial information of employees and customers. The problem is that we don’t know what computer system attacks look like until after they occur. If we could miraculously know ahead of time, we’d be able to automate all security defenses, eliminating any need for humans to analyze security. Often, the only way companies can detect attacks, even after one has occurred, is by knowing and understanding what’s happening in logs and other secure areas. There are many software products and techniques that offer insight into the health of your company’s security, but, as we’ll see, these often provide only a narrow view of the whole picture. Security and Log Management An important first step in establishing a security analysis protocol is managing your logs. Logs are computer-generated messages that come from all sorts of software and hardware – nearly every computing device has the capability of producing logs. The logs show, in detail, the varied functions of the device or application, as well as when users log in or attempt to log in. These logs are often text-based, and they can be stored in local or remote servers. Logs are also known as audit records, audit trails, and event logs. Log management systems (LMS) can be used for a variety of functions, including: collecting, centrally aggregating, storing and retaining, rotating, analyzing, and reporting logs. Companies primarily opt to store logs for security purposes. Reviewing these logs, whether before or after a security breach, is important in showing whether someone is an internal employee or an outside threat. After all, network and system administrators could look like hackers if one looks solely at the actions they regularly perform.
Regulation compliance and system and network management are also important reasons to maintain and manage logs. The importance of these logs isn’t in the logging itself. Instead, it’s the analysis of these logs that provides value. Logs are often used to detect weaknesses in security, and forward-thinking companies that employ strategic security analysts often are able to find and address these weaknesses before breaches can occur. The larger the company, however, the more logs there are. In fact, companies can easily produce hundreds of gigabytes in logs per day! With this much data to sort through, several issues can impede manual log management, including: - Volume and velocity – simply too much content, too quickly to view, let alone analyze - Normalization – logs can vary in output format, so time may be spent normalizing the data output - Veracity – ensuring the accuracy of the output As the size of logs continues to grow, and companies become increasingly vigilant about security analysis, log management alone isn’t enough – it’s only a component of a holistic solution. Security Information and Event Management Any number of software products offer a small window into the health of your security. For instance, an asset management system tracks only applications, business processes, and administrative contacts, and a network intrusion detection system (IDS) can only see IP addresses, packets, and protocols. Taken individually, these options cannot indicate what’s happening in real time to your network. Enter SIEM. Like log management, SIEM falls within the computer security field, and it includes both products and software that help companies manage security events and secure information. SIEM, though, is a significant step beyond log management. Experts describe SIEM as greater than the sum of its parts. Indeed, SIEM comprises many security technologies, and implementing SIEM makes each individual security component more effective.
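The normalization issue listed above can be made concrete with a short sketch that maps two hypothetical log formats onto one common schema. The formats and field names below are invented for illustration.

```python
import re

# Two made-up log formats from different devices, normalized into one
# common schema so downstream analysis can treat them uniformly.
SYSLOG_RE = re.compile(r"^(?P<ts>\S+) (?P<host>\S+) sshd: (?P<msg>.+)$")
CSV_RE    = re.compile(r"^(?P<ts>[^,]+),(?P<host>[^,]+),(?P<msg>.+)$")

def normalize(line: str) -> dict:
    """Map a raw log line onto {timestamp, host, message} regardless of source."""
    for pattern in (SYSLOG_RE, CSV_RE):
        m = pattern.match(line)
        if m:
            return {"timestamp": m.group("ts"),
                    "host": m.group("host"),
                    "message": m.group("msg")}
    raise ValueError(f"unrecognized log format: {line!r}")

a = normalize("2024-05-01T10:00:00Z fw01 sshd: Failed password for root")
b = normalize("2024-05-01T10:00:01Z,web02,Accepted password for alice")
assert a["host"] == "fw01" and b["host"] == "web02"
```

A real log management system handles dozens of such formats and timestamps in different time zones, but the principle is the same: one canonical record shape per event.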
In effect, SIEM is the singular way to view and analyze all of your network activity. The term, coined in 2005, originates from and builds on several computer security techniques, including: - Log management (LM), as previously described, which collects and stores log files from operating systems and applications, across various hosts and systems. - Security event management (SEM), which focuses on real-time monitoring, correlating events, providing overarching console views, and customizing notifications. - Security information management (SIM), which provides long-term storage, analysis, manipulation, and reporting on logs and security records. - Security event correlation (SEC), which tracks and alerts designated administrators when a peculiar sequence of events occurs, such as three failed login attempts under the same user name on different machines. Taken individually, these techniques cannot indicate what’s happening in real time to your network. By combining the best of these techniques, however, SIEM provides a comprehensive approach to security. Vendors may sell these as products and/or managed services, along with other security-related components. The most well-rounded SIEM products are those with the following capabilities: - The aggregation, analysis, and reporting of log output from networks, operating systems, databases, and applications - Applications that verify identities and manage access - Vulnerability management and forensic analysis - Policy compliance - External threat notifications - Customizable dashboards Benefits of SIEM Like log management, the goal of SIEM is security – and it is only as good as the data it accesses. But the advantages of a SIEM approach are its real-time analysis and its ability to connect disparate systems, unifying the information in one console. In essence, SIEM provides a wide, yet detailed view into your company’s security.
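The SEC example above (three failed login attempts under the same user name on different machines) is straightforward to express as a correlation rule. The event shape below is an assumption made for illustration.

```python
from collections import defaultdict

def correlate_failed_logins(events, threshold=3):
    """Flag users with `threshold` or more failed logins across distinct
    machines, the kind of sequence a security event correlation (SEC)
    rule would alert on. Each event is a (user, machine, outcome) tuple."""
    failures = defaultdict(set)
    for user, machine, outcome in events:
        if outcome == "FAIL":
            failures[user].add(machine)
    return [u for u, machines in failures.items() if len(machines) >= threshold]

events = [
    ("alice", "host1", "FAIL"),
    ("alice", "host2", "FAIL"),
    ("bob",   "host1", "OK"),
    ("alice", "host3", "FAIL"),
    ("bob",   "host1", "FAIL"),
]
assert correlate_failed_logins(events) == ["alice"]
```

A real SIEM adds time windows, severity scoring, and alert routing on top of rules like this, but the core idea is the same cross-source aggregation.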
SIEM means your security analysts can continue doing what they do best – analyzing security in real-time – instead of spending time learning every single product under the security umbrella.
<urn:uuid:90f237b0-1fa5-4b12-a3c0-3c206c989339>
CC-MAIN-2022-40
https://www.bmc.com/blogs/siem-vs-log-management-whats-the-difference/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00513.warc.gz
en
0.946017
1,183
2.859375
3
Originally posted on Data Center Post on August 14, 2014. A computer room air conditioning (CRAC) unit is a device that monitors and maintains the temperature, air distribution and humidity in a computer room or data center. As more equipment is squeezed into IT spaces, it is more important than ever to ensure that each component of the support infrastructure is operating at maximum efficiency and reliability. The failure of a CRAC unit can lead to downtime, which of course means a loss of service, and that can translate into a loss of money and oftentimes a loss of customers. There are a few important questions that facility and data center managers need to ask themselves to make sure they are on the right track in terms of timely and proper computer room air conditioning (CRAC) equipment maintenance; these include: What maintenance needs to be done on the equipment? When? How often? The need for reliable backup electric power in data centers, healthcare facilities, telecom sites and other critical facilities is too important to leave to chance. In emergency power generation applications, a gen-set must be ready to run during very critical times – to do this you need to send clean fuel to the engines. Emergency data center power generators that rely on diesel fuel are at constant risk of unexpected failure due to clogged fuel filters.
<urn:uuid:02703863-1973-496b-b11a-70f025ebf452>
CC-MAIN-2022-40
https://blog.eecnet.com/eecnetcom/topic/data-center-preventive-maintenance
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00713.warc.gz
en
0.959076
260
2.6875
3
When it comes to networking technology, we can’t overlook Ethernet switches, because they are an essential part of network communication. Connecting devices such as computers, routers, and servers together on a network, a switch selects a path for each transmission and forwards data toward its destination. Then you may ask: what is the switch definition in networking? What are the types of switches in networking? How do I choose a switch for my network? This passage will give you answers and suggestions. What Is the Switch Definition In Networking There is one question that confuses many people: what is a switch in networking? A switch, in the networking sense of the word, is a piece of high-speed network equipment used to connect devices together on a network and enable data transmission between them. It receives incoming data packets and redirects them to their destination on a local area network (LAN). In a LAN using Ethernet, a network switch determines where to send each incoming message frame according to the physical device address, also known as the Media Access Control address or MAC address. If a switch needs to forward a frame to a MAC address that is unknown to the switch, then the frame is flooded to all ports in the switching domain. Generally speaking, a data switch can create an electronic tunnel between the source and destination ports that no other traffic can enter for a short time. Switch Definition In Networking: Types of Switches In Networking The Ethernet switch is an essential part of any network. Generally speaking, Ethernet switches can be classified into two categories: the modular switch and the fixed switch. The modular switch has expansion ability and high flexibility: it makes it possible for you to add expansion modules as needed. It is much more complex than a fixed switch, so it also costs more.
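The forward-or-flood behavior described above can be modeled with a toy learning switch in Python. This is a conceptual sketch of MAC learning, not a model of any particular product.

```python
# Toy model of a learning switch: it records which port each source MAC
# was seen on, forwards frames whose destination MAC it already knows,
# and floods everything else out of all other ports.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}          # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port        # learn the source
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}     # known: forward
        return self.ports - {in_port}            # unknown: flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
# First frame: destination unknown, so it is flooded out of every other port.
assert sw.receive(1, "aa:aa", "bb:bb") == {2, 3, 4}
# Reply: the switch learned "aa:aa" is on port 1, so it forwards only there.
assert sw.receive(2, "bb:bb", "aa:aa") == {1}
```

Real switches age out table entries and handle broadcast addresses specially, but this captures why flooding happens only until the switch has learned where each MAC lives.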
The fixed switch isn’t expandable and has a fixed number of ports. Although it has less flexibility, it offers a lower entry cost. There are mainly three types of fixed switches in networking: the unmanaged switch, the smart switch, and the managed switch. The unmanaged switch is often used in home networks, small business offices or shops. It can’t be managed, so we can’t enable or disable its interfaces. Although it doesn’t provide security features, it can offer enough support if you use it in a small network of fewer than 5-10 computers. The smart switch is mainly used for business applications such as smaller networks and VoIP. It is suitable for small VLANs, VoIP phones, and labs. A smart switch lets you configure ports and set up virtual networks, but doesn’t offer the troubleshooting, monitoring, and remote-access capabilities needed to manage network issues. The managed switch is widely used in data centers and enterprise networks. It provides control, high levels of network security, and management, and is ideal for remote-access control and off-site round-the-clock monitoring. Managed switches can improve a network’s resource utilization and speed. Although it costs the most, it is worth the investment in the long run. How to Choose a Switch For Your Network? When you choose a switch for your network, you need to consider several factors at the same time. These factors include the number of ports, transmission speed, and stackable vs. standalone design. Most of the switches on the market have 4 to 48 ports. You need to consider the number of ports you’ll need according to the number of users and devices your network supports. The larger your organization is, the more ports you’ll need. Considering the possible expansion of your network and the possible increase in your user count, you should also prepare extra ports as part of a long-term plan.
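As a rough illustration of the port-planning advice above, the helper below projects device growth and rounds up to a common fixed-switch size. The growth model and the 8/16/24/48-port tiers are illustrative assumptions, not a sizing rule.

```python
import math

def recommend_port_count(current_devices, annual_growth_rate, years=3,
                         uplink_ports=2):
    """Rough sizing helper: project the device count forward, add uplink
    ports, and round up to a common fixed-switch size. The compound
    growth model and port tiers here are illustrative assumptions."""
    projected = current_devices * (1 + annual_growth_rate) ** years
    needed = math.ceil(projected) + uplink_ports
    for size in (8, 16, 24, 48):
        if needed <= size:
            return size
    return 48 * math.ceil(needed / 48)   # stack multiple 48-port switches

# 18 devices growing ~15% a year comfortably outgrows a 24-port switch.
assert recommend_port_count(18, 0.15) == 48
```

The headroom built in here reflects the article's point: buying only for today's device count usually forces a second purchase sooner than expected.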
There are various switches with different speeds, such as Gigabit Ethernet and 10GbE switches used at the edge of the network, as well as 40GbE and 100GbE switches used in the network core layers. When you determine the speed, the key factors to consider are the needs of your network users and future growth, such as how large the volumes of transferred data are and whether you require a faster link. Will your network grow larger? If your answer is yes, then you may choose a stackable switch. Standalone switches need to be configured individually, and troubleshooting also needs to be handled on an individual basis, while stackable switches allow multiple switches to be configured as one entity. With this advantage, you can save time and energy when managing stackable switches. Here I want to recommend FS.COM S3900 switches, which are stackable switches. In the above passage, we’ve explained how people define the switch in networking and analyzed the types of switches. Besides, this article offers some suggestions about how to choose a switch for your network. I believe that you now have a general idea about switch definitions in networking. If you need a little more help and advice with switch definitions in networking, then please do not hesitate to let us know. For purchasing a high-quality switch at a low cost or for more product information, please contact us at firstname.lastname@example.org.
<urn:uuid:6532f831-80dd-4122-ad3d-0afc00ad0961>
CC-MAIN-2022-40
https://www.fiber-optic-components.com/tag/10gbe-switch
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00713.warc.gz
en
0.930686
1,110
3.578125
4
Sometimes referred to as identity fraud, identity theft occurs when someone uses your personal information — such as your Social Security Number or driver’s license — to impersonate you. They may take over your open bank accounts, file fake tax returns, attempt to rent or buy properties, or do other kinds of illegal activities in your name. Identity theft is one of today’s many security concerns, both on and offline. Identity Theft Statistics Identity theft is, unfortunately, quite common. Below are some statistics on the most common types of identity theft over the last several years: - The number of credit card numbers exposed in 2017 totaled 14.2 million, up 88% over 2016 - Nearly 158 million Social Security numbers were exposed in 2017, an increase of more than eight times the number in 2016 - For 85% of identity-theft victims, the most recent incident involved the misuse or attempted misuse of only one type of existing account, such as a credit card or bank account - Children are often a target of identity thieves, because kids are unfamiliar with online safety, have Social Security Numbers, and have no credit history. Signs of Identity Theft There are several warning signs you should take notice of that may indicate that someone has stolen your identity. Note that the following things don’t automatically indicate that your identity was stolen, but if one or more of these things happen to you, it’s important to take a minute to make sure everything is alright. Unfamiliar Charges on Your Card If you see any charges on your credit or debit card statements that you don’t remember, that seem strange, or that you know you didn’t make, it could mean someone has compromised your identity. Don’t brush off unfamiliar charges! Some thieves make small purchases to see if they can get away with it, before moving on to bigger ones.
Calls from Debt Collectors If you receive calls or notices from debt collectors, but know that you don’t owe anything, it could mean that an identity thief stole your financial information and is purchasing things in your name without intending to pay for them. Unfamiliar Bills or Statements If you’ve been receiving statements for bills you don’t recognize or accounts you didn’t open, a thief may have stolen your identity and then opened a credit card or other account in your name. Missing Email or Mail If you’re missing important emails or mail, such as your bank statement or credit report, it could mean that a thief has stolen your identity. With the right information, they can change the mailing address on your account. If an identity thief compromises your email account, they can then access all other accounts linked to it. There are online solutions and mobile applications, like MySudo, that separate your online profiles so that if one is compromised, not all of your accounts are at risk. How to Prevent Identity Theft It may seem like identity theft is not a matter of “if” it happens, but “when” — but it isn’t inevitable. You should take the necessary precautions to avoid it. Safeguard Sensitive Information Always keep your sensitive information as private as possible. Don’t share your Social Security Number, driver’s license number, PIN number, passwords or any other personal information unless absolutely necessary. If you’re required to provide this information, such as when doing a walk-through of an apartment you are interested in, question whether they truly need it before giving it out. Monitor Your Finances Keep a close eye on all of your finances: your credit card reports, bank statements, and various accounts. If you know the state of your finances, you’ll know when something is wrong. Use Separate Profiles and Accounts Do not link up all of your online activity; if you do and one account is compromised, none of your other accounts are safe.
MySudo can generate entirely distinct email addresses and phone numbers, so when you sign up for different services, you’re better protected.

Create Stronger Passwords

Use password best practices at all times — update them regularly, use a unique password for each account, and make your passwords as strong as possible. This will make it more difficult for anyone who might want to infiltrate your various online accounts.

Be as careful as possible and pay close attention to all of your sensitive information and private accounts. If you know that your data was compromised — perhaps because of a data breach at a store you frequent — be even more alert. Know that despite your best efforts, identity theft still happens, and there are steps you can take to fix the situation if that does happen. The sooner you know there’s a problem, the sooner you can fix it.

In an age of increasingly large and frequent data breaches and compromised personal information, identity theft may feel like an inevitability to some people, even though it doesn’t have to be. Get familiar with products like MySudo to help keep your personal information secure and in the hands of the people it should be in – yours.
<urn:uuid:07ac7f2b-230b-4934-b8e7-150ea818bdfc>
CC-MAIN-2022-40
https://mysudo.com/2019/09/identity-theft-definition-identity-theft-prevention-mysudo/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00713.warc.gz
en
0.930695
1,060
2.90625
3
The Android Operating System (OS) is one of the most prominent IT developments of our time, as it is the OS that powers the most popular platform for mobile devices. Unlike its main competitor (Apple’s iOS operating system), Android is open source, which provides opportunities for creating many different customized versions of the operating system and allows companies to modify it as they wish. As a prominent example, device vendors like Samsung, HTC and Huawei create their own versions of the Android operating system by adding custom skins and associated user interfaces. In this way, they provide their own “look and feel”, which accompanies their devices and is in line with their branding strategy.

Apart from user interface changes, Android can be flexibly customized in terms of the features it offers to its users. This has given rise to different flavors of Android such as Stock Android, Android One and Android Go.

Stock Android is the most basic version of the operating system, corresponding to the flavor originally designed and implemented by Google. In other words, it is an unmodified version of Android, which some device manufacturers opt to install “as-is” on their devices. In mobile platform jargon, Stock Android is also called “vanilla” or “pure” Android. While customization is always appealing, having the original version with just the features developed by Google has several advantages over customized versions.

Despite these advantages of Stock Android, some people argue that they would go for a customized version, simply to get a friendlier user interface with better aesthetics and ergonomic features. Nevertheless, this is not always the case. There are times when the simple, minimalistic, yet functional design of stock Android is preferred over more complex user interfaces.
While this ends up being a matter of personal taste, we can safely assume that novice users will find it much easier to learn and navigate a stock Android environment than a more colorful, yet complex customized environment. The above-listed advantages of stock Android are the reason why it is used by device manufacturers beyond Google. For example, Nokia, Lenovo, and Essential provide devices that are based on stock Android versions.

Android One is a Google program aimed at providing a stock version for smartphones like the HTC U11 Life and Xiaomi Mi A1. It was originally conceived as an Android version that would target emerging markets (e.g., India, Indonesia, the Philippines, Nigeria, the Middle East, as well as other countries in Africa and South Asia), based on a basic set of capabilities that would run on simpler phones. Specifically, the motivation behind Android One was to help manufacturers develop low-cost, yet reliable devices. To this end, Android One emphasizes providing a stock Android experience, without add-ons. Beyond this initial motivation, Android One also serves some of Google’s current business goals.

For these reasons, Android One is now available in mature markets including the USA. Moreover, a significant number of devices are enrolled in the Android One program, including the Xiaomi Mi A1, Moto X4, GM5 and GM5 Plus, HTC U11 and more.

Android One can be seen as an interim step toward supporting entry-level or low-budget smartphones. It has since been succeeded by Android Go (or Android Oreo, Go edition), a cut-down version of the OS designed to run on entry-level smartphones. Its main components include the OS, the Google Play Store, and various Google apps, which are provided in a way that ensures a better experience on resource-constrained devices. Android Go targets smartphones with 512 MB to 1 GB of RAM, while occupying less space than other flavors of the OS.
As such, it is suitable for devices with 8–16 GB of storage, like most low-budget devices today. Optimizations do not stop at storage and memory: Android Go can run apps faster than other versions of the OS and optimizes data communications to reduce mobile data consumption. In terms of apps, Android Go optimizes the way they use the device’s resources, notably the available memory. The Go flavor comes with only nine (9) preinstalled apps, which include the “reduced” Go versions of Google’s popular applications like YouTube, Google Assistant, Google Maps, Gmail, Chrome and Google Play. Accessing these apps through their Go versions means that they can run faster, yet they may lack some features available in the fully-fledged versions of the apps.

Android Go has created a new market segment of mobile app developers, notably a segment focusing on optimized apps for resource-constrained devices. The main philosophy behind this market lies in developing apps that sacrifice some side functionality for the sake of resource usage efficiency.

Android’s open source nature creates unprecedented opportunities for innovating at the OS level to the benefit of all stakeholders, i.e. end-users, device manufacturers and Google itself. This is evident in our earlier analysis of the various flavors of Android. We hope this analysis helps you understand how you could benefit from selecting the most appropriate version and device for your tasks at hand.
<urn:uuid:73b06aa1-fe6a-4417-86df-cc82c004e87a>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/android-stock-one-or-go/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00713.warc.gz
en
0.933229
1,354
2.65625
3
What is data theft?

Data theft refers to the act of illegally obtaining digital information from an organization for financial gain or with the intent to sabotage the business' operations. Adversaries or even malicious employees can steal corporate data from secured file servers, database servers, cloud applications, or even from personal devices. There is a huge market for stolen personal data such as phone numbers, credit card information, work email addresses, and much more, which keeps malicious insiders and hackers motivated.

Data theft examples

Check out the top eight data breaches in recent history where millions of customers' personal data was exposed and the organizations faced severe backlash.

| Organization | Date | Records stolen | How it happened |
|---|---|---|---|
| CAM4 | March 2020 | 10.88 billion | An employee misconfigured the Elasticsearch production database, leaving it vulnerable. |
| Yahoo | October 2017 | 3 billion | A phishing scheme was used by the perpetrators to gain access to Yahoo's network. |
| Indian government (Aadhaar data leak) | March 2018 | 1.1 billion | India's national ID database was left exposed when a state-owned utility company left its network unsecured. |
| LinkedIn | June 2021 | 700 million | A hacker named God User scraped the data by exploiting LinkedIn's API. The data was put up for sale on the dark web. |
| Marriott (Starwood) | November 2018 | 383 million | Hackers probed and infiltrated Marriott's reservation system to steal customer data. |
| Myspace | June 2013 | 360 million | Perpetrators obtained user data by taking advantage of the obsolete password protection system that used unsalted SHA-1 hashes. |
| SocialArks | January 2021 | 214 million | A misconfigured Elasticsearch database left the server exposed online, leaving customer data without password or encryption protection. |
| Equifax | September 2017 | 148 million | Hackers exploited an unpatched vulnerability dubbed CVE-2017-5638 to hack into Equifax's customer complaint web portal. |

Impact of data theft

All data theft has devastating consequences. It leaves severe financial, operational, and reputational scars on a business. Most businesses that fall prey to data theft experience:

Crippling compliance penalties

Most data theft exposes the organization's non-compliance with data security mandates. Data protection authorities like those overseeing GDPR and HIPAA compliance penalize such negligence with steep fines.

Loss of reputation

Customers tend to lose trust in organizations that fall victim to data theft attempts. The damage to the brand name will last, and it might take years for the organization to rebuild.

Loss of productivity

Most organizations will go into damage control mode following data theft, bringing routine operations to a standstill until the damage is fully analyzed. This loss of productivity can result in huge financial repercussions.

Prolonged forensic analysis

Data theft is immediately followed by an in-depth forensic investigation by the organization looking into the origin of the breach, its impact, and more.

Data theft types

Data theft can be broadly classified into two categories, i.e., those caused by internal and external threats.
Data theft by insiders

Employees harboring illicit motives can attempt to steal sensitive personal data via USB drives, email, and other channels. Aside from willful insiders, it's negligent and careless employees who are the major cause of data breaches. These employees fall prey to phishing tricks and spam campaigns, or leave their critical servers unsecured or misconfigured.

Data theft by outsiders

Digital criminals are always on the lookout to exploit and steal from organizations with obsolete data protection standards, unpatched system vulnerabilities, and misconfigured cloud storage. They launch ransomware attacks, malvertisement campaigns, man-in-the-middle attacks, and more to infiltrate the organization's network.

Best practices to prevent data theft

Here are the most common best practices that an organization needs to exercise to reduce the risk of data theft.
- Control device usage by enforcing stringent endpoint security measures: enable safe usage of USBs, monitor data transfers, and more.
- Enforce the principle of least privilege (POLP) using an access management solution that will limit unwanted access to your sensitive information.
- Monitor employee activities to keep track of file access and modification patterns. Detect sudden anomalies in employee behavior to thwart potential data theft.
- Educate your end users regarding the various data security protocols to be followed and the consequences of violating them.
- Perform routine penetration testing to assess your critical systems for vulnerabilities and strengthen your organization's security posture.
- Deploy a fully integrated DLP solution that can locate, classify, and secure the use of sensitive personal data (PII/ePHI/PCI) in your organization.
<urn:uuid:765d5c09-5df4-47c0-98bd-ed5a16e15dcf>
CC-MAIN-2022-40
https://www.manageengine.com/data-security/what-is/data-theft.html?source=what-is
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00713.warc.gz
en
0.891938
1,011
3.171875
3
A new survey has revealed that a majority of US teens have had a good experience on social media platforms. A study conducted by the Pew Internet and American Life Project, the Family Online Safety Institute, and Cable in the Classroom found that about 93 percent of teens who use social media have an account on Facebook, according to a report on the website Ars Technica. Although many of the teenagers said they had a good experience on the platform, 22 percent of the respondents said a bad social media experience ended their friendship with someone. Meanwhile, 25 percent said that they had a social media experience that led to a real-world argument with someone. The survey also revealed that 8 percent of the teenagers had been bullied online, while 15 percent had been bullied via text messages.

"A Facebook profile can be the site of a budding romance or the staging ground for conflict," the survey says. "In the past, mediated interactions might have taken place via paper letter or a set of wires and a phone between the conversing partners. Now, all Internet users have access to a broader digital audience. And in this new environment, social norms of behaviour and etiquette are still being formed," it added.
<urn:uuid:4880c98f-3552-43a1-ab82-f84b474459a6>
CC-MAIN-2022-40
https://www.itproportal.com/2011/11/09/social-media-experience-overall-good-according-most-us-teens/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00713.warc.gz
en
0.972004
250
2.859375
3
💡 = Recently Updated
- 2018 Dec 26 – complete proofread, revised, and expanded
- 2018 Dec 24 – renamed NetScaler to Citrix ADC

Citrix ADC is NetScaler

Citrix renamed their NetScaler product to Citrix ADC. ADC is a Gartner term that means Application Delivery Controller, which is a fancy term that describes a load balancing device that does more than just load balancing.

This article assumes that you have already read the content in Part 1 – Request-Response, HTTP Basics, and Networking.

SSL/TLS Protocol – SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are two names for the same encrypted session protocol. SSL is the older, more well-known name, and TLS is the newer, less well-known name. The names can usually be used interchangeably, although pedantic people will insist on using TLS instead of SSL. In Citrix ADC, you’ll mostly see the term SSL instead of TLS.
- HTTP on top of SSL/TLS – HTTP itself does not support encryption. Instead, a Layer 6 SSL/TLS-encrypted Session is established between the Web Client and the Web Server. Then Layer 7 HTTP packets are sent across the Layer 6 SSL/TLS session. (image from wikimedia)
- SSL/TLS can carry more than just HTTP – for example, Citrix ICA Protocol is carried across an SSL/TLS session when ICA traffic is proxied through Citrix Gateway. There is no HTTP in this encrypted traffic. ICA and HTTP are two completely different protocols. Underneath both of them is the same Layer 6 SSL/TLS session protocol.
- HTTPS Protocol is the name that web browsers use to describe HTTP running on top of an encrypted TLS/SSL session. It’s the same HTTP data whether the underlying TCP connection is encrypted or not.
- Port 443 – HTTP over SSL/TLS uses a different port number than unencrypted HTTP. The SSL/TLS port number for HTTP data is usually TCP port 443, and is referred to as the https port number. The https port number will not accept HTTP Packets until the SSL/TLS encrypted session is established first.
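The well-known port numbers above are registered in the services database that ships with most operating systems. A minimal Python sketch (this assumes a standard services database, such as /etc/services, is present on the machine):

```python
import socket

# Look up the registered TCP ports for the http and https services.
http_port = socket.getservbyname("http", "tcp")
https_port = socket.getservbyname("https", "tcp")

print(http_port, https_port)  # 80 443
```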
- Web Server support for https – Web Servers must be explicitly configured to accept https traffic. Enabling SSL/TLS on a web server requires creation of a key pair, a certificate, and binding them to a TCP 443 listener. No certificate configuration is needed on the client side. Below is a screenshot of enabling https on an IIS Web site.
- https URL scheme – Users enter https://FQDN into a web browser’s address bar to connect to a web server using the SSL/TLS protocol on TCP 443.
- Disable http? – Once https is enabled on a web server, you can optionally disable clear-text HTTP over TCP 80. Or you can leave the TCP 80 listener enabled and configure it to redirect unencrypted HTTP TCP 80 connections to https TCP 443 encrypted connections.
- See SSL Redirect – Methods for information on how to configure an ADC application to redirect HTTP port 80 to HTTPS port 443.

SSL/TLS versions – There are several versions of the SSL/TLS protocol. Here are the versions from oldest to newest: SSLv2, SSLv3, TLSv1.0, TLSv1.1, TLSv1.2, and TLSv1.3.
- After SSLv3, the protocol was renamed to Transport Layer Security (TLS). All versions of TLS are newer than the SSL versions.
- Many networking products use the term SSL to refer to all versions of the protocol, including the TLS versions. For example, in Wireshark, the traffic filters use the word “ssl” instead of “tls”. Citrix ADC hosts “SSL” Virtual Servers, but not “TLS” Virtual Servers.
- TLSv1.3 is very new and only recently started being added to products. Citrix ADC 12.1 added support for TLSv1.3.
- TLSv1.2 is the current widely-deployed standard.
- In late 2018 and early 2019, industry security standards started to dictate that TLSv1.0 and TLSv1.1 should be disabled in all products and servers. PCI compliance already dictates that TLSv1.0 and TLSv1.1 must be disabled.
- SSLv3 is an old, vulnerable protocol, and must be disabled on all SSL Virtual Servers.
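The same version hardening applies to any TLS stack, not just the ADC. As an illustration, Python's ssl module can pin the protocol floor to TLSv1.2 so that SSLv3, TLSv1.0, and TLSv1.1 are never negotiated (a client-side sketch, not ADC configuration):

```python
import ssl

# Build a client context and refuse anything older than TLS 1.2,
# mirroring the "disable SSLv3/TLSv1.0/TLSv1.1" guidance above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # TLSv1_2
```

Any connection wrapped with this context will fail its handshake against a server that only offers the legacy protocol versions.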
- Default Citrix ADC SSL configuration enables SSL protocol versions SSLv3, TLSv1.0, TLSv1.1, and TLSv1.2. One of the first configuration steps that should be performed on every ADC SSL Virtual Server is to disable SSLv3, disable TLSv1.0, and disable TLSv1.1.
- Newer builds of Citrix ADC let you configure SSL defaults at the global level by enabling the “default SSL profile”. See SSL Profiles for details.
- SSLLabs.com can check your SSL Listener to make sure it adheres to the latest SSL security standards.

SSL Performance Cost – First, the SSL Client and SSL Server create an encrypted SSL Session by performing an SSL Handshake. Then HTTP is transmitted across this established SSL Session.
- SSL Handshake is expensive – Establishing the SSL session (SSL Handshake) is an expensive (CPU) operation, and modern web servers and web browsers try to minimize how often it occurs, preferably without compromising security. (image from wikimedia)
- Bulk encryption – The traffic on top of the established SSL Session is bulk encrypted, which has far less impact than initial SSL Session establishment.
- ADC Appliance SSL Specs – The Citrix ADC appliance model data sheets provide different numbers for SSL Transactions/sec (initial session establishment) and SSL Throughput (bulk encryption).

Public/Private key pair – A key pair is called a pair because the Public Key and Private Key are cryptographically linked together. Data encrypted by a Public Key can only be decrypted by one Private Key. Data encrypted by the Private Key can only be decrypted by one Public Key.
- Asymmetric Encryption – traffic encrypted by a Public Key cannot be decrypted by the same Public Key. Any traffic encrypted by the Public Key can only be decrypted by its paired Private Key. This is called Asymmetric because you encrypt with one key and decrypt with a different key. (image from wikimedia)
- Private key – The Private Key is called private because it needs to remain private.
You must make sure that the Private Key is never revealed to any unauthorized individual. If that were to occur, then the unauthorized person could use the Private Key to emulate the web server, and unsuspecting users might submit private data, including passwords, to the hacker.
- Hardware Security Module (HSM) – if you store the Private Key on an HSM, then it is not possible to export the private key from the HSM and thus nobody can see it. Any use of the private key occurs inside the HSM. Governments and other high security industries require private keys to be stored in HSMs.
- Public key – The Public Key is called public because it doesn’t matter who has it. The public key is worthless without also having access to the private key.

Key size – When you create a public/private key pair, you specify the key size in bits (e.g. 2048 bits). The higher the bit size, the harder it is to crack. However, larger key sizes mean exponentially more processing power required to encrypt and decrypt. 2048 is the current recommended key size, even for Certificate Authorities, which use the same key pair for many years. 2048 balances security with performance.

Symmetric Encryption – With Symmetric Encryption, one key is used for both encryption and decryption. Symmetric Encryption is far less CPU intensive than Asymmetric Encryption. In https protocol, bulk data encryption and decryption is performed by a symmetric key, not asymmetric keys. But a challenge with Symmetric Encryption is how to get both sides of the connection to agree on the one symmetric key.
- During the initial SSL handshake process, the Web Server’s public key is transmitted to the SSL Client.
- During the initial SSL Handshake, the SSL Client generates a Session Key for symmetric encryption, encrypts the generated Session Key using the Web Server’s Public Key, and then sends the encrypted Session Key to the Web Server. The Web Server uses its Private Key to decrypt the Session Key.
Now both sides have the same Symmetric key and they can use that Symmetric Key for bulk encryption and decryption.

Session Key size – the Session Key is much shorter than the private/public key pairs. For example, the Session Key can be 256 or 384 bits, while the public/private key pairs are 2048 bits. Because of their shorter size, Session Keys are much faster at cryptographic operations than public/private key pairs. The length of the Session Key depends on the negotiated cipher, as detailed later.
- Renegotiation – SSL Clients and SSL Servers will sometimes want to redo the SSL Handshake while in the middle of an SSL Session. This is called Renegotiation.
- Because the Session Key is relatively small, a new Session Key needs to be regenerated periodically (e.g. every few minutes or hours). Renegotiation is how the new Session Key is transmitted from one side of the connection to the other side.
- Without Forward Secrecy, if you take a packet trace, you can use the SSL Server’s Private Key to decrypt Session Keys in the packet trace, and use those decrypted Session Keys to decrypt the rest of the packet trace.
- With Forward Secrecy, even if the hacker had access to the server’s Private Key, the Private Key cannot be used to decrypt the Session Key, and thus the packet trace cannot be decrypted.
- DH vs RSA – Diffie-Hellman (DH) Key Exchange algorithm enables Forward Secrecy. RSA Key Exchange does not. ECDHE is the modern version of DH Key Exchange that is preferred by security professionals. To achieve Forward Secrecy (strongly recommended), prioritize ciphers that have ECDHE or DHE in their cipher names. Avoid RSA ciphers.
- Troubleshooting – if DHE ciphers are used, then a network administrator cannot decrypt a packet trace. Citrix ADC has some packet trace options that can save the traffic without encryption (SSLPLAIN), or it can save the SSL Session Keys in a file separate from the packet trace.
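The Diffie-Hellman agreement behind DHE/ECDHE can be sketched with toy numbers (illustration only: the values below are far too small for real use, and an XOR pad stands in for a real bulk cipher):

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman: each side keeps a private value and
# publishes g^private mod p. Combining the peer's public value with your
# own private value yields the same shared secret on both sides, and that
# secret never crosses the wire.
p, g = 4294967291, 5                    # tiny prime and base (demo only)
a = secrets.randbelow(p - 2) + 1        # client's ephemeral private value
b = secrets.randbelow(p - 2) + 1        # server's ephemeral private value
A, B = pow(g, a, p), pow(g, b, p)       # public values, sent in the clear

client_secret = pow(B, a, p)            # client combines server's public value
server_secret = pow(A, b, p)            # server combines client's public value
assert client_secret == server_secret   # identical shared secret on both sides

# Derive a symmetric session key from the shared secret and use the SAME
# key for both encryption and decryption (XOR pad as a stand-in cipher).
key = hashlib.sha256(str(client_secret).encode()).digest()
msg = b"bulk HTTP data"
ciphertext = bytes(m ^ k for m, k in zip(msg, key))
assert bytes(c ^ k for c, k in zip(ciphertext, key)) == msg
```

Because the private values a and b are ephemeral and discarded after the handshake, a recorded packet trace cannot be decrypted later, even with the server's long-term Private Key. That is exactly the Forward Secrecy property that ECDHE/DHE ciphers provide and RSA key exchange does not.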
Wireshark can use the Session Keys file to decrypt the packet trace.

Server Certificate – every Web Server that listens for SSL/TLS traffic must have a Server Certificate. Certificates are small text files that contain a variety of information including: the web server’s FQDN, the web server’s public key, the certificate’s expiration date, and a certificate signature created by a Certificate Authority.
- Keys and certificates are different things – The public/private key pair, and the server certificate, are two different things. First, you create a public/private key pair. Then you create a certificate that contains the public key.
- UNIX Key files and Certificate files – Citrix ADC is UNIX-based, which means that keys and certificates are stored in separate files.
- Windows private keys – Windows stores the private key separately from the certificate, but the location of the private key is not easily reachable by a Windows administrator. When you double-click a certificate on Windows, at the bottom of the certificate window is a message indicating if Windows has a separate private key for that certificate. To get access to the private key, you export the certificate with private key to a .pfx file. The password-protected .pfx file contains both the certificate and the private key.

SSL Handshake and Certificate Download – during the initial SSL Handshake, the Server’s Certificate is downloaded to the SSL Client. The SSL Client extracts the web server’s public key from the downloaded server certificate.

Certificates provide a form of authentication – The SSL Client uses the server’s certificate to authenticate the SSL-enabled web server so that the SSL Client only sends confidential data to trusted web servers. There are several fields in the SSL Server’s certificate that clients use to verify web server authenticity. Each of these fields is detailed later in this section.
- Subject and/or Subject Alternative Name must match the hostname entered in the browser’s address bar.
- CA Signature – the server’s certificate must be signed by a trusted Certificate Authority. SSL Clients have a mechanism for trusting particular Certificate Authorities.
- Validity Dates – the certificate must not be expired.
- Revocation – the certificate must not be revoked.

Types of certificates – there are different types of certificates for different use cases. All certificates are essentially the same, but some of the certificate fields control how a certificate can be used:
- Server Certificates – when linked to a private key, these certificates enable encrypted HTTP.
- Certificate Authority (CA) Certificates – used by an SSL client to verify the CA signatures contained within a Server Certificate.
- Client Certificates – used by clients to authenticate the client machine or user to the web server. Requires a certificate and private key on the client side.
- SAML Certificate – self-signed certificate exchanged with a different organization to authenticate SAML Messages (federation).
- Code Signing Certificate – developers use this certificate to digitally sign the applications they developed.

Digital Signatures – Signatures are used to verify that a file has not been modified in any way. A Hashing Algorithm (e.g. SHA256) produces a hash of the file. A Private Key encrypts the hash. When a machine receives the signed file, the receiving machine generates its own hash of the file. Then it decrypts the file’s signature using the signer’s Public Key and compares the hash from the signature with the hash that the receiving machine generated. If the computed hash matches the hash in the signature, then the file has not been modified.
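The hash-comparison step just described can be sketched in a few lines of Python (only the hashing side is shown; the asymmetric encryption of the hash is omitted):

```python
import hashlib

# The "signed" hash that a CA's signature would protect.
original = b"certificate contents"
signed_hash = hashlib.sha256(original).hexdigest()

# The receiver recomputes the hash of the file it received and compares
# it with the hash recovered from the signature.
received_ok = b"certificate contents"
assert hashlib.sha256(received_ok).hexdigest() == signed_hash

# Any modification, however small, produces a completely different hash.
tampered = b"certificate contentz"
assert hashlib.sha256(tampered).hexdigest() != signed_hash
```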
Server Certificate Signature – Server certificates are digitally signed by a trusted third party, formally known as the Certificate Authority (CA). This third party verifies that the organization that produced the server certificate actually owns the server that is hosting the certificate. - Publicly-signed Certificates are not free – For public CAs, the owner of the server certificate pays the public CA to sign the server’s certificate. - CA uses the CA’s Private Key to sign the server certificate – The CA generates a hash of the server certificate and signs the hash using the CA’s Private Key. - Verify CA signature – The SSL Client extracts the Issuer field from the server certificate and matches the Issuer name with one of the CA certificates installed on the SSL Client machine. The SSL Client then extracts the public key from the locally installed CA certificate and uses that CA’s public key to verify the signature on the server certificate. If the hashes match, then the server certificate is “trusted”. CA certs are installed on SSL clients – CA’s pay Microsoft, Chrome, Mozilla, Apple, etc. to install their Root CA Certificates on client machines. - Out-of-band installation – Root CA Certificates are always installed out-of-band and are not delivered by the web server during the SSL Handshake. If a CA certificate is missing from the SSL Client machine, then the user must install it manually by using a local Certificates management console, or through some other administrator-controlled process (e.g. group policy). - Trust is defined by locally installed CA certificates – Any server certificate signed any of the CA certificates installed on a SSL Client machine are automatically trusted. - Stop trusting one CA – Sometimes a Certificate Authority is compromised, or doesn’t do a good enough job of verifying server certificate ownership. To stop trusting certificates signed by a bad CA, simply uninstall the CA’s certificate from the SSL Client machine. 
Private Certificate Authority – instead of paying a public certificate authority to sign your server certificates, you could easily build your own private Certificate Authority. Windows Servers have a built-in role called Certification Authority. Or you can use your Citrix ADC as a private certificate authority.
- Private CA Root Certificate Installation – the Root CA certificate for the private certificate authority is not installed on client machines by default. Instead, the administrator of the client machines must distribute the private CA Root Certificate using group policy or some other out-of-band installation method.
- Private CA-signed server certificates are commonly used on internal servers where administrators have full control of the client machines. But if any non-managed client machine needs to trust your private CA-signed server certificates, then it’s usually easier to purchase a public CA-signed server certificate because the public CA root certificates are already installed on all client machines.
- Cannot purchase public CA-signed server certificates for non-routable DNS names – each server certificate contains the FQDN that users use to access the web server. If the server’s FQDN ends in one of the Internet DNS Top Level Domains (TLDs), like .com, then you can pay a public CA to sign your certificate. However, if the FQDN ends in a private DNS suffix, like .local, then you cannot purchase a public CA signature and instead you must build your own private CA to sign the server certificate.

Self-signed certificate – Instead of using a third-party CA’s private key to encrypt the certificate file hash, it’s also possible for a certificate to use its own key pair to sign its own certificate. In this case, the Issuer of the certificate and the Subject of the certificate are the same, and this is called a Self-signed certificate. Most web browsers will not accept self-signed server certificates for SSL/TLS connections.
But Self-signed certificates are useful for other purposes, including Root CA certificates and SAML certificates. - Self-signed certificates and internally-signed certificates are two different things – If you build your own private Certificate Authority and use the private CA to sign your internal server certificates, then the internal server certificates are not self-signed, but rather they are "internally-signed" or "private-signed". Self-signed certificates have the same value for Issuer and Subject. "Private-signed" certificates have different values for Issuer and Subject because the Issuer is a private CA and not the certificate itself. CA Chain – The server certificate can be signed by one CA certificate, or the server certificate can be signed by a chain of CA certificates. - Root CA Certificate – the top of the CA certificate chain is the Root CA certificate. The Root CA certificate is self-signed. If the Root CA certificate is installed on an SSL Client machine, then all CA certificates that chain to the root are trusted. - Intermediate CA certificates – In the CA Signature Chain, between the Server Certificate and the Root CA Certificate, are Intermediate CA Certificates. The Root CA certificate almost never directly signs server certificates. Instead, the Root CA certificate signs an Intermediate or Issuing CA Certificate, and the Issuing CA Certificate signs the server certificate. - Intermediate CA certificates are not installed on SSL client machines – Instead, Intermediate CA certificates must be transmitted to the SSL Client from the SSL Server during the SSL Handshake. - If the Intermediate CA certificate is not transmitted during the SSL Handshake, then the CA chain is broken – when the SSL Client receives the server certificate, it extracts the server certificate's Issuer field and looks for a CA certificate that matches that Issuer name.
Since the server certificate's Issuer is an Intermediate CA certificate and not a Root CA certificate, and since Intermediate CA certificates are not installed on client machines, the client machine won't find a matching CA certificate unless one was sent by the SSL Server. - Link Intermediate CA Certificate to Server Certificate – On a Citrix ADC, you install the Server Certificate under the Server Certificates node. You also install the Intermediate CA certificate under the CA Certificates node. Then you right-click the Server Certificate and link it to the Intermediate CA Certificate. During the SSL Handshake, the Citrix ADC transmits both the server certificate and the intermediate CA certificate. - IIS and Intermediate CA certificates – On IIS, you simply install the Intermediate Certificate in the web server computer's Intermediate CA certificate store. IIS (Windows) automatically knows which intermediate CA certificate goes with the server certificate and transmits both during the SSL Handshake phase. - The Root CA certificate must never be transmitted by the SSL Server during the SSL Handshake – you might be tempted to install the self-signed Root CA certificate on the Citrix ADC and then link it to the intermediate CA certificate. Don't do that. Only the Intermediate CA certificates, not the Root, should be transmitted from the ADC. The Root CA certificate is already installed on the client machine through an out-of-band operation (e.g. included with the operating system). If the Root CA certificate could be transmitted by the SSL Server, then the entire third-party trust model would be broken. - If the root certificate is transmitted during the SSL Handshake phase, then SSL Labs will report this as "Chain issues: Contains anchor". To create a certificate – The process to create a certificate is as follows: - Create keyfile – Use OpenSSL or the Citrix ADC GUI to create an RSA public/private key pair file.
- Keypairs created by OpenSSL must be converted to RSA format. - ECDHE ciphers and DHE ciphers use RSA keypairs. - Create Certificate Signing Request (CSR) – Use OpenSSL or the Citrix ADC GUI to create a Certificate Signing Request (CSR). The CSR contains the public key and several other fields, including the server's DNS name and the name of the Organization that owns the server. - Send CSR to CA – Send the CSR to a Certificate Authority (CA) to get a CA signature. Public CAs usually charge a fee for this service. - CA verifies ownership – The CA verifies that the Organization Name specified in the CSR actually owns the server's DNS name. - One method is for the CA to email somebody at the organization that owns the web server. - More stringent verifications include background checks against the organization's DUNS number. Higher verification usually requires a higher fee. - CA signs the certificate – If owner verification is successful, then the CA signs the certificate and sends it back to the administrator. The CA can use a chained Intermediate CA Certificate to sign your Server certificate. - Complete certificate request – The administrator installs the signed server certificate and links it with the key file. In IIS, this is called "Complete Certificate Request". In Citrix ADC, when installing a cert-key pair, you browse to both the signed certificate file and the key file. - Create a TCP/SSL 443 listener and bind the certificate to it – Configure the web server to use the certificate. In IIS, add an https binding to the Default Web Site and select the certificate. In Citrix ADC, create an SSL Virtual Server and bind the certificate-key pair to it. Certificate/key file storage on Citrix ADC – On Citrix ADC, certificate files and key files are stored in /nsconfig/ssl. Certificate File Format – There are several certificate file formats: Base64, DER, and PFX. - Base64 is the default encoding for certificate files on UNIX/Linux systems, including Citrix ADC.
This format is also known as PEM. Base64 files look like text files whose first line says: "-----BEGIN CERTIFICATE-----" - DER Format – On Windows, if your certificate doesn't have a private key associated with it, then you can save the certificate to a file in DER format. DER format looks like a binary file, not a text file. - Newer versions of ADC can automatically detect that a certificate file is in DER format. - Older versions of ADC require you to indicate that the certificate file is in DER format instead of Base64. - PFX format – On Windows, if you export a certificate with its private key, then both are stored in a password-protected file with a .pfx extension. This file format is also known as PKCS#12. - Newer versions of Citrix ADC can directly import .pfx files. - Citrix ADC can also use OpenSSL to convert a .pfx file into a Base64 PEM file that contains both the certificate and the RSA key. Private Keys should be encrypted – the key file that contains the Private Key should be encrypted, usually with a password. On Citrix ADC, when creating a key pair, you enable PEM Encoding and set it to 3DES (Triple DES) or AES-256 encryption. OpenSSL asks you to enter a permanent password to encrypt the private key. You will need this password whenever you install the certificate-key pair. - When creating an RSA key file, specify a PEM Encoding Algorithm and Passphrase – specifying a PEM Encoding Algorithm and Passphrase is optional, but password-protecting the .key file is recommended. - When converting a PFX file to PEM, encrypt the PEM private key – Specify a PEM encoding password when performing this conversion. - Hardware Security Module (HSM) – Hardware Security Modules (HSMs) are physical devices that destroy their contents if there's any attempt at physical compromise, which makes them the perfect place to store your private keys. The Citrix ADC FIPS appliances include an HSM module inside the FIPS appliance.
Or, you can connect a Citrix ADC to a network-addressable HSM appliance. The HSM performs all private key operations so that the private key never leaves the HSM device. - Smart Card – Smart Cards require the user to enter a PIN to unlock the smart card before the private key can be used, similar to an HSM. Smart Cards are typically used with Client Certificates, which are detailed later. Subject field – One of the fields in the Server Certificate is called Subject. Look for the CN= section of this field. CN means Common Name, which might be familiar to LDAP (e.g. Active Directory) administrators. The Common Name is the DNS name of the web server. - Public CAs require FQDNs, not short DNS names – Public CAs require that the Common Name in your Subject field be an FQDN and not a short DNS name (the left part of the DNS name before the first dot). Internal web servers are frequently accessed using short DNS names instead of FQDNs. Certificates for short DNS names can only be acquired from a private CA. Subject Alternative Names – Another related certificate field is Subject Alternative Name (SAN). The Subject field (Common Name) only supports a single DNS name. Subject Alternative Name supports as many DNS names as desired. - Public CAs charge extra for each additional SAN Name. When you submit a CSR to a public CA, the public CA gives you an opportunity to enter more SAN Names. The more SAN Names you add, the higher the cost of the certificate. - CSRs ask for Common Name, not SAN names – When creating a CSR, put your server's primary FQDN in the Common Name (Subject) field. When you submit the CSR to a Public CA, the Public CA will automatically copy your Common Name into the Subject Alternative Name field. - OpenSSL lets you include SAN Names in the CSR. However, most CAs ignore any SAN Names in the CSR. - Microsoft CA does not support SAN Names by default –
Microsoft CA will not copy the Common Name to the Subject Alternative Name. Search Google for instructions to enable Microsoft CA to accept manually entered SAN Names. URL Hostname Matching against the Server Certificate's Subject and Subject Alternative Name (SAN) fields - User enters URL Hostname in browser address bar – The user opens a browser and enters https://hostname in the browser's address bar to connect to a web server. The hostname is extracted from the entered URL and then matched against the Common Name field or Subject Alternative Name field of the downloaded server certificate. If the entered hostname doesn't match the certificate's Common Name or Subject Alternative Name field, then a certificate error is displayed. To avoid certificate errors, your server certificates must have a Subject Common Name or Subject Alternative Name that matches the DNS names that users will use to browse to your SSL web server. - Server Name Indication (SNI) – If the web server is hosting multiple SSL websites on one IP address, which server certificate should be sent to the client so the client can do its URL hostname matching? In newer versions of browsers (Windows XP is too old) and web servers, Server Name Indication (SNI) was added to the SSL handshake. The SSL Client now sends the URL hostname from the browser's address bar to the web server so the web server can select a certificate to send back to the SSL Client. - Chrome only accepts Subject Alternative Names, not Common Name – Chrome recently started ignoring the Subject (Common Name) field of the server certificate and now only does hostname matching against the Subject Alternative Name (SAN) field of the server certificate. - SAN Names let one certificate match multiple FQDNs – You might have an SSL Server listening on several FQDNs. Instead of buying a separate certificate for each FQDN, purchase one certificate with multiple SAN Names.
- If your web server is reachable at both company.com and www.company.com, then add both FQDNs to the SAN Names field of the server certificate. Wildcard certificate – The Certificate Common Name can be a wildcard (e.g. *.company.com) instead of a single DNS name. This wildcard matches all FQDNs that end in .company.com. - The wildcard only matches one word and no periods to the left of .company.com. It will match www.company.com, but it will not match www.gslb.company.com, because there’s two words instead of one. It will also not match company.com because it requires one word where the * is located. - Wildcard certificates cost more from Public CAs than single name certificates and SAN certificates. - Wildcard certificates are less secure than single name certificates, because you typically use the same wildcard certificate on multiple web servers, and if the wildcard certificate on one of the servers is compromised, then all are compromised. - Public CAs will automatically copy the wildcard *.company.com into the Subject Alternative Name field. For Microsoft CA, you must manually specify *.company.com as a Subject Alternative Name, assuming you enabled Microsoft CA to accept Subject Alternative Names. Validity Dates – Another field in the certificate is Valid to (expiration date). If the date is expired, then the client’s browser will show a certificate error. When you purchase a certificate, it’s only valid between 90 days and 3 years, with CAs charging more for longer terms. - Expiration Warning – Citrix ADM (Application Delivery Management) can alert you when a Citrix ADC certificate is about to expire. Public CAs will also remind you to renew it. - Renew with existing keys? – When renewing the certificate, you can create a new SSL key pair, or you can use the existing key pair. If you create a new key pair, then you need to create a new CSR and submit it to the CA. 
If you intend to use the existing keys, then simply download the updated certificate from the CA after paying for the later expiration date. - Let's Encrypt issues certificates with 90-day expiration – Let's Encrypt is a free public CA that is fully automated. Certificates issued by Let's Encrypt expire 90 days after issuance. Shorter expiration periods are more secure than longer expiration periods. Because of the short expiration periods, you must fully automate the Let's Encrypt CSR generation, CA signature, and certificate installation process. Certificate Revocation – certificates can be revoked by a CA. The list of revoked certificates is stored at a CA-maintained URL. Inside each SSL certificate is a field called CRL Distribution Points, which contains the URL to the Certificate Revocation List (CRL). Client browsers download the CRL to verify that the SSL server's certificate has not been revoked. Revocation is usually necessary when the web server's private key has been compromised. - CAs might revoke a certificate when Rekeying – if you rekey a certificate, the certificate with the former keys might be revoked. This can be problematic for wildcard certificates that are installed on multiple machines, since you only have a limited time to replace all of them. Pay attention to the CA's order form to determine how long before the prior certificate is revoked. - Online Certificate Status Protocol (OCSP) – An alternative form of revocation checking is Online Certificate Status Protocol. The address of the OCSP server can be found in the certificate's Authority Information Access field. A Cipher is a collection of algorithms that dictate how session keys are created and how the bulk encryption is performed. Cipher Suites are negotiated – During the SSL Handshake, the SSL Client and the SSL Server negotiate which cipher algorithms to use.
A detailed explanation of ciphers would require advanced mathematics, but here are some talking points: - Recommended list of ciphers – Security professionals (e.g. OWASP, NIST) publish lists of recommended, adequately secure cipher suites. These ciphers are chosen because of their very low likelihood of being brute-force decrypted within a reasonable amount of time. - Higher security means higher cost. For SSL, this means more CPU on both the web server (Citrix ADC) and the SSL Client. You could go with higher bit-size ciphers, but they require exponentially more hardware. - Each cipher suite is a combination of cipher technologies. There's a cipher for key exchange. There's a cipher for bulk encryption. And there's a cipher for message authentication. So when somebody says "cipher", they really mean a suite of ciphers. Ephemeral keys and Forward Secrecy – ECDHE ciphers are ephemeral, meaning that if you took a network trace, then even with the web server's private key it still wouldn't be possible to decrypt the captured traffic. This is also called Forward Secrecy. - DHE = Ephemeral Diffie-Hellman, which provides Forward Secrecy. - EC = Elliptic Curve, which is a formula that allows smaller key sizes, and thus faster computation. Ciphers in priority order – The SSL Server is configured with a list of supported cipher suites in priority order. The top cipher suite is preferred over lower cipher suites. - The highest common cipher between SSL Server and SSL Client is chosen – When the SSL Client starts an SSL connection to an SSL Server, the SSL Client transmits the list of cipher suites that the SSL Client supports. The SSL Server then chooses the highest matching cipher suite. If the two sides don't support any of the same cipher suites, then the SSL connection is rejected. Citrix-recommended cipher suites – See Scoring an A+ at SSLlabs.com with Citrix NetScaler – Q2 2018 update for the list of cipher suites that Citrix currently recommends.
Every SSL Virtual Server created on the Citrix ADC should be configured with this list of cipher suites in the order listed. - GCM ciphers seem to be preferred over CBC ciphers. - EC (Elliptic Curve) ciphers seem to be preferred over non-EC ciphers. - Ephemeral ciphers seem to be preferred over non-Ephemeral ciphers. - This list does not include TLS 1.3 ciphers, so you'll need to manually add the TLS 1.3 ciphers to the cipher group. Client Certificate – Another type of certificate is the Client Certificate. This is a certificate with a private key, just like a Server Certificate. Client Certificates are installed on client machines, and usually are not portable (the private key can't be exported). Client machines use the Client Certificate to authenticate the client to web servers. This client certificate authentication can simply verify the presence of the certificate on a corporate-managed device. Or the user's username can be extracted from the client certificate and used to authenticate the user. Client Private Key – The client certificate authentication process requires access to the paired client private key. If the paired client private key is not accessible, then the client certificate cannot be used for client authentication with the web server. The client private key can be installed on a particular machine (e.g. in the machine's Trusted Platform Module), or the client private key can be portable (e.g. on a Smart Card). - Smart cards have a client certificate and private key installed on them. Users enter a PIN to unlock the smart card before the client certificate's private key can be used. Smart Cards eliminate needing to enter a password to authenticate with a web server. - Virtual Smart Cards – There are also virtual smart cards, which use hardware features of the client device to protect the client certificate. The client's TPM (Trusted Platform Module) encrypts the private key.
If the client certificate were moved to a different device, then the new device's TPM wouldn't be able to decrypt the private key. Windows Hello for Business and Passport for Work are examples of this technology. Device Certificates and User Certificates – some client certificates, called device certificates, are assigned to the machine to identify the compliance status of the machine. Other client certificates are assigned to the user so the user can use the user certificate to authenticate with web-based services. - Multi-factor authentication with user certificates – The user certificate can be the only authentication method (password-less), or the user certificate can be provided along with additional authentication material for multi-factor authentication. Password-less authentication – Client Certificates are used extensively in "password-less" authentication scenarios. For example, your client device can be joined to a cloud-hosted device management service, like Azure Active Directory or Intune. Once enrolled, the cloud device management service pushes down a client certificate that is used for future authentications to the cloud service. The cloud device management service also pushes down security policies that might require the user to enter a PIN or use a biometric authentication method to unlock the client certificate. Once unlocked, the client certificate can be used to authenticate the machine and/or user to cloud-based services. Client Certificates for Machine Authentication – some companies want to restrict access to a web server to only company-managed machines. One method of ensuring that the device is company-managed is to push a client certificate (with private key) to the devices, and then require the device's client certificate in addition to the user's normal authentication credentials.
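On the server side, requiring a client certificate is mostly TLS configuration. Below is a minimal sketch using Python's standard ssl module; the file names are hypothetical placeholders for the server's cert/key and the corporate CA bundle that issued the client certificates. On a Citrix ADC the equivalent is binding the CA certificate to the SSL virtual server and setting client authentication to Mandatory, which this sketch only approximates.

```python
import ssl

def make_mtls_server_context(server_cert="server.pem",
                             server_key="server.key",
                             client_ca="corp-issuing-ca.pem"):
    """Build a TLS server context that rejects any client that does not
    present a certificate issued by the corporate CA (machine auth)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # The server's own certificate and private key (hypothetical paths).
    ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    # Require a client certificate during the handshake...
    ctx.verify_mode = ssl.CERT_REQUIRED
    # ...and only trust client certs chaining to this CA.
    ctx.load_verify_locations(cafile=client_ca)
    return ctx
```

With CERT_REQUIRED set, the handshake itself fails for clients without a valid certificate, so unmanaged devices never reach the application layer.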
Citrix Federated Authentication Service (FAS) uses user certificates for VDA authentication – In a SAML authentication scenario (described later), the Citrix VDA does not have access to the user's password. The only other option for authenticating with Windows is a user certificate, also called a smart card certificate. The Citrix FAS server automatically generates user certificates for each user that authenticates to StoreFront. The private keys for these user certificates are stored on the FAS server. When the user connects to the VDA, the VDA retrieves the user's certificate and private key from the FAS server and uses the user certificate to perform client certificate authentication (smart card authentication) with the Active Directory Domain Controllers. Citrix ADC Authentication Overview AAA is short for Authentication, Authorization, and Accounting. AAA is a generic term for performing authentication against remote authentication servers. Authentication Policy Bind Points – Citrix ADC supports several authentication mechanisms at three different bind points: - Citrix Gateway Virtual Server - AAA Virtual Server - AAA Virtual Server is only available in Citrix ADC Advanced Edition (formerly known as Enterprise Edition) and Premium Edition (formerly known as Platinum Edition). ADC Standard Edition does not include AAA Virtual Servers. - Global – for authentication of management access Load Balancing and AAA – Load Balancing Virtual Servers redirect users to a AAA Virtual Server to perform the authentication. After authentication, the user is redirected back to the Load Balancing Virtual Server. Citrix Gateway and AAA – Citrix Gateway can use a AAA Virtual Server for nFactor authentication. Or you can bind classic authentication policies directly to the Gateway Virtual Server. - AAA licensing – AAA Virtual Server is only available in Citrix ADC Advanced Edition (formerly known as Enterprise Edition) and Premium Edition (formerly known as Platinum Edition).
ADC Standard Edition does not include AAA Virtual Servers. SSL/TLS – All ADC HTTP-based authentication methods require the client to be connected to the ADC using SSL/TLS protocol. If authentication collects credentials using an HTML Form, then the form submission is transferred to the ADC over the encrypted SSL/TLS connection. nFactor – nFactor allows a series of authentication web forms and authentication policies to be chained together for almost limitless customization of the authentication process. nFactor can be configured on any AAA Virtual Server, including AAA Virtual Servers used by Citrix Gateway and Load Balancing. - New ADC authentication features are nFactor only – Citrix seems to be adding new authentication features only to the nFactor platform. For example, StoreFrontAuth, Native One-time Password (OTP), and Gateway Self-Service Password Reset (SSPR) require nFactor. These features are not available on pure Citrix Gateway without a AAA Virtual Server. Be mindful of the licensing requirement for AAA. - Endpoint Analysis Scans in nFactor – nFactor also supports Endpoint Analysis and Device Certificate authentication. These were formerly Gateway-only features. With nFactor, they can be used with Load Balancing authentication too. Authentication Policy Syntax – On Citrix ADC, Authentication Policies can be created using Classic Syntax, or Default (Advanced) Syntax. - Classic Syntax authentication policies can be bound to all three bind points. - The expression ns_true is an example of Classic Syntax. - Default Syntax (Advanced Syntax) authentication policies cannot be bound to a Gateway Virtual Server. The only way for a Gateway to use Default Syntax authentication policies is through a AAA Virtual Server and nFactor authentication. - The expression true is an example of Default Syntax. - Classic Syntax deprecation – Citrix has announced that Classic Syntax authentication policies will be removed from future versions of Citrix ADC. 
It is not clear how this affects Citrix Gateway licensing, since ADC Standard Edition does not include AAA or nFactor. NSIP is the Source IP for authentication – By default, Citrix ADC uses its NSIP (management IP) as the Source IP when communicating with authentication servers. - To use SNIP, load balance – You can force authentication to use a SNIP by load balancing the authentication servers, even if there's only one authentication server. In this case, the NSIP connects to a VIP, which uses a SNIP to connect to the authentication server. In a network trace, you'll see both the NSIP and the SNIP. Back-end Single Sign-on – Once a user has been authenticated by Citrix ADC, ADC can usually perform Single Sign-on to the back-end resource (web page, Citrix StoreFront, etc.). - Different authentication methods for client and server – The authentication method that ADC uses to authenticate the user on the client side doesn't have to match the authentication method that ADC uses for back-end Single Sign-on. For example, Citrix ADC can use LDAP+RADIUS on the client side, and convert it to Kerberos on the server side. AAA Groups – some authentication methods support extraction of user group membership from the authentication server. If you run cat /tmp/aaad.debug during a user authentication, it should show you any groups that it extracted for the user. On ADC, you can add matching groups in three different places: System (global), AAA, and Gateway. The groups you add on ADC must have names that exactly match (case-sensitive) the group names that were extracted from the authentication server. - System Groups provide authorization to the ADC management console GUI. You bind Command Policies to the System Groups. Command Policies are collections of Regular Expressions that match permitted ADC CLI commands. For details, see LDAP Authentication for Management.
- AAA Groups (not Gateway) provide authorization to back-end web servers and allow different Single Sign-on configurations for different AAA Groups. - Gateway AAA Groups provide different Gateway Policies for different AAA Groups. You can bind the following to each AAA Group: Gateway Session Policies, VPN Authorization Policies, VPN IP Pools, Traffic Policies for SSON to back-end servers, Intranet Applications for VPN Split Tunnel, etc. For details, see SSL VPN: AAA Groups. Default Authentication Group – Many of the ADC Authentication Policies/Servers have a field called Default Authentication Group. If the user successfully authenticated with this authentication server, then the user is added to the Default Authentication Group. If you create a System Group, AAA Group, or Gateway AAA Group with the same case-sensitive name that you specified in the Default Authentication Group field of the authentication server, then you can bind policies that only apply to users that authenticated with a particular authentication server. Citrix ADC Authentication Methods Summary This section is just a summary of some of the authentication methods available on Citrix ADC. See later sections for detailed explanations of the most commonly used authentication methods. Citrix StoreFront authentication methods are limited – Citrix StoreFront has native support for Domain Authentication, Kerberos (aka Domain Pass-through), and SAML. StoreFront's SAML capabilities are limited. If you need any authentication beyond what StoreFront supports, then you need to offload StoreFront authentication to Citrix ADC and Citrix Gateway. RADIUS is a glaring omission from the list of supported StoreFront authentication protocols. - Single Sign-on (SSON) from Citrix Gateway to StoreFront – after the Citrix Gateway authenticates the user, the ADC can SSON to StoreFront so StoreFront doesn't have to ask for authentication again.
- Citrix Gateway Callback – for password-less SSON from ADC to StoreFront, StoreFront can initiate a back-channel connection from StoreFront to ADC to verify that the ADC actually authenticated the user. This is called the Gateway Callback URL. - SmartAccess – The Gateway Callback URL is also used by StoreFront to get more information about the authentication context, which can later be used in Citrix Virtual Apps and Desktops (CVAD) SmartAccess configurations. LDAP for Active Directory authentication – Citrix ADC supports the LDAP protocol to authenticate with Domain Controllers. - LDAP vs Kerberos – LDAP is just one of the authentication protocols supported by Active Directory. Another common authentication mechanism is Kerberos. Kerberos and LDAP are completely different technologies. - HTML logon page – Users typically enter LDAP credentials in an HTML Form logon page generated by the Citrix ADC. - LDAP Protocol – Citrix ADC then uses the LDAP protocol to transmit the entered credentials to a Domain Controller for verification. Ideally, the LDAP protocol should be encrypted using Domain Controller certificates. RADIUS for multi-factor authentication – Citrix ADC supports the RADIUS protocol to authenticate to multi-factor authentication products. - No Native SecurID support – Citrix ADC does not have native support for RSA SecurID, so RADIUS must be used instead. - RADIUS is supported by almost every multi-factor authentication product – but RADIUS is usually not enabled on the product by default. - On-prem RADIUS servers – Cloud-hosted authentication products, like Duo and Azure MFA, require installation of an on-premises RADIUS server. Duo has a program called the Duo Proxy, which supports RADIUS. Azure MFA has a plug-in for Microsoft Network Policy Server, which is a RADIUS server. - RADIUS credentials can be collected by an HTML logon page – RADIUS credentials are typically a PIN plus a passcode displayed on a smartphone.
Citrix ADC uses the RADIUS protocol to transmit the entered credentials to an authentication server for verification. - RADIUS supports non-password authentication methods, like phone calls and text messaging. For these methods, the user does not enter anything in the RADIUS password field on the HTML logon page. Or, the Citrix ADC administrator can hide the RADIUS password field from the HTML logon page. Even if the field is hidden, Citrix ADC still contacts the RADIUS server to perform the password-less authentication. - RADIUS Challenge – RADIUS servers can send back a prompt (RADIUS Challenge) asking the user to provide more authentication information. Some send back an HTML form asking the user what kind of authentication to perform – phone call, SMS, phone notification, etc. Citrix ADC will happily display the RADIUS Challenge to the user. - HTML-based RADIUS Challenges might not work in Citrix Receiver or Citrix Workspace app. RADIUS is typically combined with LDAP. - A single HTML logon page can ask the user to enter both the AD password and the two-factor passcode at the same time. - Citrix ADC uses LDAP to authenticate the user into Active Directory. - Citrix ADC uses RADIUS to verify the two-factor passcodes. - Citrix Receiver and Citrix Workspace app require special configuration to support RADIUS as a second password field. Specifically, the LDAP password field and the RADIUS password field must be swapped. SAML offloads authentication from the website or ADC to a separate Identity Provider (IdP) – The web site (and ADC) the user is trying to access is called the Service Provider (SP), which redirects the user to an Identity Provider (IdP) to perform the offloaded authentication. The SP and IdP can be different organizations. - Identity Providers usually require multi-factor authentication. - The user’s password stays at the Identity Provider. The Service Provider never sees the user’s password. 
- Trust between the Identity Provider and the Service Provider is provided by certificates. The two entities share certificates with each other to verify each other’s identity using digital signatures. Kerberos is another method of authenticating with Active Directory. - Kerberos authentication and LDAP authentication are completely different authentication methods. - Kerberos tickets – Kerberos authentication uses tickets that are provided by a Domain Controller. The client machine must be able to communicate with a Domain Controller to get the tickets. This means Kerberos usually doesn’t work on the Internet, at least not without a VPN connection. - If Citrix ADC uses Kerberos to Single Sign-on to the back-end web servers, then Citrix ADC needs connectivity to internal DNS and internal Domain Controllers to get the tickets. OAUTH allows a user to delegate service access to a separate program. The program can then access a different Service (HTTP API) as the user but without the user being present. This is a form of delegation. - OAUTH is used extensively in cloud services. If you see Google, Twitter, Facebook, or Azure Active Directory asking you to authorize a program to access your Profile information, then that’s OAUTH. - Azure Active Directory is primarily an OAUTH directory. It also supports SAML. - Azure Active Directory does not support Kerberos or LDAP. - Consent Form – If a program needs to access a website on your behalf, then the program will redirect you to the website’s OAUTH Authorization Server to authorize the delegation by presenting you with a Consent Form. The Consent Form indicates the permissions that the program will have in your account. Once you approve the access, the program can then access your account directly without needing to ask you for permission again. - Access Token – The OAuth Authorization Server returns an Access Token to the program, which can then be used by the program to authenticate to HTTP-based APIs. 
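The authorization step described above is ultimately just an HTTP redirect to the Authorization Server with a handful of query parameters. A minimal sketch using only the Python standard library; the endpoint URL, client ID, and redirect URI are hypothetical values, not from any real provider:

```python
from urllib.parse import urlencode

# Hypothetical Authorization Server endpoint for illustration only.
AUTHORIZE_URL = 'https://login.example.com/oauth2/authorize'

def authorization_request_url(client_id, redirect_uri, scope, state):
    """Build the redirect that sends the user to the Authorization Server,
    where the Consent Form is shown. 'state' is an opaque value the client
    checks on return to protect against cross-site request forgery."""
    params = {
        'response_type': 'code',   # authorization code flow
        'client_id': client_id,
        'redirect_uri': redirect_uri,
        'scope': scope,
        'state': state,
    }
    return AUTHORIZE_URL + '?' + urlencode(params)

url = authorization_request_url('my-app', 'https://app.example.com/cb',
                                'profile email', 'xyz123')
print(url)
```

After the user approves the Consent Form, the Authorization Server redirects back to the `redirect_uri` with a code that the program exchanges for the Access Token.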
The program automatically renews the Access Token periodically by asking the OAUTH Authorization Server to refresh the Access Token. - Revoke Access Token – A user can revoke a program’s Access Token so the program can no longer access the user’s account. - OpenID Connect is an authentication mechanism built on top of OAUTH where the goal is to get an ID Token instead of an Access Token. - The ID Token makes OAUTH operate more like SAML and can be used in place of SAML. - JWT instead of XML – The OpenID ID Token is a signed JSON Web Token (JWT), instead of an XML document that’s used in SAML. - Book on OAuth – For an excellent book on OAuth and OpenID Connect, see OAuth 2 in Action. StoreFrontAuth delegates Active Directory authentication to an internal Citrix StoreFront server. StoreFront servers are usually domain-joined so it’s easier for StoreFront to perform the domain authentication. With StoreFrontAuth, you no longer need to perform LDAP authentication on the Citrix ADC. However, since StoreFront doesn’t support RADIUS, you’ll still need the ADC to perform RADIUS authentication. - StoreFrontAuth requires the ADC appliance to be licensed for nFactor authentication. Citrix ADC Native OTP (one-time passwords) – instead of purchasing a multi-factor authentication product, Citrix ADC has native support for two-factor passcode authentication using smart phone apps like Google Authenticator. - ADC Native OTP requires Citrix ADC Advanced Edition (formerly known as Enterprise Edition) because it relies on nFactor. - ADC Native OTP stores client secrets in an Active Directory user attribute. - ADC Native OTP has security challenges with device registration. For example, access to the device registration web page only requires single-factor authentication. LDAP (Lightweight Directory Access Protocol) LDAP authentication process: - HTML Form to gather credentials – Citrix ADC prompts the user to enter a username and password, typically from an ADC-generated HTML Form.
- Connect to LDAP Server – Citrix ADC connects to the LDAP Server on TCP 389 or TCP 636, depending on whether encryption is enabled. - Citrix ADC logs into LDAP using a Bind account. This LDAP Bind account is an Active Directory service account whose password never expires. The only permission the Bind account needs is the ability to search the LDAP directory. The Domain Users group usually has this permission, so the Bind account does not need Domain Admins rights. - Citrix ADC sends an LDAP Query to the LDAP Server. The LDAP Query asks the LDAP Server to find the username that was entered in step 1. An LDAP Query is like a SQL Query. The LDAP Server finds the user somewhere in the directory tree and returns the user’s full Distinguished Name (DN), which is the full path to the user’s account in the directory. - Login as user’s DN – Citrix ADC reconnects to the LDAP Server but this time logs in as the user’s DN (from step 4), and the user’s password (from step 1). - Extract attributes – After authentication, Citrix ADC can be configured to extract attributes from the user’s LDAP account. A common configuration is to extract the user’s group membership. Another is to get the user’s userPrincipalName so it can be used during Single Sign-on to back-end web servers. Password expiration requires LDAP encryption – If the user’s password has expired, then Citrix ADC can prompt the user to change the password. However, the user’s password can only be changed if the LDAP connection is encrypted, which means certificates must be installed on the LDAP servers (Active Directory Domain Controllers). - Password expiration reminders – Citrix ADC generally does not inform the user how long before the user’s password expires. - Citrix Gateway 12.1 can show a reminder of password expiration in the RfWebUI Portal Theme when connecting to the Citrix Gateway’s built-in portal using Clientless Access. - StoreFront through Citrix Gateway does not show a reminder of the user’s password expiration.
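The LDAP Query in the search step is an RFC 4515 filter string. Anything the user typed into the logon form should be escaped before it is embedded in the filter, otherwise special characters like * change the query's meaning (LDAP filter injection). A stdlib-only sketch; the function names are mine, not Citrix's:

```python
# Build the LDAP search filter used in the "search then bind" step.
# RFC 4515 requires escaping \ ( ) * and NUL inside attribute values.
_ESCAPES = {'\\': r'\5c', '(': r'\28', ')': r'\29', '*': r'\2a', '\0': r'\00'}

def escape_filter_value(value: str) -> str:
    """Escape user input before embedding it in an LDAP filter."""
    return ''.join(_ESCAPES.get(ch, ch) for ch in value)

def user_search_filter(logon_attr: str, username: str) -> str:
    """Filter that finds the user entered in the logon form."""
    return '(&(objectClass=user)({}={}))'.format(
        logon_attr, escape_filter_value(username))

print(user_search_filter('sAMAccountName', 'jdoe'))
# A wildcard in the input is neutralized instead of matching every user:
print(user_search_filter('sAMAccountName', 'j*'))
```

The same escaping applies whether the Server Logon Name Attribute is sAMAccountName or userPrincipalName.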
- If you configure the VPN Home Page to something other than the built-in portal, then the user will not see any ADC-provided password expiration reminders. - Domain Controller certificates – The easy way to install certificates on domain controllers is to build a Microsoft Certification Authority in Enterprise Mode. Once the CA server is online, the Domain Controllers will auto-generate their own domain controller certificates. LDAP Communication Protocols: - Clear text LDAP connects to the LDAP Server on TCP 389. - Two encrypted LDAP protocols – If the LDAP server has a certificate, then you can use one of two different encrypted protocols to connect to the LDAP Server. - Secure LDAP (LDAPS) – LDAPS uses a different port number than LDAP, just like HTTPS uses a different port number than HTTP. LDAPS is typically TCP 636 to the LDAP Server. - LDAPS is also called LDAP over SSL/TLS. - LDAP Start TLS – LDAP Start TLS starts as a clear text connection to the LDAP Server on TCP 389. Then both sides of the connection negotiate encryption parameters, and switch to encrypted communication on TCP 389. - Don’t confuse Start TLS with SSL/TLS. SSL/TLS (LDAPS) requires negotiated encryption at the start of the connection. Start TLS doesn’t start encryption until the clear text connection is established. To force LDAP to be encrypted, it’s better to use LDAPS instead of Start TLS. The LDAP Server configuration on Citrix ADC has some interesting fields: - Server Logon Name Attribute is the name of the LDAP Attribute that contains the user name that was entered by the user in the HTML Logon Form. This attribute is typically set to sAMAccountName, which is the short user name. For multi-domain scenarios, you can change it to userPrincipalName, which has values that look like email addresses. If it’s set to userPrincipalName, then users need to enter their userPrincipalName in the ADC HTML Logon Page.
Karim Buzdar at samAccountName Vs userPrincipalName explains the difference between the two. - SSO Name Attribute is the name of the LDAP attribute that Citrix ADC extracts from LDAP and then uses as the username when performing Single Sign-on (SSON) to back-end web servers (e.g. Citrix StoreFront). For multiple Active Directory domains and SSON to Citrix StoreFront, this field can be set to userPrincipalName to simplify how domain names are transmitted to StoreFront. For more details, see LDAP Authentication: Multiple Domains – UPN Method. - Search Filter controls the LDAP Query that is sent during LDAP authentication. A common usage of the LDAP Search Filter is to only return users that are members of a specific AD Group. For example, you can limit management access to ADC to only members of an ADC Administrator group. See LDAP Authentication for Management. Multiple Active Directory domains – Citrix ADC is based on UNIX, which means it does not understand Active Directory domains. Configuring ADC to handle multiple domains isn’t too difficult, but it’s more challenging to send the domain name when performing Single Sign-on to back-end web servers like Citrix StoreFront. For more details, see LDAP Authentication: Multiple Domains. RADIUS (Remote Authentication Dial-In User Service) RADIUS Client – RADIUS will not work unless the RADIUS administrator adds the Citrix ADC NSIP (or SNIP, if load balancing) as a RADIUS Client. - Secret key – The RADIUS administrator then gives you the secret key that was configured for the RADIUS Client. You enter this secret key in Citrix ADC when configuring RADIUS authentication. RADIUS authentication process: - HTML Form to gather credentials – Citrix ADC prompts the user to enter username and passcode, typically from an ADC-generated HTML Form.
- Send login request to RADIUS server – Citrix ADC sends a login request (Access-Request) to the RADIUS Server on UDP 1812. Since it’s UDP, there’s no acknowledgment from the Server. - Passcode encryption using shared secret – The user’s passcode in the RADIUS packet is encrypted using the RADIUS Client’s shared secret key that is configured on the Citrix ADC and RADIUS Server. The secret key entered on Citrix ADC must match the secret key configured on the RADIUS Server. Each RADIUS Client usually has a different secret key. - RADIUS Attributes – The RADIUS Client (Citrix ADC) adds RADIUS attributes to the packet to help the RADIUS Server identify how the user is connecting. These attributes include: time of day, client IP, etc. - RADIUS Server: - RADIUS Clients configured on RADIUS Server – The RADIUS Server first verifies that the RADIUS Client (Citrix ADC) is authorized to perform authentication. The NAS IP (Citrix ADC Source IP) of the RADIUS Access-Request packet is compared to the list of RADIUS Clients configured on the RADIUS Server. If there’s no match, RADIUS does not respond. - Shared secret – The RADIUS server finds the RADIUS Client and looks up the shared secret key. The secret key decrypts the passcode in the RADIUS Access-Request packet. - Verify RADIUS Attributes – RADIUS Server uses the RADIUS Attributes in the Access-Request packet to determine if authentication should be allowed or not. - Authenticate the user – RADIUS authenticates the user. Most RADIUS Server products have a local database of usernames and passwords. Some can authenticate with other authentication providers, like Active Directory. - Access-Accept and Attributes – RADIUS sends back an Access-Accept message. This response message can include RADIUS Attributes, like a user’s group membership. - RADIUS Challenge – RADIUS Servers can also send back an Access-Challenge, which asks the user for more information. 
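The passcode "encryption" described above is the User-Password hiding scheme from RFC 2865 section 5.2: the password is padded to 16-byte blocks and XORed with an MD5 keystream derived from the shared secret and the 16-byte Request Authenticator. A sketch of the scheme (not Citrix code):

```python
import hashlib

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 section 5.2: hide the User-Password attribute using the
    shared secret and the Request Authenticator from the packet header."""
    padded = password + b'\x00' * (-len(password) % 16)  # pad to 16-byte blocks
    out, prev = b'', authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        prev = _xor(padded[i:i + 16], keystream)
        out += prev
    return out

def recover_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """What the RADIUS Server does: reverse the hiding, strip the padding."""
    prev, out = authenticator, b''
    for i in range(0, len(hidden), 16):
        keystream = hashlib.md5(secret + prev).digest()
        out += _xor(hidden[i:i + 16], keystream)
        prev = hidden[i:i + 16]
    return out.rstrip(b'\x00')

secret, authenticator = b'mysecret', bytes(16)  # example values
hidden = hide_password(b'482915', secret, authenticator)
assert recover_password(hidden, secret, authenticator) == b'482915'
```

This is why the secret key entered on Citrix ADC must match the one on the RADIUS Server exactly: a mismatched secret produces a different keystream and the server decrypts garbage.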
Citrix ADC displays the RADIUS-provided Challenge message to the user, and sends back to the RADIUS Server whatever the user entered. - SMS authentication uses RADIUS Challenge. The RADIUS server might send a passcode to the user’s phone via SMS. Then a RADIUS Challenge prompts the user to enter the SMS passcode. - Extract RADIUS Attributes – Citrix ADC can be configured to extract the returned RADIUS Attributes and use them for authorization (e.g. AAA Groups). SAML (Security Assertion Markup Language) SAML uses HTTP Redirects to perform its authentication process. This means that HTTP Clients that don’t support Redirects (e.g. Citrix Receiver) won’t work with SAML. SAML SP – The resource (webpage) the user is trying to access is called the SAML SP (Service Provider). No passwords are stored or accessible here. SAML IdP – The authentication provider is called the SAML IdP (Identity Provider). This is where the usernames and passwords are stored and verified. SAML SP Authentication Process: - User tries to access a Citrix ADC VIP (Citrix Gateway or AAA) that is configured for SAML SP Authentication. - Citrix ADC creates a SAML Authentication Request and signs it using a certificate (with private key) on the ADC. - Citrix ADC sends to the user’s browser the SAML Authentication Request, and an HTTP Redirect (301) that tells the user’s browser to go to the SAML IdP’s authentication Sign-on URL (SSO URL). - The user’s browser redirects to the IdP’s Sign-on URL and gives it the SAML Authentication Request that was provided by the Citrix ADC. - The SAML IdP verifies that the SAML Authentication Request was signed by the Citrix ADC’s certificate. - The SAML IdP authenticates the user. This can be a web page that asks for multi-factor authentication. Or it can be Kerberos Single Sign-on. It can be pretty much anything. - The SAML IdP creates a SAML Assertion containing SAML Claims (Attributes).
At least one of the attributes is Name ID, which usually matches the user’s email address. The SAML IdP can be configured to send additional attributes (e.g. group membership). - The SAML IdP signs the SAML Assertion using its IdP certificate (with private key). - The SAML IdP sends the SAML Assertion to the user’s browser and asks the user’s browser to Redirect (301) back to the SAML SP’s Assertion Consumer Service (ACS) URL, which is different from the original URL that the user requested in step 1. - The user’s browser redirects to the ACS URL and submits the SAML Assertion. - The SAML SP verifies that the SAML Assertion was signed by the SAML IdP’s certificate. - The SAML SP extracts the Name ID (email address) from the SAML Assertion. Note that the SAML SP does not have the user’s password; it only has the user’s email address. - The SAML SP sends back to the user’s browser a cookie that indicates that the user has now been authenticated. - The SAML SP sends back to the user’s browser a 301 Redirect, which redirects the browser to the original web page that the user was trying to access in step 1. - The user’s browser submits the cookie to the website. The website uses the cookie to recognize that the user has already been authenticated, and lets the user in. Multiple SAML SPs to one SAML IdP – The SAML IdP could support authentication requests from many different SAML SPs, so the SAML IdP needs some method of determining which SAML SP sent the SAML Authentication Request. One method is to have a unique Sign On URL for each SAML SP. Another method is to require the SAML SP to include an Issuer Name in the SAML Authentication Request. In either case, the SAML IdP looks up the SAML SP’s information to find the SAML SP’s certificate (without private key), and other SP-specific information. - SP Certificate – Citrix ADC uses a certificate to sign the SAML Authentication Request. This Citrix ADC certificate must be copied to the IdP.
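In the common HTTP-Redirect binding, the SAML Authentication Request travels inside the redirect URL itself: the XML is raw-DEFLATE compressed, base64 encoded, and percent encoded into a SAMLRequest query parameter (the signing step is omitted here). A roundtrip sketch with the Python standard library; the XML snippet is illustrative, not a complete AuthnRequest:

```python
import base64
import urllib.parse
import zlib

def encode_redirect(authn_request_xml: str) -> str:
    """SAML HTTP-Redirect binding encoding: raw DEFLATE, then base64,
    then percent-encode into the SAMLRequest query parameter."""
    deflater = zlib.compressobj(wbits=-15)  # negative wbits = raw DEFLATE, no zlib header
    raw = deflater.compress(authn_request_xml.encode()) + deflater.flush()
    return 'SAMLRequest=' + urllib.parse.quote(base64.b64encode(raw).decode())

def decode_redirect(query_value: str) -> str:
    """What the IdP does on arrival: percent-decode, base64-decode, inflate."""
    raw = base64.b64decode(urllib.parse.unquote(query_value))
    return zlib.decompress(raw, wbits=-15).decode()

xml = '<samlp:AuthnRequest ID="_abc123"/>'
param = encode_redirect(xml)
assert decode_redirect(param.split('=', 1)[1]) == xml
```

This is why browser support for redirects matters: the whole request rides in the Location URL of the redirect response.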
- IdP Certificate – The IdP uses a certificate to sign the SAML Assertion. The IdP certificate must be installed on the Citrix ADC. SAML Configuration is usually performed first on the IdP – at the IdP, you add a Relying Party or Service Provider. - Provide the following information for the Service Provider (ADC) - Service Provider Assertion Consumer Service (ACS) URL – for Citrix Gateway, the URL ends in /cgi/samlauth - Service Provider Entity ID – to identify the service provider – configure it identically on both the IdP and on the ADC - IdP attribute (e.g. email address) that you want to send back in the Name ID claim - Download the IdP’s certificate and import it to the ADC - Copy the IdP’s Sign On URL and configure it on the ADC Shadow Accounts – The SAML IdP sends the user’s email address to the SAML SP. The SAML SP has its own directory it uses to authorize users to SP resources. The email address provided by the SAML IdP is matched with a user account at the SAML SP’s directory. Even though the user’s password is never seen by the SAML SP, you still need to create user accounts for each user at the SAML SP. These SAML SP user accounts can have fake passwords. The SAML SP user accounts are sometimes called shadow accounts. The shadow accounts are assigned permissions to SP resources, like Citrix Virtual Apps and Desktops (CVAD) published applications. - Shadow Account userPrincipalNames – If the SAML SP relies on Active Directory for permissions, then the local AD shadow accounts must have a userPrincipalName that matches the email address claim that came from the SAML IdP. You might have to add custom DNS suffixes to your Active Directory forest to make the UPNs match. - Citrix Virtual Apps and Desktops (CVAD) is an example Service Provider that relies on Active Directory. SAML and lack of user password – Not having access to passwords limits the back-end Single Sign-on authentication options at the SAML SP.
Without a password, you can’t authenticate to Active Directory using LDAP or NTLM. - Kerberos Constrained Delegation (KCD) and Citrix VDA – Citrix ADC supports Kerberos Constrained Delegation (KCD), which means ADC can request Kerberos tickets from a Domain Controller on behalf of another user. KCD does not need access to the user’s password. However, Citrix Virtual Apps and Desktops (CVAD) does not support Kerberos tickets for VDA authentication. - Smart Card Logon to Citrix VDA – Instead of Kerberos tickets, Citrix VDA supports authentication using user certificates (smart card certificates) generated by Citrix Federated Authentication Service (FAS). FAS generates certificates for local Active Directory shadow accounts whose UPN matches the email addresses provided by the SAML IdP. Kerberos authentication process (simplified): - User tries to access a web page that is configured with Negotiate authentication. - To authenticate to the web page, the user must provide a Kerberos Service ticket. The Kerberos Service ticket is requested from a Domain Controller. - The Kerberos Service ticket is limited to the specific Service that the user is trying to access. In Kerberos parlance, the resource the user is trying to access is called the Service Principal Name (SPN). User asks a Domain Controller to give it a ticket for the SPN. - Web Site SPNs are usually named something like HTTP/www.company.com. It looks like a URL, but actually it’s not. There’s only one slash, and there’s no colon. The text before the slash is the service type. The text after the slash is the DNS name of the server running the service that the user is trying to access. - If the user has not already been authenticated with a Domain Controller, then the Domain Controller will prompt the user for username and password. - The Domain Controller returns a Ticket Granting Ticket (TGT). - The user presents the TGT to a Domain Controller and asks for a Service Ticket for the Target SPN. 
The Domain Controller returns a Service Ticket. - The user presents the Service Ticket to the web page the user originally tried to access in step 1. The web page verifies the ticket and lets the user in. The Service and the Domain Controller do not communicate directly with each other. Instead, the Kerberos Client talks to both of them to get and exchange Tickets. Kerberos Delegation – The Kerberos Service Ticket only works with the Service listed in the Ticket. If that Service needs to talk to another Service on the user’s behalf, this is called Delegation. By default, Kerberos will not allow this Delegation. You can selectively enable Delegation by configuring Kerberos Constrained Delegation in Active Directory. - In Active Directory Users & Computers, edit the AD Computer Account for the First Service. On the Delegation tab, specify the Second Service. This allows the first Service to delegate user credentials to the Second Service. Delegation will not be allowed from the First Service to any other Service. Kerberos Impersonation – If Citrix ADC has the user’s password (maybe from LDAP authentication), then Citrix ADC can simply use those credentials to request a Kerberos Service Ticket for the user from a Domain Controller. This is called Kerberos Impersonation. Kerberos Constrained Delegation – If the Citrix ADC does not have the user’s password, then Citrix ADC uses its own AD service account to request a Kerberos Service Ticket for the back-end service. The service account then delegates the user’s account to the back-end service. In other words, this is Kerberos Constrained Delegation. On Citrix ADC, the service account is called a KCD Account. - The KCD Account is just a regular user account in AD. - Use setspn.exe to assign a Kerberos SPN to the user account. This action unlocks the Delegation tab in Active Directory Users & Computers. - Then use Active Directory Users & Computers to authorize Kerberos Delegation to back-end Services.
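The SPN format described earlier (service class, a single slash, then the host name) is easy to take apart; some services, such as SQL Server, also append a port after a colon. A small illustrative parser:

```python
def parse_spn(spn: str):
    """Split a Service Principal Name like 'HTTP/www.company.com' into
    (service class, host) or (service class, host, port). It looks like
    a URL, but it is not: one slash, and usually no colon for web SPNs."""
    service, _, instance = spn.partition('/')
    host, _, port = instance.partition(':')
    return (service, host, int(port)) if port else (service, host)

print(parse_spn('HTTP/www.company.com'))        # web server SPN
print(parse_spn('MSSQLSvc/db01.corp.com:1433')) # SQL Server SPN with port
```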
Negotiate – Kerberos and NTLM – Web Servers are configured with an authentication protocol called Negotiate (SPNEGO). This means Web Servers will prefer that users log in using Kerberos Tickets. If the client machine is not able to provide a Kerberos ticket (usually because the client machine can’t communicate with a Domain Controller), then the Web Server will instead try to do NTLM authentication. - NTLM is a challenge-based authentication method. NTLM sends a challenge to the client, and the client uses the user’s Active Directory password to encrypt the challenge. The web server then verifies the encrypted challenge with a Domain Controller. - Negotiate on client-side with NTLM Web Server fallback – Citrix ADC appliances can use the Negotiate authentication protocol on the client side (AAA or Citrix Gateway). Negotiate will prefer Kerberos tickets. If Kerberos tickets are not available, then Negotiate can use NTLM as a fallback mechanism. In the NTLM scenario, Citrix ADC can be configured to connect to a domain-joined web server for the NTLM challenge process. By using a separate web server for the NTLM challenge, there’s no need to join the Citrix ADC to the domain. HTTP URL format: e.g. https://www.corp.com:444/path/page.html?a=1&key=value - https:// = the scheme. Essentially, it’s the protocol the browser will use to access the web server. Either http (clear-text) or https (SSL/TLS). - www.corp.com = the hostname. It’s the DNS name that the browser will resolve to an IP address. The browser then connects to the IP address using the specified protocol. - :444 = port number. If not specified, then it defaults to port 80 or port 443, depending on the scheme. Specifying the port number lets you connect to a non-standard port number, but firewalls might not allow it. - /path/page.html = the path to the file that the Browser is requesting. - ?a=1&key=value = query parameters to the file. The query clause begins with a ? immediately following the file name.
Multiple parameters (key=value pairs) are separated by &. Query parameters are a method for the HTTP Client to upload a small amount of data to the Web Server. There can be many query parameters, or just one parameter. - An alternative to query parameters is to put the uploaded data in the body of an HTTP POST request. URLs must be safe encoded (Percent encoded), meaning special characters are replaced by numeric codes (e.g. # is replaced by %23). See https://en.m.wikipedia.org/wiki/Percent-encoding. HTTP Methods – HTTP supports several HTTP methods, with the most common being GET, POST, PUT, and DELETE. There are also less common HTTP methods like OPTIONS and CONNECT. Web Browsers typically only use GET and POST in their HTTP Requests. The other HTTP Methods are used by REST APIs. - GET method retrieves (downloads) a file. - Parameters to the GET request are added to the end of the URL in the Query section. - HTTP Request Headers also affect how the file is retrieved. - POST method uploads data to a web server (web application). The POST method includes a path to a web server script file that will process the upload. The HTTP Body of a POST Method usually contains one of the following: - HTML Form field data – user fills out a form and clicks Submit. This causes the browser to send a POST Request with the Body containing each of the HTML Form field names and the values entered by the user. The format of the POST Body is similar to field1name=value&field2name=value. Web Servers extract the field names and their values and save them as variables that the web server scripts can access. - JSON or XML file upload – these JSON or XML files are generated by scripts or programs. The web server receives the JSON or XML files and does something programmatic with them. - Raw file upload – typically a full file that needs to be saved somewhere on the web server.
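Python's urllib.parse can take apart the example URL above and also demonstrates percent encoding of special characters:

```python
from urllib.parse import parse_qs, quote, urlsplit

# Dissect the example URL into the pieces described above.
parts = urlsplit('https://www.corp.com:444/path/page.html?a=1&key=value')
assert parts.scheme == 'https'            # the protocol (scheme)
assert parts.hostname == 'www.corp.com'   # DNS name the browser resolves
assert parts.port == 444                  # non-standard port number
assert parts.path == '/path/page.html'    # path to the requested file
assert parse_qs(parts.query) == {'a': ['1'], 'key': ['value']}  # query parameters

# Percent encoding: special characters become numeric codes.
assert quote('#') == '%23'
```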
- HTTP-based REST APIs use all four of the common HTTP Methods for the following purposes: - GET method retrieves an object in JSON or XML format. The arguments for the retrieval of the object (or objects) are typically specified in the URL path and query string, since bodies on GET requests are discouraged. - POST method creates a new object. The POST method path specifies the name of the object that needs to be created. The arguments for the new object are specified in a JSON or XML document included in the body of the HTTP Request. - PUT method modifies an existing object. The PUT method path specifies which object needs to be modified. The arguments for the modified object are specified in a JSON or XML document included in the body of the HTTP Request. - DELETE method deletes an existing object. The DELETE method path specifies which object needs to be deleted. The arguments for the object deletion operation are specified in a JSON or XML document included in the body of the HTTP Request. HOST Header – web browsers insert an HTTP header named Host into the HTTP Request Packet. The value of this Host header is whatever hostname the user typed into the browser’s address bar. It’s the part of the URL after the scheme (http://), and before the port number (:81) or path (/). - Web Servers use the Host Header to serve multiple websites on one IP address – Each hosted website has a different Host Header value configured. The web server matches the Host header in the HTTP Request packet with the website’s configured Host Header value and serves content from the matching website. - In IIS, if you edit the port number bindings for a website, there’s a field to enter a host name. If you enter a host name value in this field, then the website can only be accessed if the same value is in the HTTP Request’s Host header. If you try to connect to the website by entering its IP address into your browser’s address bar, then you won’t see the website because the Host header won’t match.
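Host-header virtual hosting boils down to a dictionary lookup: the server matches the Host request header against each configured site. A toy dispatch function; the site names and content are invented for illustration:

```python
# One IP address, multiple web sites: dispatch on the Host request header.
SITES = {
    'www.corp.com':   'corp home page',
    'store.corp.com': 'store front',
}

def select_site(request_headers: dict) -> str:
    """Pick the site whose configured host name matches the Host header.
    The optional :port suffix is ignored, and matching is case-insensitive."""
    host = request_headers.get('Host', '').split(':')[0].lower()
    # No match (e.g. the user browsed to the bare IP address): no site is served.
    return SITES.get(host, '404 - no site bound to this Host')

print(select_site({'Host': 'store.corp.com'}))
print(select_site({'Host': '203.0.113.10'}))  # bare IP: Host does not match
```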
- Citrix ADC Load Balancing Monitors do not include the Host Header by default. If the Web Server requires the Host Header, then you must modify the Citrix ADC Monitor configuration to specify the Host header. HTTP Body vs HTML Body – HTTP Body and HTML Body are completely different. - An HTML file contains an HTML Head section and an HTML Body section. But these are just sections in one HTML file. - An HTTP Response Body contains an entire HTML file. Or an HTTP Body can contain non-HTML files or data. HTML is just one of the file types that an HTTP Body can transport. For a detailed explanation of the HTTP Protocol including the various HTTP headers, see the book named HTTP: The Definitive Guide. This book was published years ago but is still relevant today. WebSocket – WebSocket enables a client machine and a server machine to use HTTP to establish a long-lived, two-way (bidirectional) TCP communications channel. WebSocket is initiated from the client machine, so you don’t have to open any firewall ports. The client machine sends an HTTP Request to a WebSocket server asking the WebSocket server to upgrade the HTTP Connection to a WebSocket connection. After the connection upgrade, HTTP is no longer needed, and the two machines can transmit anything across the WebSocket connection. The bidirectional aspect of WebSocket allows a server to send packets to the client machine without having to wait for a request to come from the client. One of the main purposes of WebSocket is to bypass inbound firewall restrictions. - HTTP and WebSocket – Not all HTTP servers support WebSocket. WebSocket is not HTTP. HTTP is only used to setup the WebSocket connection. Once WebSocket is established, then there’s no more HTTP. WebSocket runs on the same port numbers as HTTP, usually SSL/TLS port 443. - WebSocket and Citrix Cloud Connector – Citrix Cloud Connector establishes a WebSocket connection with Azure Service Bus. 
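The HTTP-to-WebSocket upgrade mentioned above is sealed by a fixed computation from RFC 6455: the server concatenates the client's Sec-WebSocket-Key header with a well-known GUID, SHA-1 hashes the result, and base64 encodes the digest into the Sec-WebSocket-Accept response header. A sketch, checked against the worked example in the RFC:

```python
import base64
import hashlib

GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'  # fixed by RFC 6455

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server returns in the
    HTTP 101 response that upgrades the connection to WebSocket."""
    digest = hashlib.sha1((sec_websocket_key + GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Test vector from RFC 6455:
assert websocket_accept('dGhlIHNhbXBsZSBub25jZQ==') == 's3pPLMBiTxaQ9kYGzzhZRbK+xOo='
```

After this handshake, HTTP is out of the picture and both sides exchange WebSocket frames over the same TCP connection.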
Citrix Cloud then uses the Azure Service Bus connection to send commands to the on-premises Citrix Cloud Connector machines. Client-side Data Storage – Web Servers sometimes need to store small pieces of data in a user’s web browser. The user’s browser is then required to send the data back to the web server with every HTTP Request. Cookies facilitate this small data storage. Set-Cookie – Web Servers add a Set-Cookie header to the HTTP Response. This Response Header contains a list of Cookie Names and Cookie Values. Cookies are linked to domains – The Web Browser stores the Cookies in a place that is associated with the DNS name (host name) that was used to access the web site. The next time the user submits an HTTP Request to that DNS name, all Cookies associated with that host name are sent in the HTTP Request using the Cookie HTTP Request header. - Notice that the two headers have different names. HTTP Response has a Set-Cookie header, while HTTP Request has a Cookie header. Cookie security – Cookies from other domains (other DNS names, other web servers) are not sent. Cookies usually contain sensitive data (e.g. session IDs) and must not be sent to the wrong web server. Hackers will try to steal Cookies so they can impersonate the user. Cookie lifetimes are either Session Cookies, or Persistent Cookies. Session Cookies are stored in the browser’s memory and are deleted when the browser is closed. Persistent Cookies are stored on disk and available the next time the browser is launched. - Expiration date/time – Persistent Cookies sent from the Web Server come with an expiration date/time. This can be an absolute time, or a relative time. Citrix ADC Cookie for Load Balancing persistence – Citrix ADC can use a Cookie to maintain load balancing persistence. The name of the Cookie is configurable. The Cookie lifetime can be Session or Persistent. 
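Python's http.cookies module shows both halves of the exchange described above: the server's Set-Cookie response header and the browser's Cookie request header. The cookie name below is invented for illustration, not an actual ADC cookie name:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie response header for a persistence cookie.
resp = SimpleCookie()
resp['NSC_persist'] = 'web-server-1'
resp['NSC_persist']['path'] = '/'
resp['NSC_persist']['max-age'] = 3600  # persistent cookie; omit for a session cookie
print(resp.output())  # prints the full "Set-Cookie: ..." header line

# Client side: parse the Cookie request header the browser sends back.
req = SimpleCookie('NSC_persist=web-server-1; other=x')
assert req['NSC_persist'].value == 'web-server-1'
```

Note the asymmetry called out above: the server emits Set-Cookie, but the browser replies with a differently named Cookie header.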
The Cookie’s value specifies the name of the Load Balancing Service (web server) that the user’s browser last accessed so the same Service can be used in the next connection. Web Server Sessions Web Server Sessions preserve user data for a period of time – When users log into a web site, or if the data entered by a user (e.g. shopping cart) needs to be preserved for a period of time, then a Web Server Session needs to be created for each user. Web Server Sessions are longer than TCP Connections – Web Server Sessions live much longer than a single TCP Connection, so TCP Connections cannot delineate a session boundary. Each HTTP Request is singular – There’s nothing built into HTTP to link one HTTP Request to another HTTP Request. Various fields in the HTTP Request can be used to simulate a Web Server Session, but technically, each HTTP Request is completely separate from other HTTP Requests, even if they are from the same user/client. Server-side Session data, and Client-side Session ID – Web Server Sessions have two components – server-side session data, and some kind of client-side indicator so the web server can link multiple HTTP Requests to the same server-side session. A Cookie stores the Session ID – On the client-side, a session identifier is usually stored in a Cookie. Every HTTP Request performed by the client includes the Cookie, so the web server can easily associate all of these HTTP Requests with a Server-side session. Server-side data storage – On the server-side, session data can be stored in several places: - Memory of one web server – this method is the easiest, but requires load balancing persistence - Multiple web servers accessing a shared memory cache (e.g. Redis) - Shared Database – each load balanced web server can pull session data from the database. This is typically slower than memory caches. Load Balancing Persistence and Web Server Sessions – some web servers store session data on the same web server the user initially connected to. 
If the user connects to a different web server, then the old session data can’t be retrieved, thus causing a new session. When load balancing multiple identical web servers, to ensure the user always connects to the same web server that was initially chosen by the user, configure Persistence on the load balancer. Persistence Methods – When the user first connects to a Load Balancing VIP, Citrix ADC uses its load balancing algorithm to select a web server. Citrix ADC then needs to store the chosen server’s identifier somewhere. Here are common storage methods: - Cookie – the chosen server’s identifier is saved on the client in a Cookie. The client includes the ADC persistence Cookie in the next HTTP request, which lets Citrix ADC send the next HTTP Request to the same web server as before. Pros/Cons: - No memory consumption on Citrix ADC - Cookie can expire when the user’s browser is closed - Each client gets a different Cookie, even if multiple clients are behind a common proxy. - However, not all web clients support Cookies. - Source IP – Citrix ADC records the client’s Source IP into its memory along with the web server it chose using its load balancing algorithm. Pros/Cons: - Uses Citrix ADC Memory - If multiple clients are behind a proxy (or common outgoing NAT), then all of these clients go to the same web server. That’s because all of them appear to be coming from one IP address. The same is true for all clients connecting through one Citrix Gateway. - Works with all web clients. - Rule-based persistence – use a Citrix ADC Policy Expression to extract a portion of the HTTP Request and use that for persistence. Ultimately, it works the same as Source IP, but it helps for proxy scenarios if the proxy includes the Real Client IP in one of the HTTP Request Headers (e.g. X-Forwarded-For). - Server Identifier – the HTTP Response from a web server instructs the web client to append a Server ID to every URL request. 
The Citrix ADC can match the Server ID in the URL with the web server. Citrix Endpoint Management (XenMobile) uses this method.
Authentication and Cookie Security – If a web site requires authentication, it would be annoying if the user had to log in again with every HTTP Request. Instead, most authenticated web sites return a Cookie, and that Cookie is used to authorize subsequent HTTP Requests from the same user/client/browser.
- Web App Firewall (WAF) Cookie Protection – Since the Web Session Cookie essentially grants permission, security of this Cookie is paramount. Citrix ADC Web App Firewall has several protections for Cookies.
Get Data from user – if the web site developer wants to collect any data from a user (e.g. Search term, account login, shopping cart item quantity, etc.), then the web developer creates HTML code that contains a <form> element.
Form fields – Inside the <form> element are one or more form fields that let users enter data (e.g. Name, Quantity), or let users select an option (drop-down box).
Submit button – The last field is usually a Submit button.
Field names – Each of the fields in the form has a unique name.
GET and POST – The data is then submitted to the web server using one of two methods: GET or POST.
- With GET, each of the field names and field values is put in the Query String portion of the URL (e.g. ?field1=value1&field2=value2), which is after the path and file name.
- With POST, the HTTP Request Method is set to POST, and the field names and field values are placed in the Body of the HTTP Request.
- The POST method is typically more secure. Web Servers can log the entire GET Method, including query strings, but POST parameters in the body are not normally logged.
- Citrix ADC Web App Firewall (WAF) can inspect submitted form data before it reaches the web server.
- WAF can also validate the form fields. For example, Citrix ADC WAF can ensure that only numeric characters can be entered in a zip code field.
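As a sketch of the two submission methods, Python's standard library can show how the same form fields travel in a GET query string versus a POST body (the /search endpoint and field names are made up for illustration):

```python
from urllib.parse import urlencode

# Hypothetical form fields a user might submit.
fields = {"query": "widgets", "quantity": "3"}

# GET: field names and values travel in the URL query string,
# which web servers commonly write to their access logs.
get_request_line = "GET /search?" + urlencode(fields) + " HTTP/1.1"

# POST: the same data travels in the Body of the HTTP Request,
# so it does not appear in a standard access log entry.
body = urlencode(fields)
post_request = (
    "POST /search HTTP/1.1\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n" + body
)

print(get_request_line)  # GET /search?query=widgets&quantity=3 HTTP/1.1
print(post_request)
```

Either way the web server receives the same field names and values; only where they travel in the HTTP Request differs.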
Web App Firewalls for HTML Forms – HTML Forms are the most sensitive feature in any web application since a hacker can use a form to upload malicious content to the web server. Web Developers must write their web server code in a secure manner. Use features like Citrix ADC Web App Firewall to provide additional protection. - WAF for JSON, XML – Other forms of submitting data to web servers, like JSON objects, XML documents, etc. should also be inspected. Citrix ADC Web App Firewall can do this too. JSON vs XML - JSON is smaller than XML. XML is marked up with human-readable tags, bloating the size. JSON contains data, curly braces, colons, quotes, and square brackets. That’s it. Very little of it is dedicated to markup so most of it is pure data. Commercial systems have a programmatically-accessible API (Application Programming Interface) that allows programs and scripters to control a remote system. Some API commands retrieve information from the system. Other API commands invoke actions (e.g. create object) in the system. Use HTTP to call API functions – Modern APIs can be activated using the HTTP Protocol. Create a specially-crafted HTTP Request and send it to an HTTP endpoint that is listening for API requests. SOAP Protocol – Older HTTP-based APIs operate by exchanging XML documents. This is called SOAP (Simple Object Access Protocol). However, XML documents are difficult to program, and the XML tags consume bandwidth. REST API – Another newer HTTP-based architecture is to use all of the HTTP Methods (GET, POST, PUT, DELETE), and exchange JSON documents. JSON is leaner than XML. REST is stateless. All information needed to invoke the API must be included in one HTTP Request. REST API is HTTP. - A REST-capable client is any client that can send HTTP Requests and process HTTP Responses. Some languages/clients have REST-specific functions. Others have only lower level functions for creating raw HTTP Requests. 
- On Linux, use curl to send HTTP Requests to an HTTP-based REST API.
- In PowerShell, use Invoke-RestMethod to send an HTTP Request to an HTTP-based REST API.
- Inside a browser, use Postman or other REST plug-in to craft HTTP Requests and send them to an HTTP-based REST API.
To invoke an HTTP REST-based API:
- HTTP Request to login – Send an HTTP Request with user credentials to the login URL or session creation URL detailed in the API documentation.
- Session Cookie – The REST API server sends back a Session Cookie that can be used for authorization of subsequent REST/HTTP Requests. The REST Client saves the cookie and adds it to every subsequent REST/HTTP Request.
- Create a REST API HTTP Request:
- Read API Documentation – Use the API’s documentation to find the URLs and HTTP Methods and JSON Arguments to invoke the API.
- Content-Type – Some REST API Requests require a specific Content-Type to be specified in the HTTP Request Header. Add it to the HTTP Request that you’re creating.
- JSON Object in Request – Most REST API Requests require a JSON object to be submitted in the HTTP Body. Use the language’s functions to craft a JSON object that contains the parameters that need to be sent to the API Call.
- URL Query String – Some REST API Requests require parameters to be specified in the query string of the URL.
- Send HTTP Request – Send the full HTTP Request with HTTP Method, URL, URL Parameters, Content-Type Header, Cookie Header, and JSON Body. Send it to the HTTP REST server endpoint.
- Process Response, including JSON – The REST API sends back an HTTP 200 success message with a JSON document. Or it sends back an error message, typically with error details in an attached JSON document.
Citrix Gateway VPN Networking
SNIP vs IP Pool (Intranet IPs) – By default, when a Citrix Gateway VPN Tunnel is established, a Citrix ADC SNIP is used as a shared Source IP for all traffic that leaves the Citrix ADC to the internal Servers.
The internal Servers then simply reply to the SNIP. Instead of all VPN Clients sharing a single SNIP, you can configure IP Pools (aka Intranet IPs), where each VPN Client gets its own IP address.
Use IP Pools if Servers initiate communication to clients – if servers initiate communication to VPN Clients (IP Phones, SCCM, etc.), then each VPN Client needs its own IP address. This won’t work if all VPN Clients are sharing a single SNIP.
Intranet IPs assignment – Intranet IPs can be assigned to the Gateway vServer, which applies to all users connected to that Gateway vServer. Or you can apply a pool of IPs to an AAA Group, which allows you to assign different IP Pools (IP subnets) to different AAA Groups.
IP Pools and Network Firewall – If different IP Pools are assigned to different AAA Groups, then a network firewall can control which destinations can be reached from each of those IP Pools.
Intranet IP Subnet can be brand new – the IP subnets chosen for VPN Clients can be brand new IP Subnets that the Citrix ADC is not connected to. Citrix ADC is a router, so there’s no requirement that the IP addresses assigned to the VPN Clients be on one of the Citrix ADC’s data (VIP/SNIP) interfaces.
Reply traffic from Servers to Intranet IPs – If the Intranet IP Pool is a new IP Subnet, then on the internal network (core router), create a static route with the IP Pool as destination, and a Citrix ADC SNIP as Next Hop. Any SNIP on the Citrix ADC can reach any VPN Client IP address.
IP Spillover – if there are no more Intranet IPs available in the pool, then a VPN Client can be configured to do one of the following: use the SNIP, or transfer an existing session’s IP. This means that a single user can only have a single Intranet IP from a single client machine.
Split Tunnel – by default, all traffic from the VPN Client is sent across the VPN Tunnel. For high security environments, this is usually what you want, so the datacenter security devices can inspect the client traffic.
Alternatively, Split Tunnel lets you choose which traffic goes across the tunnel, while all other client traffic goes out the client’s normal network connection (directly to the Internet).
Enable Split Tunnel in a Session Policy/Profile – the Session Policy/Profile with Split Tunnel enabled can be bound to the Gateway Virtual Server, which affects all VPN users, or it can be bound to a Gateway AAA Group.
Intranet Applications define traffic that goes across the Tunnel – If Split Tunnel is enabled, then you must inform the VPN Client which traffic goes across the Tunnel, and which traffic stays local. Intranet Applications define the subnets and port numbers that go across the Tunnel. The Intranet Applications configuration is downloaded to the VPN Client when the Tunnel is established.
- Intranet Applications – Route Summarization – If Split Tunnel is enabled, a typical configuration is to use a summarized address for the Intranet Applications. Ask your network team for a short network prefix that matches all internal IP addresses. For example, every private IP address (RFC 1918) can be summarized by three route prefixes. The summarized Intranet Applications can then be assigned to all VPN Clients. Most networking training guides explain route summarization in detail.
- If your internal servers all have IP addresses that start with 10., then you can create an Intranet Application for 10.0.0.0 with netmask 255.0.0.0. This Intranet Application would send all traffic with Destination IP Addresses that start with 10. across the VPN tunnel.
- Intranet Applications – Specific Destinations – Alternatively, you can define an Intranet Application for every single destination Server IP Address and Port. Then bind different “specific” Intranet Applications to different users (AAA Groups). Note: this option obviously requires more administrative effort.
- For configuration details, see SSL VPN: Intranet Applications.
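Conceptually, the split-tunnel decision is a prefix match of each destination IP against the Intranet Applications list. A minimal sketch using Python's ipaddress module, assuming the three standard RFC 1918 summarized prefixes (the real decision is made by the VPN client plug-in, not by code like this):

```python
import ipaddress

# Summarized Intranet Applications: the three RFC 1918 private prefixes.
intranet_apps = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def goes_across_tunnel(dest_ip: str) -> bool:
    """Return True if traffic to dest_ip matches an Intranet Application."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in intranet_apps)

print(goes_across_tunnel("10.1.2.3"))  # True  - sent across the VPN Tunnel
print(goes_across_tunnel("8.8.8.8"))   # False - goes out the client's normal NIC
```

Binding these summarized Intranet Applications to all VPN Clients corresponds to the route-summarization approach described above.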
Split DNS – If Split Tunnel is enabled, then Split DNS can be set to Remote, Local, or Both. Local means use the DNS servers configured on the Client. Remote means use the DNS Servers defined on the Citrix ADC. Both will check both sets of DNS Servers.
There are three methods of controlling access to internal Servers across the VPN Tunnel – Authorization Policies, Network Firewall (usually with Intranet IPs), and Intranet Applications (Split Tunnel).
- Authorization Policies control access no matter how the VPN Tunnel is established. These Policies use Citrix ADC Policy Expressions to select specific destinations and either Allow or Deny. In Citrix ADC 11.1 and older, Authorization Policies use Classic Policy Expressions only, which have a limited syntax. In Citrix ADC 12 and newer, Authorization Policies can use Default Syntax Policy Expressions, allowing matching of traffic based on a much broader range of conditions.
- Intranet Applications – If Split Tunnel is enabled, then Intranet Applications can be used to limit which traffic goes across the Tunnel. If the Intranet Applications are “specific”, then they essentially perform the same role as Authorization Policies. If the Intranet Applications are “summarized”, then you typically combine them with Authorization Policies.
- Network firewall (IP Pools) – If Intranet IPs (IP Pools) are defined, then a network firewall can control access to destinations from different VPN Client IPs. If Intranet IPs are not defined, then the firewall rules apply to the SNIP, which means every VPN Client has the same firewall rules.
VPN Tunnel Summary
In summary, to send traffic across a VPN Tunnel to internal servers, the following must happen:
- If Split Tunnel is enabled, then Intranet Applications identify traffic that goes over the VPN tunnel, based on Destination IP/Port.
- Authorization Policies define what traffic is allowed to exit the VPN Tunnel to the internal network.
- Static Routes for internal subnets – to send traffic to a server, the Citrix ADC needs a route to the destination IP. For VPN, Citrix ADC is usually connected to both DMZ and Internal, with the default route (default gateway) on the DMZ. To reach remote internal subnets, you need static routes for internal destinations through an internal router.
- Network Firewall must allow the traffic from the VPN Client IP – either SNIP, or Intranet IP (IP Pool).
- Reply traffic – If the VPN Client is assigned an IP address (Intranet IPs aka IP Pool), then server reply traffic needs to route to a Citrix ADC SNIP. On the internal network, create a static route with IP Pool as destination, and a Citrix ADC SNIP as Next Hop.
Network Boot and PXE
Network Boot – when you configure a machine to boot from the network, the machine downloads its operating system from a TFTP server while the machine is still in BIOS boot mode. The operating system download happens every time the machine boots. The machine does not need any hard drives because it gets everything it needs from the network.
- Bootstrap file – the first file downloaded from the TFTP server is called the Bootstrap, which is a small file that starts enough of the operating system so the machine can connect to the network and download the rest of the operating system files.
- NICs and Network Boot – Almost every Network Card (NIC), including virtual machine NICs, has Network Boot capability.
- A notable exception is Hyper-V: Legacy NICs can Network Boot, but Synthetic NICs cannot.
- Configure BIOS to boot from network – to configure a machine to network boot, in the machine’s BIOS, there should be an option to Network Boot from a Network Card (NIC). Move that boot option to the top of the list.
PXE (Pre-boot Execution Environment) – PXE is a mechanism for Network Boot machines to discover the location of the bootstrap file. PXE is based on the DHCP protocol.
PXE works like this:
- Get IP from DHCP – The network boot machine can’t download anything until it has an IP address, which it gets from DHCP.
- Discover TFTP Server address – Then the machine uses DHCP to get the IP address of the TFTP Server and the name of the bootstrap file that should be downloaded from the TFTP Server.
Network Boot without PXE – You can also Network Boot without PXE by booting from an ISO file or local hard drive that has just enough code on it to get the rest of the bootstrap from the TFTP server. DHCP is still usually used to get an IP address, but the IP address of the TFTP server is usually burned into this locally accessible code.
In more detail, PXE works as follows:
- DHCP Request to get IP – The NIC performs a DHCP Request to get an IP address.
- PXE Request to get TFTP info – The NIC performs a PXE Request (second DHCP Request) to get the TFTP IP address and file name.
- Download from TFTP – The NIC downloads the bootstrap file from the TFTP server and runs it.
- Run the bootstrap – The bootstrap file usually downloads additional files from a server machine (e.g. Citrix Provisioning Server) and runs them.
TFTP Server – the TFTP Server/Service can be running from Citrix Provisioning Servers, or you can build some Linux machines and run TFTP from them.
PXE and DHCP
DHCP Scope Options for PXE – The TFTP Server Address and the Bootstrap file name are delivered using DHCP Scope Options.
- The response for the initial DHCP Request for an IP Address can include the TFTP address and bootstrap file in DHCP Scope Options 66 and 67.
- Or, if the response for the initial DHCP Request for an IP address does not include Options 66 and 67, then the network boot machine does another DHCP Request, this time on port UDP 4011. A PXE Server should be listening on this port so it can respond with the TFTP Server address and the bootstrap file name.
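DHCP Scope Options are simple type-length-value entries inside the DHCP packet's options field. A minimal sketch of how Options 66 and 67 would be encoded (the TFTP address is a placeholder; ARDBP32.BIN is a common Citrix Provisioning bootstrap file name):

```python
# Encode a DHCP option as raw type-length-value bytes:
# option code (1 byte), value length (1 byte), then the value itself.
def dhcp_option(code: int, value: bytes) -> bytes:
    return bytes([code, len(value)]) + value

# Option 66: TFTP Server Name (often an IP address as a string).
opt66 = dhcp_option(66, b"192.0.2.10")

# Option 67: Bootfile Name (e.g. the Citrix Provisioning bootstrap).
opt67 = dhcp_option(67, b"ARDBP32.BIN")

print(opt66.hex())  # starts with 420a: 0x42 = 66, 0x0a = 10-byte value
print(opt67.hex())
```

A real DHCP Response carries many such options back-to-back, terminated by the End option (255).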
Two DHCP port numbers – In other words, the Network Boot machine makes either one or two DHCP Requests. The first DHCP Request is always on UDP 67, while the second is on UDP 4011. Having two separate port numbers allows the two DHCP Servers to perform different functions: the UDP 67 DHCP Server is responsible for handing out IP addresses, while the UDP 4011 DHCP Server is responsible for handing out TFTP addresses and bootstrap file names.
- PXE Server = DHCP – A PXE Server running a PXE Service is nothing more than a DHCP server that listens on UDP 4011. Citrix Provisioning Servers have a PXE Service that can be optionally started if you are unable to add Options 66 and 67 to your primary DHCP servers.
PXE Request does not cross routers – The second DHCP Request has the same limitations as the first DHCP Request in that neither DHCP request can cross a subnet boundary unless the local router is configured to listen for the UDP 4011 DHCP Request and forward it to a PXE Server. Most routers can be easily configured for UDP 67, but UDP 4011 forwarding is not a typical configuration. It might be easier to put PXE Servers on the same subnet as the Network Boot machines. Or add DHCP Scope Options 66 and 67 to the primary DHCP Servers.
PXE Service Redundancy – If you use a PXE Service that is separate from your main DHCP Servers, then you want at least two PXE Servers that are reachable from your Network Boot clients. PXE Service redundancy works the same as DHCP Server redundancy except that you don’t have to replicate any DHCP databases. An easy way to provide redundancy for PXE is to put the PXE servers on the same subnet as the Network Boot clients.
If TFTP is not reachable, then your Network Boot clients can’t boot. DHCP Scope Option 66 can only point to a single TFTP Server IP Address. If you try to add two TFTP Server addresses, then either the Network Boot clients won’t boot, or they’ll only use the first TFTP Server address.
Here are some workarounds for this limitation. - Load Balance TFTP using Citrix ADC – Use Citrix ADC to load balance two or more TFTP Servers and configure DHCP Option 66 to point to the Citrix ADC Load Balancing VIP. - DNS Round-Robin – Option 66 can point to a DNS Round Robin-enabled DNS name, where the single DNS name resolves to both TFTP Servers’ IP addresses. This assumes the DHCP Client receives DNS Server information from the DHCP Server. - Separate Option 66 configured on each DHCP Server – If you have multiple DHCP Servers, each DHCP Server can send back a different Option 66 TFTP Server IP address. PXE instead of DHCP Scope Option 66 – Another High Availability workaround is to not use DHCP Scope Option 66 and instead install the PXE Service on two servers. The Network Boot Clients would need to be able to reach both PXE Servers. If Network Boot clients and PXE Services are on the same subnet, then each PXE Service can send back a different TFTP Server IP address. Either PXE Service can respond to the PXE Request, so if one PXE Server is down, then the other PXE Server will respond. - This option works best if PXE Service and TFTP Service are installed on the same server, like it usually is in Citrix Provisioning environments. Citrix Provisioning and TFTP Citrix Provisioning (CPV) is the new name for Citrix Provisioning Services. TFTP Server is usually installed and running on each Citrix Provisioning server. In addition to normal DHCP PXE options for Network Boot, Citrix Provisioning has several additional options for delivering the TFTP Server addresses to CPV Target Devices (Network Boot Clients). These options are: - PXE Service on CPV servers on same subnet as Target Devices - CPV Servers are connected to the same subnet as the Target Devices. PXE Service is installed and running on each CPV server. Either PXE Service can respond to PXE Requests from the Target Devices on the same subnet. 
- For large environments, use a /21 or smaller subnet mask to allow hundreds of Target Devices on one subnet. A /21 subnet mask provides 2,048 addresses (2,046 usable hosts) on one subnet.
- You can also install a DHCP Service (e.g. Microsoft DHCP) onto the Citrix Provisioning servers. Then you don’t need to configure the local router to forward DHCP requests to a different DHCP server.
- Target Devices boot from CPV Boot ISO
- The Boot ISO has the TFTP Server IP addresses (Citrix Provisioning server IP addresses) pre-configured.
- Included with CPV installation is a Boot ISO Creator called Boot Device Management.
- Note: The Boot ISO uses a different TFTP Service than normal PXE. Normal TFTP is UDP 69, while the Boot ISO connects to a TFTP service called the Two-stage Boot TFTP Service, which listens on UDP 6969.
- Target Device Boot Partition
- CPV can burn boot code into a hard disk attached to each Network Boot machine.
- The boot partition boot process also uses the Two-stage Boot TFTP Service.
Citrix ADC Global Server Load Balancing (GSLB)
GSLB is DNS
GSLB is DNS – Citrix ADC receives a DNS Query and returns an IP address in the DNS Response, which is exactly what a DNS Server does. Enabling Global Server Load Balancing (GSLB) on a Citrix ADC essentially turns your ADC into a DNS server.
GSLB Purpose – The purpose of GSLB is to resolve DNS names that have multiple IP address responses. A web site can be hosted in multiple datacenters. Each data center has different IP addresses. To reach the web site in a particular data center, you use one IP address. To reach the web site in a different data center, you use a different IP address. Instead of creating a different DNS name for each IP address, it would be more convenient for the user if a single DNS name could resolve to both IP addresses.
How GSLB chooses an IP address – GSLB has several methods of choosing the IP address that is given out in the DNS response.
- IP Address Monitoring – Is the IP address reachable?
If not, then don’t give out that IP address. - Active/Passive – Is the website only “active” at one IP address? If so, then give out that “active” IP address. - If the “active” IP address is down, then give out the “passive” IP address instead. This is automatic and there’s no need to manually change the DNS record. - Proximity – Which IP address is closest to the user? - Site persistence – Which IP address did the DNS Client get last time it submitted a DNS Query? - Load Balancing Method – which IP address is receiving less traffic? Active IP address Monitoring – GSLB will not give out an IP address unless that IP address is UP. GSLB has two options for determining if an IP address is reachable or not: - Ask ADC for VIP status – If the Active IP is an ADC Virtual Server IP, then GSLB can ask the ADC appliance for the status of the VIP. - Citrix ADC uses a proprietary protocol called Metric Exchange Protocol (MEP) to transmit GSLB-specific information, including VIP status, between ADC appliances. - Monitors – for all other cases, including ADC VIPs, GSLB can use a Monitor to periodically probe the Active IP address to make sure it’s reachable. For TCP services, a simple TCP three-way handshake connection might be sufficient. For UDP, some other monitoring mechanism is needed (e.g. ping). - Internet is up? – For public DNS, GSLB needs to determine if Internet clients can reach the Active IP address across the Internet. - Active IP remote from GSLB DNS – For Active IPs in a datacenter that is remote from the GSLB DNS listener appliance, configure the GSLB Monitoring Probe to route across the Internet, so the Monitoring Probe is essentially using the same path that an Internet client would use. Proximity Load Balancing Methods – GSLB supports two Proximity Methods: - Static Proximity – The Source IP of the DNS Query is looked up in a location database to determine where the DNS Query came from. 
The GSLB Active IPs are also looked up in the location database to determine the location of each Active IP. The Active IP address that is closest to the DNS Query’s Source IP is returned.
- Citrix ADC has a built-in static proximity location database that you can use. Or you can import a database downloaded from a geolocation provider. These databases are CSV files containing IP ranges and coordinates.
- Dynamic Proximity – each Citrix ADC appliance participating in GSLB pings the Source IP of the DNS Query to determine which ping is fastest.
- If ping is blocked to the Source IP, then you can configure Static Proximity as a backup method.
- DNS Query Source IP is the IP address of the client’s DNS Recursive Resolver server, and not the actual IP address of the client machine. If a client machine uses a DNS Server that is in a different city than the client machine, then the proximity results will be distorted.
- EDNS Client Subnet (ECS) is intended to solve this problem. Newer versions of Citrix ADC support ECS.
GSLB is only useful if a single DNS name can resolve to multiple IP addresses. GSLB selects one of those IP addresses and returns it in response to a DNS Query. If the DNS name can only ever respond with one IP address, or if you don’t want GSLB to choose a different IP address automatically, then it would be easier to just leave the single DNS name with a single IP address on regular DNS servers.
Limitations of regular DNS Servers:
- The DNS Server doesn’t care if the IP address is reachable or not. There’s no monitoring.
- The DNS Server doesn’t know which IP address is closest to the user. There’s no proximity load balancing.
- The DNS Server can’t do site persistence, so you could get a different IP address every time you perform the DNS Query.
- DNS records have a Time-to-live (TTL).
If TTL is high, and if you manually change the DNS record to a different IP address, then it could take as long as the TTL for the change to be propagated to all DNS Recursive Resolver servers and DNS Clients.
- GSLB DNS responses have a default TTL of 5 seconds.
GSLB is not in the data path – once GSLB returns an IP address to the user, the user connects to the IP address and GSLB is done until the user needs to resolve the DNS name again. Since GSLB is just DNS, Citrix ADC GSLB can return any IP address, even if that IP address is not owned by an ADC appliance.
- Short GSLB TTL – GSLB DNS TTL defaults to 5 seconds, so the user might need to resolve the DNS name again in the very near future. HTTP requests are short-lived and thus might require frequent DNS Queries, while TCP connections, like Citrix ICA connections, are long-lived and might only need a DNS Query at the beginning of the connection.
Site Persistence – Once an Active IP address is selected for a user, you probably want the user to always connect to that Active IP address for a period of time. This is similar to Load Balancing Persistence needed for Web Sessions. GSLB has three methods of Site Persistence:
- Source IP – GSLB records in memory the DNS Query’s Source IP and the Active IP that was given in response to a DNS Query. Each GSLB-enabled DNS name has its own Source IP persistence table. Multiple ADC appliances participating in GSLB replicate the Source IP persistence tables with each other using the proprietary Metric Exchange Protocol (MEP).
- Cookie Persistence – For Active IPs that are HTTP VIPs on a Citrix ADC appliance/pair connected by GSLB MEP, the first HTTP Response from the Citrix ADC VIP will include a cookie indicating which GSLB Site (data center) the HTTP Response came from. The HTTP Client will include this Site Cookie in the next HTTP Request to the HTTP VIP.
After the DNS TTL expires, if the client’s DNS Query gets a different IP address response that is in a different GSLB Site, then the HTTP Request will be sent to the wrong VIP. Citrix ADC has two options for getting the HTTP Request from the wrong VIP to the correct VIP:
- HTTP Redirect – redirect the user to a different site-specific DNS name (a Site Prefix is added to the original DNS name). This requires the SSL VIP’s certificate to match the original GSLB-enabled DNS name, plus the new site-specific DNS name.
- Proxy – proxy the HTTP Request to the HTTP VIP in the correct GSLB Site. This means that Citrix ADC in the wrong GSLB Site must be able to forward the HTTP Request to the HTTP VIP in the correct GSLB Site.
Multiple ADC GSLB appliances – for redundancy, you will want to enable GSLB on at least two pairs of ADC appliances, usually in different datacenters. The location of the ADC GSLB appliances is unrelated to the IP addresses that are given out in DNS Responses to DNS Queries.
- Metric Exchange Protocol – the multiple ADC GSLB appliances communicate with each other using Citrix’s proprietary Metric Exchange Protocol (MEP). MEP transmits GSLB-specific information, including: Dynamic Proximity latency results, Site persistence, IP Address Traffic Load, and VIP Status (for monitoring).
- Identical GSLB Configuration on all appliances – DNS Queries are delegated to multiple ADC GSLB appliances. It’s not possible to control which Citrix ADC appliance/pair resolves the DNS Query. Thus all Citrix ADC appliance/pairs that resolve GSLB DNS names must have identical GSLB configuration, so the DNS responses are the same no matter which Citrix ADC appliance/pair resolves the DNS name.
DNS listener – Each ADC GSLB appliance needs at least one DNS listener. For public DNS (Internet DNS), the DNS listener IP address must be reachable from the Internet. For internal private DNS, the DNS listener IP address should be an internal IP address.
ADC has several methods of listening for DNS requests:
- ADNS Service – ADC listens for DNS Queries on an ADC SNIP. ADNS Services can only resolve DNS names that are created locally on the ADC appliance. The ADNS service cannot ask other DNS servers to resolve DNS names. You can create more than one ADNS service listener on a single ADC appliance.
- DNS Load Balancing (DNS Proxy) – a Load Balancing Virtual Server VIP with DNS as the protocol. The load balancing services point to your normal DNS servers. When the Load Balancing VIP receives a DNS Query, it will first look in its GSLB configuration for a match. If there’s no match, then ADC will forward the DNS Query to your DNS servers so they can resolve it. This configuration is called DNS Proxy. You can create multiple DNS Load Balancing VIPs on a single appliance.
Both Public DNS and Internal Private DNS on one GSLB appliance? – ADC GSLB only has one DNS database. If you create DNS listeners for both internal private DNS and public DNS on the same appliance, then be aware that there is no real separation between the two types of DNS Queries. A more secure approach is to have separate ADC GSLB appliances for public DNS vs internal DNS.
- DNS Views – in some cases you can configure DNS Views to provide different IP address responses depending on whether the DNS Client is a public machine or an internal machine. You can also use DNS Policies to block DNS Requests from Internet machines for specific internal private DNS names.
Delegate GSLB-enabled DNS names to Citrix ADC GSLB DNS listeners. Delegation is configured by creating NS (Name Server) records in the existing DNS zone. NS records are a way of telling a Recursive Resolver “I don’t have the answer, but these other DNS servers do have the answer”. There are a few methods of doing this delegation:
- In the existing DNS zone, delegate specific DNS names to Citrix ADC GSLB DNS Listeners.
  - For example, you can delegate gateway.company.com by creating two NS records for gateway.company.com and setting them to the IP addresses of the GSLB DNS Listeners.
  - Each DNS name needs a separate delegation (a separate set of NS records).
- In the existing DNS zone, delegate a sub-zone (e.g. gslb.company.com) to Citrix ADC DNS. Then create CNAMEs for each GSLB-enabled DNS name, aliased to an equivalent DNS name in the sub-zone. For example:
  - Create two NS records for gslb.company.com and set them to the IP addresses of the GSLB DNS Listeners.
  - Then create a CNAME for gateway.company.com aliased to gateway.gslb.company.com. Since the gslb.company.com sub-zone is delegated to Citrix ADC GSLB DNS Listeners, Citrix ADC will resolve this DNS name.
  - Create CNAMEs for any additional GSLB-enabled DNS names.
- Move the entire existing DNS zone to Citrix ADC.
  - Note: Citrix ADC was never designed as a full-fledged DNS service, so you might find limitations when choosing this option.

Other GSLB Use Cases

GSLB and Multiple Internet Circuits – a common use case for GSLB is when you have multiple Internet circuits connected to a single datacenter and each Internet circuit has a different public IP subnet. In this scenario, you have one DNS name and multiple public IP addresses, which is exactly the scenario that GSLB is designed for.

- Local Internet Circuit Monitoring – GSLB Services need a monitor that can determine whether the local Internet circuit is up. You don't want to give out a public IP on a particular Internet circuit if that circuit is down. You typically configure the GSLB Monitor to ping a circuit-specific IP address (e.g. the ISP router).

Internal GSLB – GSLB can also respond to internal DNS Queries by giving out internal private IPs.
However, there are a couple of differences for internal private IP addresses when compared to public IP addresses:

- Internal private IPs are not in the location database – if GSLB is configured for static proximity load balancing, then you must manually add each internal subnet to the location database and specify geographical coordinates for each internal subnet.
- Internal private IPs are not affected by an Internet outage – GSLB monitoring for internal Active IPs is usually configured differently than GSLB monitoring for public Active IPs. If the Internet goes down in a datacenter, you want GSLB to stop giving out public IP addresses for that datacenter; internal users, however, are usually not affected. This means you need different monitoring configurations for public IPs vs. internal private IPs.

See Global Server Load Balancing (GSLB) for information on how to configure GSLB on ADC appliances.
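Put together, the DNS-side behavior described in this section amounts to: answer a query only with site IPs whose monitors report up, cycling through them with the configured load-balancing method. A minimal round-robin sketch, where all names, IPs, and monitor states are invented:

```python
# Toy GSLB DNS resolver: per-name list of (site, public_ip, monitor_up).
# Monitor state for a public IP would come from pinging a circuit-specific
# address (e.g. the ISP router); internal private IPs would need their own,
# different monitors, as described above.
gslb_services = {
    "gateway.company.com": [
        ("dc1", "203.0.113.10", True),
        ("dc2", "198.51.100.10", False),   # circuit down: never handed out
    ],
}
_counters = {}

def resolve(name):
    """Round-robin over the healthy IPs for a GSLB-enabled name."""
    healthy = [ip for _site, ip, up in gslb_services.get(name, []) if up]
    if not healthy:
        return None            # no site is up (or name is not GSLB-enabled)
    i = _counters.get(name, 0)
    _counters[name] = i + 1
    return healthy[i % len(healthy)]

print(resolve("gateway.company.com"))   # only dc1's IP while dc2 is down
```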
Source: https://www.carlstalhood.com/category/netscaler/citrix-adc-12-1/
Picture sitting in your car, maneuvering through busy downtown traffic while talking on your phone or sending a few texts. This isn't a scene of illegal texting or phone use. Why not? Because the car in this scenario is driving itself, leaving the passengers inside free to use their mobile phones. (In Google's ideal scenario, you'd be talking on an Android phone.)

This is Google's image of the future. The company known best for its search engine announced this past weekend that its engineers are working on developing technology for cars that can drive themselves. Autonomous cars may be a bit far afield from Google's normal work in search, browsers, operating systems and maps, but the company is looking to head down a new road.

"Our automated cars, manned by trained operators, just drove from our Mountain View campus to our Santa Monica office and on to Hollywood Boulevard," wrote Sebastian Thrun, a distinguished software engineer at Google, in a Saturday blog post. "They've driven down Lombard Street, crossed the Golden Gate bridge, navigated the Pacific Coast Highway, and even made it all the way around Lake Tahoe. All in all, our self-driving cars have logged over 140,000 miles. We think this is a first in robotics research."

Thrun also noted that they use video cameras, radar sensors and a laser range finder to virtually "see" other cars and the basic traffic flow. The company that introduced Google Maps and Google Earth also used mapping technology to navigate the road ahead. And they took advantage of Google's massive data centers to hold and process all of this information. While the autonomous cars were on the road, they were occupied by a "trained safety driver," as well as a software engineer who could monitor the vehicle's software operations.

"We've always been optimistic about technology's ability to advance society, which is why we have pushed so hard to improve the capabilities of self-driving cars beyond where they are today," Thrun wrote.
“While this project is very much in the experimental stage, it provides a glimpse of what transportation might look like in the future thanks to advanced computer science. And that future is very exciting.” So why would Google, a company whose name is a verb for Internet searching, set its sights on autonomous vehicles? One reason is because they can, said Ray Valdes, an analyst with research firm Gartner. “The long answer is that likely there are multiple reasons,” Valdes said. “This may have been an offshoot of the Street View mapping in Google Maps, and that took on a life of its own. Probably the project was not killed because it is cool, had support of senior management, and there is some potential reward further down the road, so to speak.” Much as it happened after Google got into the mobile phone market and the operating system arena, industry watchers are wondering if Google is losing focus on what makes the company money. Self-driving cars? How does that fit into Google’s overall strategy? “Although the car project does raise the issue of loss of focus for Google, at the same time, Google’s search business has become mature, which means that the company needs to cast a wide net to look for new sizable market opportunities,” Valdes said. “The field of autonomous transportation could, in 10 to 15 years time, be larger than the search engine business. It’s an extremely long shot for Google, but the investment is modest, it leverages existing core initiatives — Google Maps — and does support the “geeky” aspect of their brand today.” Rob Enderle, an analyst with the Enderle Group, isn’t as optimistic. “Google is a company that can’t seem to spell the word “focus,” which means activities like this are likely to be more distracting than successful,” he said. “They enter a field where companies like VW and Intel have been active for some time, and penetrating the automotive market can be even more daunting than penetrating the enterprise market.”
Source: https://channeldailynews.com/news/google-is-developing-self-driving-cars/11912
Trust is integral in any relationship, whether personal or professional. Organizations would not be able to move forward without trust. When new changes are implemented, businesses require constant cooperation for evaluation. Trust holds an enterprise together throughout implementation processes and ensures everyone is on the same page regardless of changes made within an organization. From dealing with complex challenges to everyday problems within a business, trust in leadership, in fellow employees, and to some extent even in the purpose of the company is what keeps an organization running.

And this is only on the business end of things. Trust plays an integral role in the relationship between a business and consumer. Consumers need to be able to trust the business in terms of quality, efficiency, and punctuality of deliverables, consistency of products and services, and security of personal data. Without the key component of trust, things would fall apart.

Trust in the Digital Age

Before the digital age dawned in most aspects of business, trust was simpler to build and repair. In an article in the Harvard Business Review, Rachel Botsman states, "The characteristics of 'institutional trust' were that it was big, hierarchical, centralized, gated, and standardized." This worked for companies like Goldman Sachs and AT&T, but doesn't translate into a working trust relationship for the network of market-based companies like Airbnb and Etsy. Botsman elaborates, "The DNA of 'peer trust' is built on opposite characteristics – micro, bottom-up, decentralized, flowing and personal."

As things become more digital, the way trust is built, lost, and repaired is being turned upside down. For example, ideas that would have once been considered potentially risky, like renting your home out to a stranger based on reviews by other strangers, or getting a ride with a stranger, are now the basis on which booming business models are built.
It’s interesting—more and more people trust these digital systems, but choose to be more critical when it comes to other digital facets like social media. Could it possibly be because these systems serve a more tangible purpose, in that they save users money, or provide an alternate experience? Social media on the other hand is not often associated with a business purpose, but rather as a manipulated tool to boost publicity that often strategically omits business details. That being said, society has increasingly put a lot of trust in all digital systems, giving them an almost godlike power to control our daily lives. Stories of cyber security breaches and bucket loads of data being wiped out are manifold; yet, people continue to trust in digital systems like apps, websites, and online company platforms with important and often private data. Popular apps and sites ask for permissions and information that we would be wary to hand over to a stranger, yet without hesitation, we hit “accept.” According to Patricia L. Hadre, University of Oklahoma, “This blanket trust occurs because many people are ill-equipped to judge the trustworthiness of specific technologies. This inability to discriminate quality in technologies, coupled with the systemic social wave of digitization, leads many people to treat digital tools and systems as a generic whole.” It becomes the responsibility of the business to build and keep that trust, in every way possible. Digital trust is built through a combination of security and ethics at each and every stage of any digital journey. One without the other simply won’t do anymore. As companies move to digital platforms, revenue channels are driven by chains of data, integration, and infrastructure. These are aspects the consumer never really sees. Rather, consumers implicitly trust the information they provide will be protected. If this security is breached, it won’t matter where it happens. 
No matter how legitimate the hows, whens, and whys are, the result will be a consumer whose trust has been irreparably broken — this is the only thing that matters. In today’s world, businesses can lose customers in just a few clicks. While one dissatisfied customer can take business elsewhere, the far-reaching powers of social media means that they can take hundreds of customers along with them. Take the example of the data breach within Target in 2013 where about 40 million customers’ credit and debit card information was stolen. Customers took to social media saying Target had “failed” them, and were extremely angry at the company’s inefficiency in dealing with the aftermath of this data theft. A violation of trust thus becomes an issue that can affect the company in significant ways, from revenues to even costing executives their jobs. Although this shift in the trust dynamic will be messy, it is vital for businesses to recognize the importance of building trust in their digital systems. On the upside, the move from traditional institutional trust to a more individual focused trust can be beneficial to both start-ups and established businesses. Amazon is one such established brand that has successfully taken advantage of this shift through the launch of Flex, a crowdsourced delivery service, in Seattle. Even though new complexities in terms of risk, security, and accountability are bound to occur, businesses in the digital age will have to find a way to handle this shift and build trust in their systems. Why? Because ultimately, trust is absolutely necessary for any relationship — business or otherwise — to succeed. This article was first published on calliduscloudcx.com. Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise. 
From Big Data to IoT to Cloud Computing, Newman makes the connections between business, people and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.Com, CIO Review and hundreds of other sites across the world. A 5x Best Selling Author including his most recent “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur and Huffington Post Contributor. MBA and Graduate Adjunct Professor, Daniel Newman is a Chicago Native and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
Source: https://convergetechmedia.com/building-trust-digital-systems/
I am writing a series of posts describing Information Security Risk, from concepts to analysis and management. This is the first, what is Information Security Risk itself. Defining risk is a source of much debate from semantic to philosophical. What is clear is that risk refers to our uncertainty about what will happen in the future. Uncertainty can be slippery to pin down, but it often is defined by our doubt about our knowledge about future events and their consequences. We can be more or less confident about our predictions describing how future events will progress, but we can never be entirely certain without seeing those future events in advance. The Oxford English Dictionary (OED) defines uncertainty as “The quality of being uncertain in respect of duration, continuance, occurrence, etc.; liability to chance or accident. Also, the quality of being indeterminate as to magnitude or value; the amount of variation in a numerical result that is consistent with observation.” My preferred formal definition of risk comes from the International Standard for Risk Management, ISO 31000:2018, which describes risk as “the effect of uncertainty on objectives”. This definition makes risk practical as it ties the concept of risk to our objectives as well as ensuring it focuses on our doubt. It is important to understand why we are interested in risk, as a measure of our uncertainty about the future. Understanding risk is important because it informs decisions we make about prioritisation, resource allocation and whether or not to take action. Without these decisions to inform, then risk analysis and management are a pointless waste of resources. If decisions are going to be made on ‘gut-feel’ and implicit mental models, then there is no need to analyse risk. 
Because of our various cognitive biases, a well-analysed risk assessment is likely to conflict with our gut feel, and we need to understand how we will handle that when it happens so that we don't waste resources filling out the paperwork but never actually influencing better outcomes for the risk owners. This definition in ISO 31000 is also adopted by ISO 27000 for Information Security Management Systems (ISMS) vocabulary. ISO 27000 states explicitly that information security risk is the "effect of uncertainty on information security objectives", which are commonly held to be the confidentiality, integrity and availability of information (CIA) and may also include authenticity, accountability, non-repudiation and reliability. While I am unconvinced the information security objectives are themselves the maintenance of the CIA triad as separate from the goals of the risk owner, the identification of objectives is a key activity. I tend to believe the goals of the risk owner (or the more nebulous 'business') are more likely to be: avoiding or preventing security events, minimising the consequences of security events, and balancing the friction that security controls introduce into generating value against the strength of the controls in protecting value. However, it is the job of the security leader and security risk analyst to engage with the risk owner and discover what their goals are. These definitions of risk make no distinction between positive or negative effects and make it clear that effects could be both positive and negative deviations.
There is a common debate about whether that risk can have both a downside and an upside as is often calculated in attempts to estimate the effect on business objectives such as Value at Risk (VaR). The Oxford English Dictionary comes down clearly on the side of risk as a possible negative outcome when it defines risk as “Exposure to the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility.” Or as “Exposure to the possibility of harm or damage causing financial loss, against which property or an individual may be insured. Also: the possibility of financial loss or failure as a quantifiable factor in evaluating the potential profit in a commercial enterprise or investment.“ For Information Security, we are focused on the minimisation of compromises of confidentiality, integrity and availability (CIA) which lead to negative consequences in line with Doug Hubbard’s view that risks can only have a downside. Almost all risk practitioners subscribe to this view, and commonly, the upside of uncertainty is opportunity. As a result, defining Information Security Risk as ‘the negative effect of uncertainty on information security objectives‘ suits us well. This definition is entirely compatible with the definitions of Information Security Risk in a variety of standards, including: The US Government Standard NIST 800-30 Guide for Conducting Risk Assessments which states: “Information security risks are those risks that arise from the loss of confidentiality, integrity, or availability of information or information systems and reflect the potential adverse impacts to organisational operations (i.e., mission, functions, image, or reputation), organisational assets, individuals, other organisations, and the Nation.” The OpenFAIR Risk Taxonomy which states: “Risk estimates the probable frequency and magnitude of future loss“. 
The Carnegie Mellon University OCTAVE risk assessment methodology which states: “A risk is the possibility of suffering harm or loss. Risk refers to a situation where a person could do something undesirable or a natural occurrence could cause an undesirable outcome, resulting in a negative impact or consequence.” The UK Government and NATO standard CRAMM v5.1 states “Risk is the function of two separate components, the likelihood that an unwanted incident will occur and the impact that could result from the incident“. The Cyber Security Body of Knowledge (CyBok) Risk Management and Governance Knowledge Area relies on Ortwin Renn’s definition that: “risk is the possibility that human actions or events lead to consequences that have an impact on what humans value” but also includes the following formal definition: “The probable frequency and probable magnitude of future loss.“ There is a lot more to these industry definitions, but I’ll come to that later as I start decomposing information security risks to see what characteristics and attributes they have.
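The OpenFAIR and CyBok definitions above ("the probable frequency and probable magnitude of future loss") lend themselves to a simple quantitative treatment. The sketch below is a generic Monte Carlo estimate of annualized loss, not the OpenFAIR method itself, and every number in it is invented for illustration:

```python
import random

def simulate_annual_loss(freq_per_year, loss_low, loss_high,
                         years=5000, seed=1):
    """Monte Carlo estimate of expected annual loss for one scenario.

    Event counts are approximated by 200 Bernoulli trials per simulated
    year, and magnitudes are drawn uniformly; both are crude stand-ins
    for the calibrated distributions a real quantitative analysis uses.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        # Number of loss events in this simulated year.
        events = sum(1 for _ in range(200)
                     if rng.random() < freq_per_year / 200)
        # Add up the magnitude of each event.
        total += sum(rng.uniform(loss_low, loss_high) for _ in range(events))
    return total / years

# Hypothetical scenario: roughly one breach every two years, each
# costing between 50k and 250k. Expected loss is near 75k per year.
print(round(simulate_annual_loss(0.5, 50_000, 250_000)))
```

Even a toy model like this makes the trade-offs explicit in a way that "gut feel" does not, which is the point of the definitions above: risk analysis exists to inform decisions about prioritisation and resource allocation.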
Source: https://blog.blackswansecurity.com/2020/01/what-is-information-security-risk/
CDP —CRL distribution point. Field within a digital certificate containing information that describes how to retrieve the CRL for the certificate. The most common CDPs are HTTP and LDAP URLs. A CDP may also contain other types of URLs or an LDAP directory specification. Each CDP contains one URL or directory specification.

certificates —Electronic documents that bind a user's or device's name to its public key. Certificates are commonly used to validate a digital signature.

CRL —certificate revocation list. Electronic document that contains a list of revoked certificates. The CRL is created and digitally signed by the CA that originally issued the certificates. The CRL contains dates for when the certificate was issued and when it expires. A new CRL is issued when the current CRL expires.

CA —certification authority. Service responsible for managing certificate requests and issuing certificates to participating IPSec network devices. This service provides centralized key management for the participating devices and is explicitly trusted by the receiver to validate identities and to create digital certificates.

peer certificate —Certificate presented by a peer, which contains the peer's public key and is signed by the trustpoint CA.

PKI —public key infrastructure. System that manages encryption keys and identity information for components of a network that participate in secured communications.

RA —registration authority. Server that acts as a proxy for the CA so that CA functions can continue when the CA is offline. Although the RA is often part of the CA server, the RA could also be an additional application, requiring an additional device to run it.

RSA keys —Public key cryptographic system developed by Ron Rivest, Adi Shamir, and Leonard Adleman. An RSA key pair (a public and a private key) is required before you can obtain a certificate for your router.
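A revocation check against a CRL reduces to: retrieve the list from one of the certificate's CDPs, verify the issuing CA's signature over it, confirm it has not expired, then look up the certificate's serial number. A simplified sketch of that last logic follows; CDP retrieval and signature verification are omitted, and all serial numbers and dates are invented:

```python
from datetime import datetime, timezone

# Toy CRL: validity window plus the set of revoked serial numbers.
# A real CRL is an X.509 structure signed by the issuing CA.
crl = {
    "this_update": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "next_update": datetime(2024, 1, 8, tzinfo=timezone.utc),  # fresh CRL due
    "revoked_serials": {0x1A2B, 0x3C4D},
}

def check_revocation(serial: int, now: datetime, crl: dict) -> str:
    """Return 'revoked', 'good', or 'stale' for a certificate serial."""
    if not (crl["this_update"] <= now < crl["next_update"]):
        return "stale"          # must fetch a fresh CRL from a CDP first
    return "revoked" if serial in crl["revoked_serials"] else "good"

now = datetime(2024, 1, 3, tzinfo=timezone.utc)
print(check_revocation(0x1A2B, now, crl))   # revoked
print(check_revocation(0x9999, now, crl))   # good
```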
Source: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_conn_pki/configuration/xe-16-11/sec-pki-xe-16-11-book/sec-pki-overview.html
The “Stuxnet” computer worm made international headlines in July, when security experts discovered that it was designed to exploit a previously unknown security hole in Microsoft Windows computers to steal industrial secrets and potentially disrupt operations of critical information networks. But new information about the worm shows that it leverages at least three other previously unknown security holes in Windows PCs, including a vulnerability that Redmond fixed in a software patch released today. As first reported on July 15 by KrebsOnSecurity.com, Stuxnet uses a vulnerability in the way Windows handles shortcut files to spread to new systems. Experts say the worm was designed from the bottom up to attack so-called Supervisory Control and Data Acquisition (SCADA) systems, or those used to manage complex industrial networks, such as systems at power plants and chemical manufacturing facilities. The worm was originally thought to spread mainly through the use of removable drives, such as USB sticks. But roughly two weeks after news of Stuxnet first surfaced, researchers at Moscow-based Kaspersky Lab discovered that the Stuxnet worm also could spread using an unknown security flaw in the way Windows shares printer resources. Microsoft fixed this vulnerability today, with the release of MS10-061, which is rated critical for Windows XP systems and assigned a lesser “important” threat rating for Windows Vista and Windows 7 computers. In a blog post today, Microsoft group manager Jerry Bryant said Stuxnet targeted two other previously unknown security vulnerabilities in Windows, including another one reported by Kaspersky. Microsoft has yet to address either of these two vulnerabilities – known as “privilege escalation” flaws because they let attackers elevate their user rights on computers where regular user accounts are blocked from making important system modifications. 
Anti-virus researchers also discovered that Stuxnet leverages a Windows vulnerability that Microsoft patched back in 2008. Roel Schouwenberg, a senior anti-virus researcher at Kaspersky, said initially it wasn’t clear why the worm’s designers included such an antiquated vulnerability, which would almost certainly set off alarm bells inside of any organization using common intrusion detection and prevention tools. But Schouwenberg said the inclusion of that 2008 vulnerability made more sense when he learned that most industrial control system networks do not employ these defensive tools or even basic network logging, as is common in most corporate networks. Consequently, he said, Stuxnet behaves differently depending on what type of network it thinks it is running on. Stuxnet performs some rudimentary checking to see whether it is on a corporate network or a control systems network: If it detects that it is running on a corporate network, it won’t invoke the older 2008 vulnerability, Schouwenberg said. The Kaspersky analyst said that whoever is responsible for writing the Stuxnet worm appears to be quite familiar with the way that SCADA systems are configured. Stuxnet, which targeted specific SCADA systems manufactured by Siemens, also disguised two critical files by signing them with the legitimate digital signatures belonging to industrial giants Realtek Semiconductor Corp. and JMicron. “If you look at the way they must have organized the entire attack, it’s very impressive,” Schouwenberg said. “These guys are absolutely top of the line in terms of sophistication.” News of just how successful this stealthy malware family has been in compromising SCADA systems is still trickling out. Earlier today, IDG News’s Robert McMillan quoted Siemens as saying the worm had infected SCADA systems in at least 14 plants in operation, although Siemens said the infections did not impair production at those plants or cause any malfunction. 
Stuxnet has infected systems in the U.K., North America and Korea; however, the largest number of infections, by far, has been in Iran, IDG reports.

But Joe Weiss, managing partner at Cupertino, Calif.-based Applied Control Systems, said far too many people have been fixated on Stuxnet's impact on Microsoft Windows systems and are missing the fact that its authors are using the worm as a means to an end. For example, researchers at Symantec found that Stuxnet uses default passwords built into Siemens systems to gain access to and reprogram the SCADA systems' "programmable logic controllers" — mini-computers that can be programmed from a Windows system. According to Symantec:

Stuxnet has the ability to take advantage of the programming software to also upload its own code to the PLC in an industrial control system that is typically monitored by SCADA systems. In addition, Stuxnet then hides these code blocks, so when a programmer using an infected machine tries to view all of the code blocks on a PLC, they will not see the code injected by Stuxnet. Thus, Stuxnet isn't just a rootkit that hides itself on Windows, but is the first publicly known rootkit that is able to hide injected code located on a PLC.

"The Department of Homeland Security put out an advisory on Stuxnet on September 2nd, and the only two things it didn't say anything about is how to find it or get rid of it at the PLC level," Weiss said. "People are focusing on what they know and understand, which are the standard Microsoft vulnerabilities. But that's not the scary part. The really scary thing is that right now we don't even know which controllers are trusted and which ones aren't trusted." While the intended target of Stuxnet appears to be the manipulation of Siemens PLCs, Weiss said Stuxnet could have just as easily been designed to attack PLCs made by other SCADA manufacturers.
These and other topics will be the center of discussion at the ACS Control System Cyber Security Conference next week in Rockville, Md. — although the event is closed to the media. “The mechanism [the Stuxnet worm] used to install the Siemens payload came at the very end, which means this isn’t a Siemens problem and that they could have substituted [General Electric], Rockwell or any other PLCs as the target system,” Weiss said. “At least one aspect of what Stuxnet does is to take control of the process and to be able to do…whatever the author or programmer wants it to do. That may be opening or closing a plant valve, turning a pump on or off, or speeding up a motor or slowing one down. This has potentially devastating consequences, and there needs to be a lot more attention focused on it.”
Source: https://krebsonsecurity.com/2010/09/stuxnet-worm-far-more-sophisticated-than-previously-thought/
Machine learning is revolutionizing the way technology is deployed in many different fields, but when it comes to machine learning in data storage systems, things have been a little less dramatic. Big data storage and storage tiering are two areas where the use of machine learning in storage systems shows promise, but it’s in the area of solid state drive (SSD) storage that machine learning may offer the biggest opportunities for improvement. To understand why, it’s necessary to take a quick look at how SSDs work. When flash NAND is made and sold to SSD makers, it comes pre-configured with various trim or register settings. When a controller orders a write to the NAND, the write is made at a certain voltage. Likewise, when a read is carried out, it will be told if a cell has a charge of a certain voltage. These settings are set by the NAND manufacturer and are not the concern of the controller maker. Now let’s assume that the voltage in question is preset at 7V. It turns out that when NAND is new, it’s not necessary to supply a voltage of 7V to a cell. Not only might a charge of 2V work just as well, but it will also allow the cell to work for longer before wearing out. As the cell ages through write cycles it will become necessary to increase that voltage to, say, 3.5V, and only much later in the cell’s lifecycle will the default 7V be necessary. And finally it may be possible to prolong the life of the cell by applying a voltage even greater than the default 7V. By changing the default read and write voltage of the NAND and various other flash register settings – either once or several times during the life of an SSD, it may be possible to increase the endurance of the storage device significantly. And in fact endurance is not the only factor that could be enhanced. Other settings could increase the performance of the SSD, and still others its data retention capabilities. 
Machine Learning In Storage Systems For Optimization

Ultimately, choosing these settings is an optimization problem: endurance may be improved by altering settings without affecting anything else, for example, but in most cases it can be improved only at the expense of performance or retention, or perhaps both. In practice the easiest trade-off is endurance versus retention. That’s because many manufacturers choose settings that offer retention that can be measured in months, but in a data center environment an SSD may only be required to retain data for a day or two when the power is off. By altering the settings to lower the retention time, big gains can often be made in endurance.

But here’s the problem. Conventional 2D NAND may have 30–50 settings, and there is a highly complex interaction between them. That means that changing one can have a large and unexpected effect on another, making it very hard to optimize the settings manually to achieve a particular desired outcome. And when it comes to 3D NAND – the vertically stacked arrays of cells that most NAND makers are switching to – there can be thousands of settings. That makes the optimization fiendishly complex and practically impossible – for humans, at least.

And that’s where machine learning in storage systems comes into the equation: humans may not be able to optimize the thousands of NAND settings in 3D NAND, but it’s exactly the type of exercise that machine learning systems excel at.

Machine Learning In Storage Systems At Early Stages

The only company known to be doing this type of machine learning in storage systems at the moment is an Irish company called NVMdurance, which bills itself as an “automated flash memory optimization company.” Its machine learning technology allows it to take individual manufacturers’ flash NAND and automatically generate viable sets of flash register settings optimized for different operating requirements.
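NVMdurance’s actual search is proprietary, but the general shape of the approach it describes – scoring candidate register settings against a software model of the flash and keeping the most promising for hardware validation – can be sketched with a toy surrogate model and random search. Everything below (the surrogate model, the register count, the scoring) is a hypothetical stand-in, not real NAND behavior:

```python
import random

def surrogate_endurance(settings):
    """Toy stand-in for a fitted model of NAND endurance (hypothetical).
    A real model would be built from measurements of physical flash."""
    score = 0.0
    for i, v in enumerate(settings):
        score += 1.0 - abs(v - 0.5)  # each register's individual effect
        if i > 0:
            # Crude pairwise interaction: neighbouring registers that
            # disagree drag the score down.
            score -= 0.5 * abs(v - settings[i - 1])
    return score

def random_search(n_registers=30, n_trials=5000, seed=0):
    """Evaluate random candidates on the surrogate and keep the best;
    promising candidates would then be validated on real hardware."""
    rng = random.Random(seed)
    best_settings, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = [rng.random() for _ in range(n_registers)]
        s = surrogate_endurance(candidate)
        if s > best_score:
            best_settings, best_score = candidate, s
    return best_settings, best_score

settings, score = random_search()
print(f"best surrogate endurance score: {score:.2f} (max possible 30.00)")
```

A production system would replace random search with something far more sample-efficient (a Bayesian or evolutionary method) and would first reduce the dimensionality of the search space, as Coyle describes below.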
But even when using machine learning in storage systems in this way, the process is far from quick, according to Pearse Coyle, the company’s CEO. “It takes us three months and a hundred pieces of the new flash to generate settings,” he says.

To get an idea of the complexity involved, the company takes the 100 pieces of NAND hardware and subjects them to reads and writes, measuring the results. It then builds a software model of the hardware, and produces “hundreds of millions” of virtual devices, according to Coyle. The machine learning system then tests different parameters on these virtual devices, taking the most promising ones to test on real hardware.

‘Billions Of Permutations’

How many different parameters does the machine learning system test? “There are many billions of permutations, and we use thousands of CPUs in the cloud to do the testing,” says Coyle. “The search space is actually too big, but we quickly see associations between parameters, so we are able to reduce the number of dimensions.”

Using machine learning, Coyle says the company’s technology is able to optimize the NAND’s register settings for endurance, performance or data retention, or even produce a dynamic set of settings for a two-phase life: the first configuration optimized for performance, and then, when performance starts to drop as the NAND ages, optimized for long-term storage.

A further complication with the use of 3D NAND is that the quality of the storage media is poorer compared to 2D NAND, says Coyle. As a result, manufacturers specify that a complex form of error correction called low-density parity-check (LDPC) should be used with it. LDPC involves the use of tables called log-likelihood ratio (LLR) tables, which are time-consuming and hard to generate, and specific to a particular type of NAND with specific settings. Because of this, they are supplied by the NAND manufacturer to SSD makers who want to use a particular type of NAND.
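To give a sense of what an LLR table encodes: under a deliberately simplified channel model in which the read voltage for a stored 0 or a stored 1 follows an equal-variance Gaussian, the log-likelihood ratio has a closed form. Real NAND channels are asymmetric and drift with wear, and the distribution parameters below are invented for illustration:

```python
def llr_gaussian(read_voltage, mu0, mu1, sigma):
    """LLR = log[P(v | bit=0) / P(v | bit=1)] for two Gaussians with
    means mu0, mu1 and shared standard deviation sigma.
    Positive LLR: the cell more likely holds a 0; negative: a 1."""
    return (mu1 ** 2 - mu0 ** 2 + 2 * read_voltage * (mu0 - mu1)) / (2 * sigma ** 2)

# A toy LLR table: precomputed per read-voltage region, which is roughly
# the form an LDPC decoder consumes. Parameters are made up.
mu0, mu1, sigma = 2.0, 4.0, 0.6
table = {v: round(llr_gaussian(v, mu0, mu1, sigma), 2)
         for v in (1.5, 2.5, 3.0, 3.5, 4.5)}
print(table)
```

The point of the sketch is the dependence on mu0, mu1 and sigma: change the register settings and those voltage distributions shift, which is exactly why a vendor-supplied LLR table stops being valid, as the next section describes.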
LLR Tables Using Machine Learning In Storage Systems

So here’s another problem: if the NAND settings are changed – perhaps to settings which are optimized for greater endurance – then the LLR tables are no longer valid. “This completely screws SSD makers who want to differentiate their offerings with different settings,” says Coyle. “We reckon that there must be 60 or so SSD makers who can’t go to market with 3D flash because they can’t use the supplied LLR tables.” But NVMdurance’s machine learning in storage systems technology can automatically generate LLR tables for any sets of 3D NAND register settings that it comes up with.

At this point it’s worth asking why all of this is important in a business context. Is the ability to use machine learning in storage systems to optimize an SSD for better endurance (or anything else) really that important?

Business Case For Machine Learning In Storage Systems

The answer to that question is an unequivocal “yes,” according to Tom Coughlin, founder of data storage consulting firm Coughlin Associates. “Endurance is down significantly with 3D NAND, so there is a need for better endurance,” he says. “One way to get costs down is to get endurance back up. This technology may also help compensate for differences that show up in manufacturing, leading to higher fab yields and therefore lower costs.”

And Coyle says that higher endurance is particularly important in the growing embedded device market, where replacing storage devices is difficult: “You don’t want to have to throw a car away just because the storage chips are no good anymore.” He adds that for cloud providers offering solid state storage as a service, these sorts of endurance gains mean that they can keep making money from their SSD assets for much longer – generating a far higher return on them.
Coyle also points out that for hyperscale users, this use of machine learning in storage systems can allow them to have software running in SSD controllers that divides the SSD life into various stages. It would monitor the SSD, looking at how long it had been running, the number of cycles it had performed, the error rates and so on, and then, when certain thresholds are reached, it could start using new register settings. This could ensure that endurance is maximized, or it could be used to convert high-performance SSDs into longer-lasting but slower ones as they age.

The final question that’s worth asking is how effective this machine learning in storage systems technology is in practice. What are the potential gains? “I have seen twentyfold increases in endurance with some trade-offs, but realistically five to seven times endurance gains are what is probably possible,” concludes Coughlin.
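The staged-lifecycle controller logic described above might look like the following outline. The telemetry fields, thresholds, and register profiles are all hypothetical placeholders; real profiles would come from an optimization run against the specific NAND part:

```python
from dataclasses import dataclass

@dataclass
class DriveTelemetry:
    program_erase_cycles: int
    raw_bit_error_rate: float
    hours_powered_on: int

# Hypothetical register profiles for the two life stages.
PERFORMANCE_PROFILE = {"read_v": 2.0, "write_v": 2.2, "mode": "performance"}
ENDURANCE_PROFILE = {"read_v": 3.5, "write_v": 3.8, "mode": "endurance"}

def select_profile(t, cycle_threshold=3_000, ber_threshold=1e-3):
    """Switch to endurance-oriented settings once wear indicators cross
    (made-up) thresholds; until then, favour performance."""
    worn = (t.program_erase_cycles >= cycle_threshold
            or t.raw_bit_error_rate >= ber_threshold)
    return ENDURANCE_PROFILE if worn else PERFORMANCE_PROFILE

print(select_profile(DriveTelemetry(500, 1e-5, 2_000))["mode"])
print(select_profile(DriveTelemetry(4_200, 5e-4, 20_000))["mode"])
```

A real controller would also hysterese the transition (to avoid flapping near a threshold) and might use more than two stages, but the monitor-then-switch shape is the same.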
Automation, machine learning, artificial intelligence, neural networks — these are all highly transformative technologies that are making quite the impact in the modern world. Artificial intelligence and machine learning, for instance, will power the next generation of smart vehicles, allowing them to be driverless and controlled by safe, effective platforms.

Of course, you’d be forgiven for thinking that’s the only way they’re making an impact in the transportation and automotive industries. Driverless vehicles are, after all, incredibly convenient and useful. Plus, it’s difficult to imagine AI and related technologies making a difference elsewhere. How will these tools play into travel, public transportation, navigation, wayfinding and much more?

Curated travel experiences

Want to go out of town, but don’t have a specific destination in mind? Why not leave the decision up to a chatbot or AI tool? It’s likely you already use a voice assistant such as Siri or Alexa to curate and handle various tasks in your daily life. Maybe it makes sense to trust an AI to choose your next dream vacation, too!

Even if you don’t let the machine curate your trip, you have access to a vast amount of support. Through instant messaging apps like Facebook Messenger and Slack, you can get in touch with an automated bot that can answer queries, pull up flight and travel info, clock travel routes and much more. On platforms such as Priceline or Expedia, you can tap into an algorithm-based recommendation system designed to personalize the online booking experience to your needs. Guess what powers such a system? Machine learning and big data technologies, of course.

Just as many of us rely on voice assistants in our daily lives, so we’ll eventually embrace technologies of the future when it comes to travel and booking. The famous Cosmopolitan hotel in Las Vegas, for example, unveiled a custom-made AI concierge named Rose to help improve guest services and modernize customer experiences.
Self-driving public transportation

With the emergence of self-driving and driverless vehicles comes the opportunity for automated transportation. Imagine taxis or shuttles designed to operate on a schedule, with no human driver behind the wheel. The scenario may sound far-fetched now, but within the next few years, it will become a reality. Several countries, including China and Sweden, have rolled out self-driving shuttles, so there’s already a precedent. Various companies are also testing them here in the States, so it won’t be long until they’re available everywhere.

AI and machine learning systems will tap into a remote network, even from the moving vehicles, to coordinate and analyze a variety of performance and situational data. They’ll know what to do when encountering passersby, other vehicles and even road hazards. Even air and rail travel may soon be automated, too. You may be surprised to hear Boeing is currently working on a pilotless passenger plane.

AI and machine learning can also power and control modern robotics, which provides a multitude of opportunities — both for automating menial tasks and speeding up the average customer experience. It’s not a stretch to imagine bots taking the place of a concierge, doorman, baggage and claims checkers, luggage transport, bartenders and much more.

Oakland International Airport, for instance, deployed something just like this to help guests find their way around the facility. A robot named Pepper greets passengers near Terminal 2, and displays an interactive property map on a proprietary screen nearby. Travelers can talk to the robot to find out information about directions, restroom locations, points of interest and much more.

Piaggio Fast Forward, the company behind the Vespa scooter, designed a robot named Gita that will automatically follow you on short jaunts. What makes it special is that you can stow various items, including luggage, leaving you hands-free. Gita also doubles as a scooter to help people travel locally much faster.
It’s not too much of a stretch to envision a whole fleet of similar robots in airports to help travelers carry luggage or pass through security checkpoints.

Automation, convenience and safety are key

Whether you’re talking about modern artificial intelligence or machine learning, the goal is the same: to automate or update a process to be more efficient. In the case of driverless vehicles, the tech will even help roadways and passengers become much safer. A machine, computer, robot or application never gets exhausted, cranky or burned out. It will never take its eyes or attention away from the task at hand, either.

Vehicles aren’t the only things becoming safer thanks to the technology, and as it continues to evolve, its reach will grow. The future is bright — very bright — and heavily automated.
The girls participating in the program have quickly learned coding and building Websites. “They code it all themselves,” says Angie Schiavoni, co-founder of CodeEd. “They are programming like a real coder would.”

While the key component of the project-based curriculum is developing a Website, the work occurs collaboratively, and much of the learning is peer-based. Volunteers with technical skills head into schools in underserved communities, and CodeEd provides them with curriculum and course materials.

CodeEd was launched in 2010 on the premise that more girls should be introduced to STEM (science, technology, engineering and mathematics) fields and understand that computer science does not have to be a frightening or overwhelming field. As the CodeEd Website points out, as of 2008, only 7 percent of middle-school girls considered computer engineering a desirable career. “Our goal is to spark an interest at an early age,” Schiavoni explains. “We make the classes unintimidating so that when the girls hear about similar opportunities in the future, they will want to take another class.”

CodeEd’s curriculum comes from a social computing course that Schiavoni’s husband, Sep Kamvar, the other co-founder of CodeEd, taught at Stanford. He now teaches in MIT’s Media Lab, but the curriculum and course materials used by CodeEd are free under a Creative Commons Attribution license. At some partner schools, the modular curriculum is introduced through a daylong program; at others, it’s presented once a week. It depends on how the educational partner wants the instruction incorporated.

Initially started in New York City, CodeEd now has other teams of volunteers teaching coding in Boston and San Francisco. These volunteers, who use the computer infrastructure in place at the schools, use team teaching so that when one teacher is at the board, the other checks the students’ work.
Girls in the fifth grade and higher have learned how to format tags, make new Web pages with Unix commands and create a basic navigation bar using HTML. Creativity and self-expression are important and help stave off the impression of coding as “scary, hard work,” according to Schiavoni. When the instruction is complete, students present their Web pages to peers and parents.

Since its inception, between 75 and 100 girls a year have learned to code through CodeEd. “It’s incredibly rewarding to watch these girls come into the classes with no idea what they’re doing and, by the end, be excited by what they’ve built,” Schiavoni says.

A new partnership with Entelo, a social-based talent recruiting company, could expand the program further. For every new hire made by a customer, Entelo plans to donate a year’s worth of CodeEd education for one girl. “Our partnership with Entelo is going to make it much easier to train more girls,” Schiavoni says. “And I think it will spur other companies to have similar innovative ways to give back to the community.”
As long as there has been money to be made, there have been social engineering attacks. Why? Because all social engineering tries to get people to believe something and act upon it.

Social engineering has been happening for a long time, well before email. Back in 1925, a French con artist used social engineering tactics to try and sell the Eiffel Tower. Even the infamous Nigerian Prince scam has its roots in 18th-century cons, while the modern-day version has also been executed via snail mail and fax.

Now, of course, social engineering is most associated with phishing. Some of the most common attack forms are spearphishing and Business Email Compromise (BEC). These take various forms, but all are trying to get the end-user to do something they don't want to do. For example, many BEC attacks will see the hacker posing as an executive, asking for a favor. The favor varies, but it often has to do with purchasing a gift card, since gift cards are more complicated to trace than straight cash. The attacker is hoping that the end-user will buy a gift card because an executive is asking. In reality, the money will go straight into the attacker's wallet.

And hackers have it easy now. Collecting people's information is easy—just a simple Google or LinkedIn search, and hackers will know a person's job, location and interests, allowing them to craft a convincing phishing email.

When there is money to be made, bad actors will do everything they can to obtain it. With phishing, it's never been easier. Unless, that is, you have Avanan, which prevents malicious emails from reaching the inbox, so there's no way for hackers to try and convince recipients to do something.

In this whitepaper from Check Point, learn the long history of social engineering, its troubling future, and how Avanan and Check Point will help keep your organization safe.
• The Parkinson’s Foundation reports that about 1 million people had the disease in 2020, with about 10 million people affected worldwide. Despite this prevalence, scientists are still unsure why Parkinson’s disease affects some people and not others, and there is currently no cure.

• This type of brain disorder typically affects people over the age of 60, and the symptoms worsen with time. Common symptoms include stiffness, difficulty walking, tremors, and trouble with balance and coordination. The disease can also affect the ability to speak and lead to mood changes, tiredness, and memory loss.

• Experiments by some researchers found that the bone morphogenetic proteins 5 and 7 (BMP5/7) can have beneficial effects in a mouse model of the disease. This research may be the first step toward developing a new treatment for Parkinson’s disease.

What is Parkinson’s disease?

Parkinson’s disease (PD) is a movement disorder of the nervous system that worsens over time. As nerve cells (neurons) in parts of the brain weaken, are damaged, or die, people may begin to notice problems with movement, tremors, stiffness in the limbs or the trunk of the body, or impaired balance. As these symptoms become more obvious, people may have difficulty walking, talking, or completing other simple tasks. Not everyone with one or more of these symptoms has PD, as the symptoms appear in other diseases as well. No cure for PD exists today, but research is ongoing, and medications or surgery can often provide substantial improvement with motor symptoms.

Symptoms of the disease

The four primary symptoms are:

• Tremor - Shaking often begins in a hand, although sometimes a foot or the jaw is affected first. The tremor associated with PD has a characteristic rhythmic back-and-forth motion that may involve the thumb and forefinger and appear as “pill-rolling.” It is most obvious when the hand is at rest or when a person is under stress.

• Rigidity - Muscle stiffness, or resistance to movement, affects most people with PD.
The muscles remain constantly tense and contracted so that the person aches or feels stiff.

• Bradykinesia - This slowing down of spontaneous and automatic movement is particularly frustrating because it may make simple tasks difficult. The person cannot rapidly perform routine movements. Activities once performed quickly and easily—such as washing or dressing—may take much longer.

• Postural instability - Impaired balance and changes in posture can increase the risk of falls.

Who gets Parkinson’s disease, and what causes it?

The average age of onset is about 70 years, and the incidence rises significantly with advancing age. However, a small percentage of people with PD have “early-onset” disease that begins before the age of 50. People with one or more close relatives who have PD have an increased risk of developing the disease themselves. An estimated 15 to 25 per cent of people with PD have a known relative with the disease.

The cause of PD is unknown, although some cases are hereditary and can be traced to specific genetic mutations. Most cases are sporadic—that is, the disease does not typically run in families. It is thought that PD likely results from a combination of genetics and exposure to one or more unknown environmental factors that trigger the disease.

Although many brain areas are affected, the most common symptoms result from the loss of neurons in an area near the base of the brain called the substantia nigra. Normally, the neurons in this area produce dopamine, the chemical messenger responsible for transmitting signals between the substantia nigra and the next “relay station” of the brain, the corpus striatum, to produce smooth, purposeful movement. Loss of dopamine results in abnormal nerve firing patterns within the brain that cause impaired movement.
People with Parkinson’s have lost 60 to 80 per cent or more of the dopamine-producing cells in the substantia nigra by the time symptoms appear. People with PD also lose the nerve endings that produce the neurotransmitter norepinephrine, the main chemical messenger to the part of the nervous system that controls many automatic functions of the body, such as pulse and blood pressure. The loss of norepinephrine might explain several of the non-motor features seen in PD, including fatigue and abnormalities of blood pressure regulation.

How is Parkinson’s disease diagnosed?

Parkinson’s disease is a slowly progressive disorder, and it is not possible to predict what course the disease will take for a person. The average life expectancy of a person with PD is generally the same as for people who do not have the disease. Fortunately, there are many treatment options available for people with PD. However, in the late stages, PD may no longer respond to medications and can become associated with serious complications such as choking, pneumonia, and falls.

There are currently no specific tests that diagnose PD. The diagnosis is based on:

• Medical history and a neurological examination

• Blood and laboratory tests, to rule out other disorders that may be causing the symptoms

• Brain scans, to rule out other disorders. However, computed tomography (CT) and magnetic resonance imaging (MRI) brain scans of people with PD usually appear normal.

In rare cases, where people have an inherited form of PD, researchers can test for known gene mutations as a way of determining an individual’s risk of developing the disease. However, this genetic testing can have far-reaching implications, and people should carefully consider whether they want to know the results of such tests.

Treatment and surgery

Medications for Parkinson’s include levodopa and carbidopa. The cornerstone of therapy for PD is the drug levodopa (also called L-dopa).
Nerve cells can use levodopa to make dopamine and replenish the brain’s reduced supply. People cannot simply take dopamine pills because dopamine does not easily pass through the blood-brain barrier. (The blood-brain barrier is a protective lining of cells inside blood vessels that regulates the transport of oxygen, glucose, and other substances into the brain.) Usually, people are given levodopa combined with another substance called carbidopa. When added to levodopa, carbidopa prevents the conversion of levodopa into dopamine except in the brain; this stops or diminishes the side effects due to dopamine in the bloodstream. Levodopa combined with carbidopa is often very successful at reducing or eliminating the tremors and other motor symptoms of PD during the early stages of the disease.

The earliest types of surgery for PD involved selectively destroying specific parts of the brain that contribute to PD symptoms. Surgical techniques have been refined and can be very effective for the motor symptoms of PD. The most common lesion surgery is called pallidotomy. In this procedure, a surgeon selectively destroys a portion of the brain called the globus pallidus. Pallidotomy can improve symptoms of tremor, rigidity, and bradykinesia, possibly by interrupting the connections between the globus pallidus and the striatum or thalamus. Some studies have also found that pallidotomy can improve gait and balance and reduce the amount of levodopa people require, thus reducing drug-induced dyskinesias.

Another procedure, called thalamotomy, involves surgically destroying a part of the thalamus; this approach is useful primarily to reduce tremors.

Deep brain stimulation, or DBS, uses an electrode surgically implanted into part of the brain, typically the subthalamic nucleus or the globus pallidus. DBS does not stop PD from progressing, and some problems may gradually return.
While the motor function benefits of DBS can be substantial, it usually does not help with speech problems, “freezing,” posture, balance, anxiety, depression, or dementia. DBS is generally appropriate for people with levodopa-responsive PD who have developed dyskinesias or other disabling “off” symptoms despite drug therapy.

What research is being done?

To ‘slow or stop’ disease progression

The mission of the National Institute of Neurological Disorders and Stroke (NINDS) is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease. NINDS is a component of the National Institutes of Health (NIH), the leading supporter of biomedical research in the world. NINDS conducts and supports three types of research: basic research into fundamental scientific discoveries in the lab, clinical research developing and studying therapeutic approaches to Parkinson’s disease, and translational research focused on tools and resources that speed the development of therapeutics into practice. The goals of NINDS research on Parkinson’s disease are to better understand and diagnose PD, develop new treatments, and, ultimately, prevent PD.
Over the years, there has been a significant increase in the number of viral respiratory infections in humans. As a result, costs to the community have increased considerably owing to the growing number of physician visits and hospitalizations. Among approximately 200 viral respiratory pathogens, the most important are influenza and respiratory syncytial viruses (RSVs).

Antiviral drugs play an important role in the prevention and treatment of these disorders. Historically, antiviral therapeutic agents have been used as a complementary strategy alongside immunization in the prevention of influenza in the elderly population. However, the widespread use of antivirals is restricted by their cost, especially when used for prophylaxis.

There are a few classes of approved agents for the treatment of respiratory viral infections. These include nucleoside analogs, ion channel blockers, neuraminidase inhibitors, and fusion protein inhibitors. Some of the most widely used antiviral drugs are amantadine, rimantadine, zanamivir, oseltamivir, and ribavirin. Furthermore, the development of antiviral drugs requires a detailed knowledge of the replication of individual viruses and their intimate association with normal host cell activity.

The antiviral therapeutic area for respiratory infections is seeing success with drugs that target "non-self" genomes and with patient populations that can tolerate side effects and adapt to inconvenient dosing. The market is mainly driven by the increasing incidence of respiratory infections, technological advancements such as the use of nanotechnology in virology and its potential for antiviral therapeutics, and growing demand for quality healthcare. However, a number of pharmaceutical companies are shifting focus to other therapeutic areas, primarily due to high R&D costs and declining R&D productivity, in addition to uncertainties in regulatory guidelines.
Apart from these market drivers, there are several geographic drivers for the growth of the respiratory antivirals market. The respiratory antivirals market in Asia-Pacific is expected to become the growth center for the global market, and the markets in Japan and India are also likely to show significant growth during the forecast period of 2014 to 2019. The higher CAGRs of these markets can mainly be attributed to the rising patient population, increasing middle-class incomes, a surge in the health insurance market, and increased awareness about quality healthcare.

The top five players in the global respiratory antivirals market account for 40%–50% of the market. However, new entrants are intensifying the competition and thereby threatening the market shares of the existing players. In order to maintain their shares, leading players are continuously updating their product pipelines and focusing on introducing new products into the market. This increasing competitiveness is expected to drive innovation in the market, thereby helping the industry to solve existing challenges and meet the needs of the market.

Scope of the Report

Respiratory tract infections (RTIs) are infections affecting the sinuses, throat, airways, or lungs. These infections are usually caused by viruses but can also be caused by bacteria. Currently, only a handful of antiviral drugs are available for the treatment of respiratory infections in humans. However, owing to better knowledge and understanding of the molecular replication machinery of viruses, the availability of computational methods for modeling protein structure, and the use of RNA interference (RNAi) technology for sequence-specific inhibition of viral nucleic acids, a more rational approach has been adopted in the search for new antiviral drugs.
Global Respiratory Antivirals Market

This research report categorizes the respiratory antivirals market into the following segments:
Google’s DeepMind AI system has been used by the company to achieve significant reductions in power consumption in some of its data centers. Google’s advanced artificial intelligence subsidiary revealed that the savings translated into a 15 percent improvement in power usage effectiveness (PUE).

Artificial intelligence for sustainability

DeepMind co-founder Demis Hassabis told Bloomberg that DeepMind was put in control of parts of Google’s data centers to reduce power consumption by manipulating computer servers and equipment such as cooling systems using its deep learning algorithms. “It controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things,” Hassabis said. “They were pretty astounded.”

Google used 4,402,836 MWh of electricity in 2014, a significant proportion of which was spent on running the company’s sprawling network of data centers that power everything from its ubiquitous search engine to its Maps application, YouTube and, of course, the Google Cloud Platform. The DeepMind system managed to reduce power usage in select data centers by several percentage points, Hassabis said, “which is a huge saving in terms of cost but, also, great for the environment.”

Further improvements to the system are underway, not only because it automatically improves as time goes on, but because the company may put additional sensors into its data centers to fill in the blanks in areas where the AI currently lacks information, Hassabis said.

Artificial intelligence means more power

The news is the latest in a string of high-profile announcements regarding DeepMind, a London-based startup Google bought in 2014 for £352 million (~$600 million).
DeepMind says it “combines the best techniques from deep learning, reinforcement learning and systems neuroscience to build powerful general-purpose learning algorithms.”

The subsidiary is perhaps best known for having developed the AlphaGo algorithm that successfully beat Lee Sedol at Go, becoming the first program to beat a professional human Go player without handicaps on a full-sized 19×19 board. This announcement, which sent shockwaves through the AI community, which believed the achievement was still years off, is of particular interest due to the extremely high number of potential moves in a game of Go. With some 2.08168199382×10^170 legal board positions, a sheer ‘brute force’ approach of looking at every potential move and choosing the best one is virtually impossible. Instead, AlphaGo had to rely on what DeepMind describes as ‘intuition’, after analyzing every publicly available Go match and playing itself again and again until it understood what a good move was.

Hassabis also noted a crucial difference between his algorithm and IBM’s Garry Kasparov-beating Deep Blue chess program. He told PC Games:

“The difference between AlphaGo and Deep Blue is that Deep Blue was programmed directly with knowledge about chess, whereas AlphaGo was programmed with the ability to learn that knowledge, which is much more powerful in our opinion because in theory it could learn some new domain if you gave it different data - that’s what we’re currently exploring now.

“The next stage is: can we now use that in other real-world domains and do these really cool things that will benefit the world? That’s the two-pronged attack, really.”

DeepMind’s algorithm has also learned to play various classic video games, including Atari Breakout, with the program again not being told how to play, or being specifically designed for that use.
Elsewhere at Google, DeepMind is thought to have been used to improve Search, image and voice recognition, and perhaps other areas of the company’s vast business. DeepMind Health, meanwhile, drew attention and criticism for its access to UK National Health Service patient data, which it hopes to scan for patterns and trends that could help predict kidney issues and degenerative eye conditions.

DeepMind, which recently moved into Google’s new £1bn King’s Cross headquarters, is just one part of Google’s heavy investment in artificial intelligence, with the technology giant opening a new machine intelligence research group in Zurich last month.

Improvements in AI and subsequent savings on data center costs could prove crucial for Google’s efforts to compete with Amazon Web Services and Microsoft Azure. Since putting Diane Greene in charge in November 2015, Google has made aggressive statements about its place in the industry and its hopes for high growth. Whether this pans out remains to be seen.
In this article we will cover the following topics:

- Citizens lack awareness of cyber security despite a growing number of attacks
- The types of attacks seen during the COVID pandemic
- A checklist of controls against such attacks

In the COVID-19 era, hackers have become more vigilant in attacking data centers and servers. The outbreak has given hackers a wider platform with a largely innocent audience, most of whom are not aware of how their information might get stolen. Therefore, citizens should not only be concerned about their health but also increase their awareness of data security. A report from Unisys states that less than one-third of Americans are taking data security seriously, even though data breaches have multiplied four times in the COVID era. The FTC’s COVID-19 report also shows a total fraud loss of $5.09M between January 2020 and May 2020. This loss reflects only the cases that have been reported; many users may not even be aware that their security has been breached.

Today’s cyber-attacks during this pandemic have proved that COVID-19 is not the only enemy at this hour that can disrupt the life of every individual. Senior technology executives interviewed by CNBC have confirmed the increased cybersecurity risks due to increased work from home, although experts have countered that the real risk is even higher than those executives suggest.

Protecting remote workers from cyber attacks

Miriam Wugmeister, partner and co-chair of law firm Morrison & Foerster’s global privacy and data security group, stated: “We are hearing from many clients and law enforcement that the level of cyber-attacks, phishing attempts, and scams occurring in light of COVID-19 has grown dramatically”.

Negligence Of Citizens Towards Data Security

Cyber criminals have been using this situation for potential gain.
It is not just COVID-19; earlier, during the Ebola crisis too, when the world was busy saving people from a deadly disease, cyber criminals were finding ways to prove their power over the weak nerves of cyber users. This situation is very alarming, as cyber criminals are again taking advantage of the fear that has taken control over people’s minds. Below are a few examples that more than fifty percent of cyber users are not aware of:

- Cyber criminals are running spam campaigns with the keywords COVID, COVID-19, or corona. These are not just promotional emails; they have the potential to take charge of your device with a single click.
- Masked spyware, malware, and Trojans have been found in attractive coronavirus maps and websites that provide users with information regarding the pandemic. These can corrupt the user’s system or spy on every activity performed on it.
- Ransomware attacks have increased manifold compared to normal days during the COVID-19 pandemic. Exploiting emergencies in the healthcare industry, cyber criminals have been attacking vulnerabilities in systems through infected emails or compromised employee credentials.

The United States Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert to raise public awareness of these cyber-attacks. The alert describes how cyber criminals are targeting the general public, for example:

- An SMS message purporting to come from a government agency that asks for a payment as a promise to battle COVID-19.
- An email message from an unknown source with links or attachments regarding COVID-19 information. On clicking such attachments, the user accidentally downloads malware (TrickBot has been used in the majority of attacks during this pandemic).
Cyber Security Preparedness in COVID-19

A short delay in the response to an emergency can cause serious damage to the victim; this is true not just for COVID-19 patients but also for the victims of cyber-attacks during the pandemic. Although it is impossible to be 100% prepared for every potential risk, awareness and investment are the key tools for individual data security in this situation. Preparedness for fighting COVID-19-related incidents is urgently required: every individual and organization has to be prepared to fight the impacts of this pandemic. The alert issued by the DHS Cybersecurity and Infrastructure Security Agency (CISA) provides a list of indicators of compromise (IOCs) for the detection of cyber attacks.

The checklist below will help in keeping your guard up against cyber-attacks:

- Backup files: It is always wise to back up your data to reliable media: hard drives for small quantities of data, cloud backup for large data sets.
- Safe login: Verify a website’s authenticity before entering your credentials. A large number of fake websites have been floated on the web to trap customers and obtain their credentials for misuse.
- Spam protection: Use secure email gateways to prevent the threat of spam.
- Antivirus: Protect your system by installing legitimate antivirus software with malware detection capabilities, and run regular scans.
- Application downloads: Download third-party applications from recognized sources rather than using pirated versions. Software from unreliable sources is a huge vulnerability that exposes your system to a hacker.
- Public Wi-Fi: Using public Wi-Fi without a VPN is highly risky; a VPN encrypts the user’s data in transit.
This can be explained in two ways:

1. The law of large numbers says that for a larger sample the deviation from the expectation is smaller.
2. Look at a normal approximation to the Erlang B distribution: the variance is inversely related to N, again showing that for a large population the deviation gets smaller.

I hope this helps.

Best regards.
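As a concrete addendum on the second point: the Erlang B blocking probability B(N, A) can be computed with the standard recurrence B(0, A) = 1, B(n, A) = A*B(n-1, A) / (n + A*B(n-1, A)). A quick Python sketch (the function name is mine) shows blocking shrinking as the population grows while the per-server load stays fixed:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability B(N, A) via the standard Erlang B recurrence.

    B(0, A) = 1;  B(n, A) = A*B(n-1, A) / (n + A*B(n-1, A))
    """
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# With utilisation fixed at 80% per server, blocking shrinks as N grows,
# illustrating the smaller relative deviation for larger populations.
small = erlang_b(10, 8.0)    # 10 servers offered 8 Erlang
large = erlang_b(100, 80.0)  # 100 servers offered 80 Erlang
```

Running this gives a much smaller blocking probability for the 100-server system than for the 10-server one, even though both are offered the same load per server.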
Organizations acquire and store massive amounts of data. Numerous critical business procedures within the organization depend on the accuracy and comprehensiveness of this stored data. There are several ways in which the data’s accuracy can be harmed. If this data is altered or deleted by a third party without authorization, the consequences for the business could be severe, especially if the compromised information was of a sensitive nature. Thus, it is crucial for a company to protect the accuracy of the data it stores by implementing the necessary security measures. This article covers in-depth information about data integrity along with details on its significance, various types, and several techniques that can be used for the preservation and verification of data integrity.

Artificial intelligence is increasingly being used in almost every sector of business and industry globally. The adoption of artificial intelligence in the cybersecurity sector has also been influenced by this rise. The cybersecurity landscape has seen a tremendous shift as a result of AI. In today’s business contexts, there is a significant and quickly expanding surface for cyberattacks. This indicates that more than just human interaction is required for cybersecurity posture analysis and improvement within a company. Since these technologies can quickly analyze millions of data sets and find a wide range of cyber threats, artificial intelligence and machine learning are now becoming crucial to information security. Nowadays, AI is being integrated into a wide range of products and applications that are employed in effective threat identification and cyberattack prevention. This article discusses the foundational ideas of artificial intelligence, its function and applications in the field of cybersecurity, and how AI can be applied to enhance an organization’s overall security posture.
Sysmon is a component of the Microsoft Sysinternals Suite that runs as a kernel driver and may monitor and report on system events. Businesses frequently utilize it as part of their tracking and logging systems. Windows logs include a plethora of structured data from many log sources. Event logs capture events that occur during system execution to analyze system activity and troubleshoot faults. This blog article will teach you about common logs and how to examine crucial events in your system.

SIM cloning simply involves making a copy of the original SIM. It resembles switching SIM cards. Software is utilized to duplicate the genuine SIM card in this technically complex process. The International Mobile Subscriber Identity (IMSI), which is used to identify and authenticate subscribers on mobile telephony, and the encryption key of the victim are accessed. By cloning the SIM, the fraudster may take control and use the mobile number to track, monitor, listen to calls, place calls, and send messages.

Because ransomware attacks target the endpoint, encrypt its data, and demand a payment to decrypt it, endpoint protection is more crucial than ever. The COVID-19-driven shift to remote work has also raised endpoints’ susceptibility to cyber threats and turned them into an organization’s first line of defense. An extensive endpoint security plan that can handle the contemporary cybersecurity concerns that endpoints encounter must be created and implemented by organizations. A vital part of this approach is the implementation of an endpoint protection platform (EPP).

SIM swapping, often referred to as SIM splitting, SIM jacking, or SIM hijacking, is the process of transferring control over your mobile device from your current SIM to another SIM under the control of a cybercriminal. Through this deception, a cybercriminal acquires access to your private information relating to your finances.
A hotfix is an immediate repair for a problem or defect that often skips the standard software development cycle. Hotfixes are often applied to high- or severe-priority defects that need to be fixed right away, such as a fault that compromises the software’s functioning or security. Because there is a lot to test and not enough time to accomplish it, software development teams always produce flaws or bugs. The company ranks reported problems and flaws as critical, severe, high, medium, or low as they come in (or other similar terms). Depending on the release timetable, critical flaws typically call for a hotfix.

An operating system that offers enough support for multilayer security and proof of accuracy to satisfy a certain set of government standards is referred to as a trusted operating system (TOS).

Before discussing how to secure Hyper-V, we first cover the terminology of Hyper-V and related technologies. This article will give a comprehensive explanation of these terms, not only in the context of this book but also in the context of how they are typically used in legal documents and by professionals.

Many people consider security to be a clearly evident necessity. Others’ needs may not be as obvious. Many decision-makers don’t think that their company’s product needs extensive protection. Many administrators think the default security measures are adequate. While certain organizations’ needs may not necessitate a rigorous program of safeguards, no one can avoid doing their research.

As we continue to use our phones, laptops, and tablets as the hub for all of our digital lives, it becomes increasingly important to protect these devices. Setting passcodes or passwords alone can no longer suffice in protecting the sensitive data housed in our devices. This has given rise to a need for additional layers of stricter and safer controls to fight against unwanted breaches of our critical data; this is where two-factor authentication comes in.
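For readers curious what a common second factor looks like under the hood, here is a minimal Python sketch of the TOTP scheme (RFC 6238, built on HOTP from RFC 4226) that most authenticator apps implement. This is illustrative only, not production code:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(key, counter, digits)
```

With the RFC 6238 SHA-1 test secret b"12345678901234567890", `totp(secret, at=59, digits=8)` yields "94287082", matching the published test vector; both the server and the user’s device derive the same short-lived code from the shared secret and the current time.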
Businesses are under pressure from customers to be more transparent about the data they gather, and authorities are responding. Stakeholders may be assured that your firm takes data privacy seriously by looking for ISO 27701 certification. Consumers produce enormous amounts of data every day in today’s globally linked environment. But worries about how businesses collect, utilize, and safeguard this data are growing. Governments all around the globe are enacting comprehensive legislation to guarantee the privacy and security of personal data in response to popular demand. These include, but are not limited to, the General Data Protection Regulation (GDPR), Brazil’s General Data Protection Law (LGPD), and the California Consumer Privacy Act (CCPA).

UAC stands for User Account Control. User Account Control is a mandatory access control enforcement facility of the Windows operating system that helps to prevent malicious software from damaging a PC. UAC’s job is to prevent a program from making changes to the system without authorization from the administrator. If a program is trying to make a system-related change, it will require administrator rights. If the administrator does not approve the modifications, then the changes will not be implemented, and Windows remains unmodified.

As 2019 came to a close, organizations all around the world underwent a significant transformation in how they carried out their business. Nearly every corporation was impacted by the COVID-19 pandemic, and each one struggled to maintain operations while also introducing a new work culture known as remote work or work from home. Remote work became the new normal for many organizations, with many analysts predicting that remote work is the future and it is here to stay.

A non-disclosure agreement (NDA) is a contract that is enforceable under the law and creates a confidential relationship.
The signatory(s) agree that any sensitive information they may collect will not be disclosed to any third parties. A confidentiality agreement is another name for an NDA.

Wireshark is a packet analyzer and sniffer. It captures network traffic on the local network and stores it for further analysis. This tool is mostly used by system administrators and IT professionals to troubleshoot network errors in real time. However, Wireshark also provides a powerful command-line tool called Tshark for people who enjoy working on the command line interface. Tshark also lets you capture packet data from a live network and read packets from a previously captured file. The native capture file format of TShark is libpcap, which is also used by tcpdump and other utilities.

The TCP three-way handshake is a three-step sequence that must be completed between a client and a server in order to create a safe and trustworthy TCP connection. In this blog post we are going to analyze what a three-way handshake looks like, and discuss the flags and packet bytes. Let’s start by taking a look at the SYN packet…

This blog page will discuss the characteristics of biometric factors such as universality, uniqueness, permanence, collectability, performance, acceptability, and circumvention.

The Parkerian hexad is a set of information security elements comprising confidentiality, integrity, availability, possession/control, authenticity, and utility. As you might already guess, there are three additional attributes proposed by Donn B. Parker. We can say that the Parkerian hexad is a more complicated variant of the traditional CIA triad. In this blog post, you will be provided with the Parkerian hexad and its components.

Wireless access to the internet has increased and evolved over the years with the advances in technology. Free and easily accessible WiFi networks are available in public places such as coffee shops, hotel lobbies, airports, shopping malls, and much more.
People are naturally inclined to connect to these networks to check their emails, browse the internet or perform any other important task. However, public WiFi networks, much like any other publicly available service, might not be totally secure and carry their own potential risks.

An agreement between two or more parties that is detailed in a formal document is known as a memorandum of understanding (MOU). Although it does not have legal force, it expresses the parties’ will to proceed with a contract. As it outlines the parameters and goals of the negotiations, the MOU may be viewed as the starting point for negotiations. These memos are most frequently used in international treaty negotiations, but they may also be applied in risky corporate negotiations like merger discussions.

There are seven steps in the attack lifecycle that are present in most breaches. Although not every attack includes all seven steps, this lifecycle can be modified to fit any incident. Additionally, an attack’s phases don’t necessarily occur in the sequence indicated by the attack lifecycle. This chapter introduces the idea of the attack lifecycle since it is crucial to consider occurrences in relation to the various stages of the lifecycle. You’ll be able to comprehend the context of every newly uncovered action in relation to the overall compromise more clearly by thinking in terms of the different phases. Additionally, you should consider attacker activity in each phase of your remediation planning.

The organization’s data governance plan establishes the rules for data usage and security, which are then enforced by a data steward. Data stewards provide a blend of data science, engineering, and communication abilities to work with teams throughout the organization and promote fresh approaches to improve data utilization. Being a point of contact between the business-focused and IT sides of the organization is one of the data steward’s most important contributions.
In order to effectively cooperate with many stakeholders, they must not only be knowledgeable about the technical elements of data management but also display good interpersonal skills. Businesses are becoming more and more aware of the need to use data to inform both short- and long-term strategic choices. Finding useful uses for this data is even more urgent due to the growth in the volume and variety of data that a company processes. Businesses have turned to data governance and data stewardship as essential components of their overall data management strategy to guarantee they are making the greatest use of all available resources. These fundamental data ownership tasks are being transferred to data professionals, such as data stewards who manage data stewardship inside an organization.

Every enterprise deals with a variety of security risks to its valued assets. In order to compromise the security of these assets, malicious adversaries can make use of security flaws in the infrastructure of the company. For the company to effectively detect and stop these cyberattacks, its security response must be robust and resilient. Cyber threat intelligence is one of the instruments that the company can use to improve the resilience of its incident response efforts. The objective of cyber threat intelligence is to disseminate knowledge about the motives, capabilities, tactics, and methodologies of various cyber threats in order to direct the selection of remediation strategies. Cyber threat intelligence is a tool used by threat analysts to collect information about these threats, analyze this information, and then deliver the findings to key stakeholders. The diamond model of intrusion analysis enables threat analysts to present this information in a manner that is organized, effective and simple to comprehend. This article presents the basics of the diamond model, its main components, optional features, and how this model can be used by security professionals.
The level of access that users and system processes have to files in Linux is governed by file permissions, attributes, and ownership. This is done to make sure that only legitimate users and programs are able to access particular files and directories.

The top management of a company bears primary responsibility for the safety and security of its valued assets. Therefore, the backing of top-level management and their understanding of the many cyber risks faced by the company are the cornerstones of an effective security management program. The senior management of the organization must be aware of the many risks to its assets, as well as those risks’ consequences and potential financial losses.

Understanding how to assess HTTP may help with numerous security activities, such as detecting attack vectors, recognizing malicious HTTP requests, and detecting user-agent abnormalities. In this blog post, we are going to learn about HTTP requests by capturing packets in Wireshark. Let’s start with a quick refresher on the most important characteristics of HTTP.

A collection of information known as personally identifiable information (PII) can be used to identify a particular person. It is classified as sensitive information and is the data used in identity theft. The user’s name, address, and birthday can be considered PII, as can other private information like their complete name, address, social security number, and financial information. PII is a target for attackers in a data breach because of its high value when sold on darknet markets.

Secure/Multipurpose Internet Mail Extensions (S/MIME) is an industry standard for email encryption and signing that is commonly used by organizations to improve email security. Most corporate email clients support S/MIME. S/MIME encrypts and digitally signs emails to confirm that they are verified and that their contents have not been changed in any manner.
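The tamper-detection idea behind such signatures rests on cryptographic digests: any change to the signed content changes its hash. A minimal sketch of just the hashing step (S/MIME additionally signs the digest with the sender’s private key, which this example omits):

```python
import hashlib

def digest(message: bytes) -> str:
    """Hex SHA-256 fingerprint of a message body."""
    return hashlib.sha256(message).hexdigest()

original = b"Please wire $100 to account 42."
fingerprint = digest(original)            # transmitted alongside the message

tampered = b"Please wire $9100 to account 42."
assert digest(original) == fingerprint    # unmodified content verifies
assert digest(tampered) != fingerprint    # any alteration is detected
```

Because even a one-byte change produces a completely different digest, the recipient can detect modification without comparing the full message byte by byte.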
In this article we will discuss the security functions a SOC is responsible for. A group of IT experts and information security specialists that analyse, monitor, and defend a company against cyber-attacks work out of a centralized location known as a security operations centre (SOC). SOC teams handle incident response while continuously keeping an eye on networks, internet traffic, servers, desktops, databases, endpoint devices, applications, and other IT assets for security incidents. SOC personnel typically possess all the knowledge and abilities necessary to recognize and address cybersecurity events. To share information regarding occurrences with the proper stakeholders, they work in tandem with other departments or teams. The majority of SOCs run continuously, with staff members working in shifts to oversee log activity and minimize threats. Some businesses use outside vendors to handle their SOC. The use of SOCs is a crucial tactic for reducing the expenses associated with data breaches. They support organizations in quickening their response to intrusions and continuously enhance threat detection and prevention techniques.

Web applications such as e-commerce websites or websites that use content delivery networks receive a large number of user requests from around the world. In order to deal with growing user requests and balance the load on the main server, web applications use different caching techniques. Caches are typically employed by proxy servers or web browsers to store files such as images, videos, or audio files and other frequently accessed files in local storage. Web caching is an effective technique that is used to handle growing user requests, improve network capacity, and provide a seamless user experience. This article covers the basics of web caching, how attackers perform web cache poisoning attacks, the impact of web cache poisoning, and the recommended mitigation techniques.
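To see why web cache poisoning works, consider a toy cache keyed only on the URL while the response depends on a header the key ignores. All names here (the header, hostnames, functions) are hypothetical, chosen to mirror the classic unkeyed-input scenario:

```python
cache = {}  # keyed on URL only; request headers are not part of the key

def render(url: str, headers: dict) -> str:
    # Hypothetical app: reflects an unkeyed header into the response body.
    host = headers.get("X-Forwarded-Host", "shop.example")
    return f'<script src="https://{host}/app.js"></script>'

def handle(url: str, headers: dict) -> str:
    if url not in cache:              # first request populates the cache
        cache[url] = render(url, headers)
    return cache[url]                 # later requests get the cached copy

# Attacker primes the cache with a malicious unkeyed header...
poisoned = handle("/home", {"X-Forwarded-Host": "evil.example"})
# ...and an innocent user, sending no such header, receives the poisoned entry.
victim = handle("/home", {})
```

The fix is either to include such headers in the cache key or to stop reflecting unkeyed inputs into cacheable responses, which is what the mitigation guidance above boils down to.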
In this blog post, we are going to explore Windows Event Forwarding (WEF) and Windows Event Collector (WEC), which you can utilize in your agentless log collection efforts. First, we will discover Windows Event Forwarding and Windows Event Collector, then illustrate a basic XML analysis. Lastly, we will also take a look at how to configure subscriptions.

In today’s blog post we are going to take a look at how internet control message protocol (ICMP) requests and replies can be identified using Wireshark.

By gathering threat data from many sources, security orchestration, automation, and response (SOAR) assists enterprises in automating security activities, particularly incident response. It may also respond to minor events without the need for human intervention. SOAR solutions assist security companies in defining, prioritizing, standardizing, and automating response activities, as well as improving operational efficiency. Here are listed a few ways SOAR may aid in the optimization of security operations…

The System Logging Protocol (Syslog) is a standard message format that network devices can use to interact with a logging server. It was created primarily to make network device monitoring simple. A Syslog agent may be used by devices to send out notification messages under a variety of scenarios. These log messages contain a timestamp, a severity rating, a device ID (including IP address), and event-specific information. Despite its flaws, the Syslog protocol is extensively used because it is easy to develop and very open-ended, allowing for a variety of proprietary implementations and hence the ability to monitor practically any connected device. Syslog is compatible with all Unix, Linux, and other *nix operating systems, as well as macOS. Although Windows-based servers do not natively support Syslog, various third-party applications are available to allow Windows devices to connect with a Syslog server.
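The fields just listed can be seen in a classic BSD-style (RFC 3164) syslog line, where the leading PRI value encodes both facility and severity as facility*8 + severity. A small parsing sketch in Python; the regex and field names are my own simplification, not a full RFC parser:

```python
import re

SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def parse_syslog(line: str) -> dict:
    """Parse a classic BSD-style (RFC 3164) syslog line into its fields."""
    m = re.match(r"<(\d+)>(\w{3} [ \d]\d \d\d:\d\d:\d\d) (\S+) (.*)", line)
    if not m:
        raise ValueError("not a recognisable syslog line")
    pri = int(m.group(1))
    return {
        "facility": pri // 8,          # e.g. 4 = security/auth messages
        "severity": SEVERITIES[pri % 8],
        "timestamp": m.group(2),
        "host": m.group(3),
        "message": m.group(4),
    }

# The example message from RFC 3164: PRI 34 = facility 4, severity 2 (crit).
msg = parse_syslog("<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8")
```

The same PRI arithmetic is what log collectors use to route auth failures, kernel messages, and so on to different destinations.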
Blockchain is a distributed, decentralised digital ledger that records transactions as blocks. Because of its immutability and availability to only authorised individuals, this ledger aids in the storage of information in a transparent manner.

What is the intelligence cycle? In the most basic terms, the intelligence cycle is an important process that most security organizations can utilize to convert raw data into polished intelligence for judgment. The procedure is a five- or six-step way of imparting clarity to a changing and unclear situation. In this blog post, we are going to cover these steps of the intelligence cycle so that we can enhance decision-making capabilities more efficiently.

For cyber security professionals, it is important to gain an understanding of the various web technologies. Many cyber-attacks are carried out on websites and web servers for various reasons. It may be to retrieve credentials to get access to systems, or to gain personally identifiable information (PII). Or it may be to simply use a website to have users click on links that execute malicious code or that lead to another, malicious, website. Cyber security professionals need to know how to protect systems and businesses against these kinds of attacks.

SNMP is the acronym for Simple Network Management Protocol. It is a network protocol used for network management. SNMP operates in three different levels or versions: SNMPv1, SNMPv2, and SNMPv3. SNMPv3 is the most recent and most secure version of SNMP. If a business has 1000 devices, checking each one individually every day to see if they are operating correctly is a time-consuming operation. Simple Network Management Protocol (SNMP) is used to help with this.

Protocol analysers are essential tools for embedded system designers. They let engineers obtain insight into the data that goes through the communication channel or bus, such as USB, I2C, SPI, CAN, and so on.
Some systems record this information and then show it after capture, whilst others display the data as it is transmitted in real time. As a result, it’s simple to see why protocol analysers are so vital in embedded design, development, and debugging processes.

Baselining is the practice of analysing the network on a regular basis to ensure that it is functioning properly. It generates a number of reports that provide extensive information on the network.

Authentication attacks try to guess the correct username and password. Brute force attacks, the most basic type of authentication attack, aim to acquire access to an account by attempting random passwords. Threat actors utilize algorithms to automate this process, which can result in millions of password guesses every day.

Authentication is the first line of protection. It is the process of determining whether or not a user is who they claim to be. Not to be confused with the phase that comes after it, authorization, authentication is a technique of validating digital identity, so users have the level of rights they need to access or accomplish a job. There are several authentication systems available, ranging from passwords to fingerprints, to check a user’s identity before granting access. This offers another layer of security and avoids security shortcomings such as data leaks. However, it is frequently the combination of many methods of authentication that offers safe system reinforcement against potential attacks.

The process of authenticating a user’s identity is known as authentication. When you log in to a website, the website uses your username and password to verify your identity. You will be unable to log in if the website is unable to authenticate your identity. There are several methods for confirming a user’s identity. Using a username and password is the most popular technique. Passwords can be guessed or stolen, making this the least secure option.
Other technologies, like fingerprint and iris scanners, are becoming increasingly common. A premises system is a sort of security system that is installed in a building or other type of location. The system is intended to keep unauthorised people out of the premises and to keep the inhabitants safe. The technology may also be used to monitor resident activity and detect intruders. Office buildings, factories, warehouses, and retail outlets are all places where premises systems are employed. They’re also common in residential settings like flats and houses. The amount of security offered by a premises system is determined by the type of system deployed. Door and window sensors, as well as motion detectors, are common components of basic systems. Cameras, alarm systems, and access control are examples of more advanced systems. Industrial control systems (ICS) are assets and accompanying instruments that aid in the supervision of industrial operations. Supervisory control and data acquisition (SCADA) systems, which assist organisations in controlling dispersed assets; distributed control systems (DCS), which control production systems in a local area; and programmable logic controllers (PLCs), which enable discrete control of applications using regulatory control, are three types of ICS. The threat modeling activity has a consistent plan that may be broken down into many basic elements. In this blog post, we are going to give a basic outline of a threat modeling process. In a previous blog post, we made an introduction to the powerful traffic analyzer Wireshark by identifying what an ARP request looks like. In this blog post, we will continue network traffic analysis by taking a look at what ARP reply traffic looks like in Wireshark. Upon completion of this blog page, you will enhance your network analysis skills as a security operations analyst.
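To make the ARP analysis concrete, here is a rough sketch of what a dissector like Wireshark does with an ARP packet: it unpacks the fixed 28-byte payload field by field. The packet bytes below are hand-crafted for illustration; opcode 1 is a request and 2 is a reply:

```python
import struct

def parse_arp(payload: bytes) -> dict:
    """Parse a 28-byte ARP payload (the part Wireshark shows after the
    Ethernet header). Opcode 1 = request, 2 = reply."""
    htype, ptype, hlen, plen, oper = struct.unpack("!HHBBH", payload[:8])
    sha, spa, tha, tpa = struct.unpack("!6s4s6s4s", payload[8:28])
    return {
        "opcode": "request" if oper == 1 else "reply",
        "sender_mac": sha.hex(":"),
        "sender_ip": ".".join(str(b) for b in spa),
        "target_mac": tha.hex(":"),
        "target_ip": ".".join(str(b) for b in tpa),
    }

# A hand-crafted ARP reply: "192.168.0.1 is at aa:bb:cc:dd:ee:ff"
pkt = struct.pack(
    "!HHBBH6s4s6s4s",
    1, 0x0800, 6, 4, 2,
    bytes.fromhex("aabbccddeeff"), bytes([192, 168, 0, 1]),
    bytes.fromhex("112233445566"), bytes([192, 168, 0, 2]),
)
print(parse_arp(pkt)["opcode"], parse_arp(pkt)["sender_ip"])  # → reply 192.168.0.1
```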
In this blog post, we will cover the necessity of developing an appsec awareness and education program and how it may help equip staff to integrate security into all development activities. It’s critical for security analysts to grasp the substantial influence IoT devices can have on expanding a company’s attack surface. As companies place greater trust in smart devices to provide insight into their resources, they must take extra care to safeguard their systems and ensure the authenticity, validity, and accessibility of the data that travels across them. Packet-level examination is the best technique to comprehend how data travels in network communications. In this blog post we are going to take a look at the benefits of packet-level analysis, and make a quick introduction to the Wireshark GUI, mapping its panes to their counterparts in the Open Systems Interconnection (OSI) model. In this blog post, we are going to learn what sinkholing is and how it can help secure our networking environments. An embedded system is a computer system that performs a specific purpose within a broader mechanical or electrical system, sometimes with real-time processing requirements. It is frequently incorporated as part of a larger device that includes hardware and mechanical components. Many items in regular usage today are controlled by embedded systems. Devices with embedded systems include the following: automobiles, telephones, digital watches, rich media players, video game consoles, computers in numerous appliances, point-of-sale terminals, digital cameras, GPS receivers, medical electronics, autopilots, and aeronautics. The demand for safe and dependable internet-of-things (IoT) devices is greater than ever as the world becomes more interconnected. Unfortunately, several IoT devices have security weaknesses that make them open to attack. The fact that many IoT devices are created with little to no security in mind is a significant problem.
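As a preview of the sinkholing idea mentioned above, the sketch below models a DNS sinkhole as a lookup step: queries for known-bad domains get a harmless sinkhole address instead of the real answer. The domain names and addresses are made up; a real sinkhole lives inside a DNS server or firewall, not an application function:

```python
# Minimal sinkhole lookup table; names and IPs are illustrative only.
SINKHOLE_IP = "10.0.0.99"
BLOCKED_DOMAINS = {"malware-c2.example", "phish.example"}

def resolve(domain: str, real_dns: dict) -> str:
    """Return the sinkhole address for known-bad domains, otherwise the
    real answer. Infected hosts then talk to the sinkhole, not the C2."""
    if domain.lower() in BLOCKED_DOMAINS:
        return SINKHOLE_IP
    return real_dns.get(domain, "NXDOMAIN")

real_dns = {"intranet.example": "192.168.1.10"}
print(resolve("malware-c2.example", real_dns))  # → 10.0.0.99
print(resolve("intranet.example", real_dns))    # → 192.168.1.10
```

A useful side effect is visibility: every host that queries a sinkholed domain is a host worth investigating.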
This exposes them to a variety of assaults, including denial-of-service attacks, malware, and viruses. The fact that many IoT devices are not adequately updated or maintained is another issue. They frequently run out-of-date software, which means that attackers can take advantage of them. And last, a lot of IoT devices are simply not made to be secure. The science and technology of measuring and interpreting biological data is known as biometrics. Biometrics in information technology refers to techniques used for authentication and identification that measure and analyse physical attributes of the human body, including DNA, fingerprints, eye retinas and irises, voice patterns, face patterns, and hand measurements. Throughout history, people have employed biometrics. People used to recognize one another in the past, for instance, based on their distinctive physical characteristics. Biometrics are utilized nowadays for security and identity purposes. For instance, several nations now provide biometric passports that include each user’s specific biometric information. As the world becomes increasingly digital, the need for strong security measures is more important than ever. One such security measure is WPA3, which was designed to replace the WPA2 protocol. WPA3 features improved security through the use of Perfect Forward Secrecy, also called Forward Secrecy: a security measure that ensures that even if one key is compromised, the rest of the keys remain secure. This is accomplished by using a different key for each session. WPA3 uses this security measure to make it more difficult for attackers to gain access to sensitive data. Any malicious attack that targets a wireless network or devices that utilise wireless technology is known as a wireless attack. Wireless attacks can be used to gain unwanted access to sensitive information or to impair a wireless network’s normal operation.
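The per-session key idea behind Perfect Forward Secrecy can be sketched with an ephemeral Diffie–Hellman exchange: both sides throw their keys away after each session, so compromising one session reveals nothing about earlier ones. This is a toy illustration only; the group parameters below are far too small for real use, and WPA3 actually achieves forward secrecy via the SAE handshake over standardized groups:

```python
import secrets

# Toy-sized published group parameters (real deployments use 2048-bit
# groups or elliptic curves).
P = 0xFFFFFFFB  # the largest prime below 2**32
G = 5

def ephemeral_keypair():
    """Fresh private/public pair generated per session, then discarded."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    return pow(their_pub, my_priv, P)

# Each session: both parties derive the same key without sending it.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)
```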
Wireless attacks come in a variety of forms, some of which are given below. Security patches and software upgrades are the first places cybercriminals look for weaknesses in previous versions that they may target. When an attacker notices that an issue has been resolved, anyone who has not applied the solution becomes a target. Keeping all of your applications up to date can minimize the vast majority of known online threats. In this blog post, we are going to learn what patching is, and what a patch management lifecycle is. As we already know, a weakness is a flaw in a program that lets a hostile individual conduct an unauthorized activity or obtain access to data. Wouldn’t it be nice if a software company remediated a vulnerability in a technology or service it provides before a malicious actor exploits it? Well, you can achieve this in many ways, such as hiring internal security professionals, paying for a penetration testing as a service platform, and so on. In today’s blog post, we are going to add another group of skilled individuals who are called bug bounty hunters and define who they are, what bug bounty hunting is, what a bounty-hunting program is, and what some differences between a vulnerability rewards program (VRP) and a vulnerability disclosure program (VDP) are. A system strengthening strategy is a dynamic endeavor and needs continual assessment of the network and local assets. It is vital that we examine what we can do at the endpoints rather than depending just on the identification of an exploit. In our previous blog post on network protocols we covered the definition of a protocol, why they are important, and began explaining some common protocols: TCP/IP and UDP. These are important network protocols as most other protocols rely on TCP/IP or UDP to perform their role. Our second blog post on the subject of network protocols covered some additional network protocols and their reliance on either TCP/IP or UDP (or both).
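The patch-management point above (unapplied fixes make you a target) can be illustrated with a tiny inventory check that flags applications lagging behind the latest release. The application names and version numbers are hypothetical:

```python
def parse_version(v: str):
    """Turn '1.4.2' into a comparable tuple (1, 4, 2)."""
    return tuple(int(part) for part in v.split("."))

def unpatched(installed: dict, latest: dict) -> list:
    """Return applications whose installed version lags the latest patch."""
    return [app for app, ver in installed.items()
            if parse_version(ver) < parse_version(latest.get(app, ver))]

# Hypothetical inventory and vendor release data.
installed = {"browser": "1.4.2", "office": "3.0.0", "pdf-reader": "2.1.0"}
latest = {"browser": "1.5.0", "office": "3.0.0", "pdf-reader": "2.1.3"}
print(unpatched(installed, latest))  # → ['browser', 'pdf-reader']
```

Real patch-management tools add the rest of the lifecycle around this comparison: testing the patch, scheduling deployment, and verifying it was applied.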
In this blog post, we continue to discuss network protocols that rely on either TCP/IP, UDP, or both. This blog page will cover what the validation process is, what the possible vulnerability scanner results are, and why vulnerability findings may be misleading. In this blog page, we are going to take a look at what an endpoint device is, what the three levels of security are, the areas cybercriminals are most likely to target, and lastly how to harden our workstations. It’s easy to forget that physical security is still a crucial aspect of keeping our information safe as our lives become increasingly online. Physical security is more crucial than ever in a cyber ecosystem. Consider this: if a hacker has physical access to your computer, they will be able to overcome any software security mechanisms. As a result, it’s critical to take precautions to protect your computer and work environment. Here are a few suggestions for ensuring physical safety: - When you’re not using your computer, keep it in a closed room or office. If you must leave your computer alone, ensure that it is locked and that the screen is covered. Wireless technologies allow two or more devices to interact without the use of physical touch or a wired connection. Mobile phones, computer networking, wireless sensor networks, and other applications use wireless technology. There are a variety of wireless technologies available, each with its own set of benefits and drawbacks. Bluetooth, WiFi, and infrared are the most prevalent wireless technologies. Bluetooth is a wireless communication technology that allows two devices to connect over a short distance. Bluetooth is found in a variety of devices, including cell phones, headsets, and wireless keyboards and mice. In this blog article, we will discuss the basics of how to safely use web browsers. Web applications are used for a wide range of purposes by individuals and different organizations.
These web applications provide multiple benefits to their users as well as various functionalities. They are, nonetheless, vulnerable to malicious adversaries’ attacks. The exploitation of some of these security weaknesses can impede the organization’s key business processes, resulting in significant financial losses. As a result, it is very important to identify and fix such vulnerabilities discovered in the web application. File inclusion vulnerabilities are one of the most common types of vulnerabilities. This article discusses the many forms of file inclusion vulnerabilities, as well as their consequences and how to protect against them. Servers need particular security measures, because the information stored on them is the most profitable objective for an attacker. This blog post will help you secure your servers with simple and clear guidelines. The act of monitoring and capturing data packets as they travel across a network is referred to as network sniffing. This can be done for various reasons, including network troubleshooting, monitoring user behavior, and stealing sensitive data. Network sniffing can be used to identify the source of an error when troubleshooting a network. In this article we will describe what vulnerabilities are, and best practices for managing them. A set of subroutine definitions, protocols, and tools for developing application software is known as an application programming interface (API). It represents a set of communication protocols between different software components. A good API makes it easy to create software by supplying all of the necessary building blocks, which the programmer then assembles. A web-based system, operating system, database system, computer hardware, or software library may all have APIs. The API allows a programmer creating an application program to send a request to the operating system.
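A common defense against the file inclusion vulnerabilities discussed above is to canonicalize the user-supplied path and verify it stays inside the web root before touching the filesystem. The sketch below assumes a hypothetical `pages` directory as the web root:

```python
import os

BASE_DIR = os.path.realpath("pages")  # hypothetical web root

def safe_include(requested: str) -> str:
    """Resolve a user-supplied page name and refuse anything that escapes
    the web root — the core defense against path-traversal file inclusion."""
    candidate = os.path.realpath(os.path.join(BASE_DIR, requested))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal blocked")
    return candidate

print(safe_include("about.html"))
try:
    safe_include("../../etc/passwd")
except ValueError as e:
    print(e)  # → path traversal blocked
```

An even stricter approach is an allow-list: map page names to files explicitly and include nothing else.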
The operating system then forwards the request to the appropriate software component, which completes the task and returns a response to the programmer. Employees in an organization are provided access to various resources and applications in order to perform their day-to-day responsibilities. Instead of requiring the user to create a different set of credentials to access each application or resource, the organization employs SSO (Single Sign-On) and federated identity management technologies, which results in smoother access to these resources and applications. These technologies, when implemented correctly, increase functionality and protect an organization’s valuable assets. This article covers federated identity management concepts, the different frameworks used for its implementation, and the security challenges related to it. There are different forms of covert communication that involve the use of some medium to hide something. Cryptography and steganography are often used together to conceal crucial data. Both have nearly the same aim at their heart, which is to safeguard a message or information from third parties. They do, however, safeguard the information via completely different approaches. Phishing, like fishing, is a technique used to “fish” for usernames, passwords, and other sensitive information from a “sea” of users. The “ph” spelling is a nod to early telephone-system hackers, who were dubbed “phreaks,” and hackers have frequently used “ph” in place of “f” ever since. Phishing (pronounced “fishing”) is one of the social engineering attacks that attempt to steal your money or identity by tricking you into disclosing personal information – such as credit card numbers, bank account information, or passwords – on websites that appear to be legitimate. A password is a protected string of characters that is used to authenticate a user. Passwords are the most widely used authentication method, yet they are also the weakest.
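To show how steganography differs from cryptography in practice, here is a minimal least-significant-bit sketch: the message is not scrambled, just hidden in bits a viewer would not notice. The `cover` bytes stand in for image pixel data:

```python
def hide(cover: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least-significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def reveal(stego: bytearray, length: int) -> bytes:
    """Read the hidden message back out of the low bits."""
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytearray(range(48))  # stand-in for image pixel data
stego = hide(cover, b"hi")
print(reveal(stego, 2))  # → b'hi'
```

Combining the two techniques means encrypting the message first and then hiding the ciphertext, so an observer neither notices nor can read it.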
Users sometimes choose passwords that are exceedingly easy to guess or are based on personal information about the user (e.g. their birth month or the name of their pet). They may even scribble it down on a piece of paper, hide it in a location where it may be easily found, or share it with others. Organizations that utilize passwords as the primary or one of the sources of authentication should implement proper security controls to prevent them from being compromised, as a compromise can have severe implications. This article will go over the basics of password cracking, as well as the techniques and tools used to crack passwords and the mechanisms used to protect them. An Intrusion Detection System (IDS) is a type of security device that analyzes data packets and compares them against known signatures. The goal of an IDS is to detect intrusions before they cause damage. In this blog post we will explain how an IDS can identify harmful events using knowledge-based detection and behavior-based detection, along with their different aspects, benefits, and drawbacks. This blog will explain why you should stop logging in as root all the time and provide the best security alternative to doing so. People, processes, and technology are the three essential components of a security program. In order to provide effective security protection to an organization’s assets, all three components must function together. Humans, according to several security experts, are the weakest link in the security chain. Human errors can sometimes have a devastating impact on an organization’s security even if you have the best technologies in place defending your assets. These errors might occur as a result of carelessness, a lack of security awareness education, or excessive permissions. According to research conducted by Stanford University, nearly 88 percent of all data breaches are the result of human error.
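One of the password protection mechanisms mentioned above is storing a salted, deliberately slow hash instead of the password itself, so a stolen database cannot simply be reversed with a lookup table. A minimal sketch using Python's standard library; the iteration count here is illustrative, and production systems should follow current guidance:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; never store the password itself."""
    salt = salt or os.urandom(16)  # unique salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("hunter2", salt, digest))                       # → False
```

The many iterations are the point: each guess costs the cracker the same work, turning millions of guesses per day into far fewer.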
When it comes to defending your company against various attack vectors, it is critical to keep the human factor in mind. This article goes over the basics of social engineering attacks and how to prevent them. The network operating system (NOS) is a software-based networked environment that allows many workstations and computing devices to share resources. In 1993, Microsoft released Windows NT 3.1, which featured a NOS environment. Many aspects of the LAN Manager protocols and the OS/2 operating system were merged in this product. Over the next few years, the Windows NT NOS slowly evolved into Active Directory, which was first formally deployed in Windows Server 2000. Reverse Engineering (RE) has long been the leading technique for understanding the structure and operation of malicious programs and what they’re programmed to do. Sandboxing is a technique in which you build an isolated test environment, or “sandbox,” in which you execute or “detonate” a suspicious file or URL attached to an email. The sandbox should be a safe, virtual environment that closely mimics the CPU of your production servers. SECaaS (Security as a Service) is best defined as a cloud-based approach for outsourcing cybersecurity services. SECaaS, like Software as a Service, is a subscription-based security service hosted by cloud providers. For corporate infrastructures, Security as a Service solutions have grown in popularity as a method to relieve in-house security team duties, scale security demands as the organization expands, and avoid the costs and upkeep of on-premise alternatives. Platform as a service (PaaS) is a cloud computing model in which users receive hardware and software resources from third-party vendors over the internet. These tools are often required for application development. As a result, PaaS eliminates the need for developers to set up in-house hardware and software in order to create or execute a new application.
Software as a service (SaaS) is undeniably changing the way we think about and use software. We no longer have to install and maintain software on our own computers or devices; instead, we may get it over the internet, generally for a fee. This move has huge repercussions for organizations as well as individual users. Shellcoding is a form of system exploitation in which an attacker inserts malicious code into a program or file in order to execute arbitrary commands. Shellcode is often used to create a backdoor in a system, allowing the attacker to gain access and control. In many cases, the attacker will encode the shellcode to avoid detection. NIST (National Institute of Standards and Technology), a division of the United States Department of Commerce, is in charge of developing metrics, standards, and technology to promote innovation and competitiveness in the field of science and technology. With the number of cybersecurity attacks on the rise, the NIST Cyber Security Framework was created to assist various organizations in improving their security posture. NIST CSF was created in conjunction with security professionals from the private sector and government agencies, and it is currently being used by a growing number of businesses throughout the world to design their own security frameworks. This article delves into the specifics of this framework as well as the advantages of implementing it. Your company, like many others, is probably always searching for methods to enhance efficiency and save on expenses. Moving to a cloud-based infrastructure is one approach to doing this. Infrastructure as a service (IaaS) is basically cloud computing: it allows enterprises to access, manage, and use infrastructure resources in a scalable, pay-as-you-go approach. IaaS is an excellent choice for companies who want to shift to the cloud but don’t want to give up the control and flexibility that comes with owning their own infrastructure.
You may decide how much or how little of your infrastructure to migrate to the cloud using IaaS. PowerShell is a Microsoft .NET framework-based open source command-line shell and scripting language. PowerShell is a popular tool for automating tasks and configuring systems. IT professionals use PowerShell to carry out their tasks in the same way as Command Prompt in Windows. It also saves time and effort for system administrators by automating daily repetitive tasks that need to be performed on various workstations and servers. PowerShell also provides complete access to the Win32 API (Windows Application Programming Interface). The Win32 API is an application programming interface that allows you to access important Windows functions. Security awareness is a prerequisite for security training. Improvements in user activity are required for the optimal adoption of a security system. Such adjustments largely consist of modifications to typical job tasks in order to be consistent with the security policy’s standards, rules, and procedures. User-behavior improvement needs some kind of user education. To create and manage security education and awareness, all important components must be widely understood. Furthermore, plans for presentation, integration, and execution must also be designed. Web application firewalls (WAFs) are some of the most recent developments in the field of firewall technology. In this blog post, we will define what a web application firewall is and how it functions. We will also cover some of the benefits of using a web application firewall. Any cloud infrastructure architecture that comprises both public and private cloud solutions is referred to as a hybrid cloud. The resources are usually managed as part of a larger infrastructure environment.
Based on corporate business and technical policies, apps and data workloads can share resources between public and private cloud deployments to ensure security, high performance, scalability, low cost, and efficiency. A popular example of hybrid cloud is when organizations employ private cloud environments for their IT workloads and supplement the infrastructure with public cloud resources to manage periodic surges in network traffic. Alternatively, you might save money by using the public cloud for non-critical tasks and data while using the private cloud for sensitive data. As a result, access to additional processing capacity is provided as a short-term IT service via a public cloud solution, rather than requiring the high CapEx of a private cloud system. The ecosystem is connected smoothly to enable optimal performance and scalability in response to changing business needs. Any cloud system dedicated to a single enterprise is referred to as a private cloud. You are not sharing cloud computing resources with any other enterprise in the private cloud. The data center resources may be on-site or off-site, and managed by a third-party vendor. The computing resources are not shared with other clients and are delivered via a secure private network. The private cloud can be customized to match the organization’s specific business and security requirements. Organizations may run compliance-sensitive IT workloads without sacrificing the security and speed traditionally only accomplished with specialized on-premise data centers, thanks to increased visibility and control over the infrastructure. Cloud computing is a broad term that encompasses a variety of categories, types, and architecture models. This networked computing approach has changed the way we operate, and you’re probably already using it. However, cloud computing isn’t just one thing… Cloud computing has become a big topic in the business and tech world in the past few years.
Cloud computing is the delivery of computing services such as servers, storage, databases, networking, software, analytics, and intelligence via the Internet (“the cloud”) in order to provide faster innovation, more flexible resources, and economies of scale. Every day, an organization’s assets are exposed to a variety of security threats. These threats can damage the assets by exploiting vulnerabilities present in them. The probability of these threats exploiting the assets’ weaknesses and the resulting impact is referred to as risk. Security controls are employed to mitigate this risk. There are various types of security controls, each of which serves a distinct purpose. The article aims to explain what security controls are, their various types, and what functions they provide. It also discusses how these controls can be combined to provide the organization with defense-in-depth protection for its assets. An advanced persistent threat (APT) is a type of attack campaign in which an unauthorized user gains access to a network and remains there undetected for a prolonged period of time. These attacks are often orchestrated by highly skilled and well-funded adversaries and are designed to achieve specific objectives, such as espionage or data theft. While APT attacks can be difficult to detect and defend against, there are a number of steps organizations can take to reduce their risk of becoming a victim. SMTP stands for Simple Mail Transfer Protocol. This protocol allows email messages to be sent from one computer to another. The Internet was originally designed to allow computers to communicate directly with each other, without human intervention. In order to send emails, you need to know the address of the recipient (the person or company who receives the message). 
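The risk definition above (the probability of a threat exploiting a weakness, times the resulting impact) is often operationalized as a simple scoring exercise used to decide where controls go first. The threat names, 1-5 scales, and threshold below are all assumptions for illustration:

```python
# Toy risk register; likelihood and impact on an assumed 1-5 scale.
threats = [
    {"name": "phishing",       "likelihood": 4, "impact": 3},
    {"name": "ransomware",     "likelihood": 2, "impact": 5},
    {"name": "insider misuse", "likelihood": 1, "impact": 4},
]

def prioritise(threats, threshold=10):
    """Score each threat (likelihood x impact) and flag those whose risk
    exceeds the organization's tolerance, i.e. those needing a control."""
    for t in threats:
        t["risk"] = t["likelihood"] * t["impact"]
        t["needs_control"] = t["risk"] >= threshold
    return sorted(threats, key=lambda t: t["risk"], reverse=True)

for t in prioritise(threats):
    print(t["name"], t["risk"], t["needs_control"])
```

Layering several controls against the highest-scoring threats is exactly the defense-in-depth idea the article describes.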
Security Information and Event Management (SIEM) is a software system that combines Security Information Management (SIM), the automated collection of log-file data into a central archive, with Security Event Management (SEM), which monitors, correlates, and notifies users of events as they occur in a system, in order to collect, analyze, and report on all security-related events happening in an organization. The goal is to provide real-time monitoring of security devices such as firewalls, antivirus software, intrusion detection systems, and other network-based systems for potential threats. This post will explore the benefits of implementing a SIEM in your business by highlighting some of its most important features. A keylogger is a software program that records keystrokes. Keyloggers are used to track what a person types on their keyboard, including passwords, credit card numbers, and other sensitive information. Some keyloggers are installed without the person’s knowledge, while others are installed with the person’s consent. Once installed, the keylogger records all keystrokes and sends them to the person who installed it. Keyloggers can be used for legitimate purposes, such as monitoring employees or children, or for malicious purposes, such as stealing passwords and credit card numbers. People rarely pause to consider how the internet works. Along with the benefits of using the internet, there are drawbacks and risks. But what happens when you browse the internet? You could be using a proxy server at work, on a Virtual Private Network (VPN), or you could be one of the more tech-savvy people who always uses some kind of proxy server. Perimeter security technologies offer a wide range of security services, from basic firewall protection to end-to-end network and business security. In essence, perimeter security is a defense system built around your network to prevent malicious attacks from entering.
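At its core, the SIEM correlation described above reduces to rules evaluated over a stream of normalized events. The toy rule below flags repeated failed logins from one source; the event tuples are synthetic stand-ins for parsed firewall, IDS, and server logs:

```python
from collections import Counter

# Synthetic auth log: (source IP, event) pairs.
events = [
    ("10.0.0.5", "login_failed"), ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"), ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"), ("192.168.1.7", "login_failed"),
    ("192.168.1.7", "login_ok"),
]

def correlate(events, threshold=5):
    """Flag any source with `threshold` or more failed logins — a simple
    knowledge-based correlation rule of the kind a SIEM evaluates."""
    failures = Counter(ip for ip, ev in events if ev == "login_failed")
    return [ip for ip, n in failures.items() if n >= threshold]

print(correlate(events))  # → ['10.0.0.5']
```

A production SIEM runs thousands of such rules continuously and across sources, which is where the real-time alerting value comes from.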
A DMZ network connects a company’s secure perimeter to unsecured external networks like the internet. Web servers and other externally facing systems can be located in the DMZ without jeopardizing the security of internal resources. This blog post will explain what DMZs are and why they are important components of traditional network security architectures. Even if best practices are followed, DMZs are not perfect security solutions. We will demonstrate how modern security solutions based on Zero Trust are better suited to the way businesses operate today. How often do you get annoyed by spyware or adware? If you don’t want to be tracked or see ads popping up every time you visit a site, then you should definitely check out these ways to remove them. We will deep dive into spyware and adware in three sections. In this first part of the blog, we will focus on what spyware is and what spyware can do. Spyware and adware have different motivations than viruses, worms, and backdoors. Although viruses, worms, and backdoors are often harmful and attack the local host, spyware and adware are frequently motivated by financial gain. As we have covered in the previous blog, spyware is known for tracking your surfing patterns and routing your Internet browser to sites that benefit their producers. When it comes to virtual machines, one of the biggest security threats is the possibility of data breaches. These can occur when unauthorized users gain access to the system, or when malware is able to infect the system. Additionally, virtual machines can be subject to denial of service attacks, which can prevent authorized users from accessing the system. To help mitigate these risks, it is important to implement security measures such as strong authentication and authorization controls, as well as effective malware detection and prevention. 
Enterprise data centers are made up of many servers, the majority of which are idle because the workload is directed to only a few servers on the network. This wastes expensive resources such as hardware, power, maintenance, and cooling requirements. Virtualization increases resource utilization by dividing a physical server into multiple virtual servers. These virtual servers appear and behave as if they were individual physical servers, each with its own operating system and applications. If you’ve ever used a Windows device, you’ve probably encountered updates frequently — just before shutting down your computer. Your device may occasionally prompt you to install critical updates. There are also twice-yearly feature updates that are required! What exactly are these Windows Updates? What is the distinction between the various types of Windows Updates? Let us now examine them. Before we get there, let’s distinguish between Windows Updates and Microsoft Updates. Application whitelisting and application blacklisting are the two main approaches to application control. With no clear guidelines on which is superior, IT administrators are frequently torn when forced to choose between the two. We’ll go over the advantages and disadvantages of both so you can decide which is best for your organization. Some businesses may station a security guard at their entrance to ensure that only employees with a valid ID are allowed in. This is the fundamental idea behind whitelisting; all entities requesting access will be validated against an already approved list and will be permitted only if they appear on that list. Employees fired for misconduct, on the other hand, are frequently placed on a banned list and denied entry. Blacklisting works in the same way: all entities that may be dangerous are typically placed on a collective list and blocked.
Non-employees who attempt to gain entry, such as interview candidates, will be placed on the greylist because they are not on the whitelist or the blacklist. Based on the authenticity of their entry request, the security guard either grants or denies it. In a network, the administrator usually acts as a security guard and has complete control over everything that enters it. Why do we need security roles and responsibilities? In this blog post, we will answer what security roles and responsibilities are and how important they are to the organization. SDLC, or Software Development Life Cycle, is a set of procedures for developing software applications. These steps break the development process down into tasks that can be assigned, completed, and measured. “Bring Your Own Device,” or “BYOD,” has become a popular business topic as more and more employees use their personal smartphones and tablets for work. While BYOD has many advantages for businesses, such as increased productivity and flexibility, it also has some drawbacks. The process of managing or screening access to specific emails or webpages is known as content filtering. The goal is to prevent access to content that contains potentially harmful information. Organizations commonly use content filtering programs to control content access through their firewalls. Home computer users can also use them. To prevent access to information, content filtering can be implemented as hardware or software and is frequently built into internet firewalls. Content filtering tools are used by businesses to improve security and enforce corporate policies related to information system management, such as when filtering social networking sites. Content filtering prevents internet users from accessing content that may be harmful. It restricts access to content that is considered illegal, inappropriate, or objectionable. Individual internet users, for example, can use it to protect children from graphic or inappropriate content. 
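The security-guard analogy above maps directly to code: check the blacklist, then the whitelist, and greylist anything unknown for review. The entity names are illustrative:

```python
WHITELIST = {"alice", "bob"}  # approved employees (illustrative)
BLACKLIST = {"mallory"}       # known-bad entities

def admit(entity: str) -> str:
    """Mirror the security-guard model: blacklist blocks, whitelist
    admits, and unknown entities land on the greylist for review."""
    if entity in BLACKLIST:
        return "denied"
    if entity in WHITELIST:
        return "allowed"
    return "greylist"

for name in ("alice", "mallory", "carol"):
    print(name, admit(name))
```

Checking the blacklist first means a misconfigured entry on both lists fails closed rather than open, which is the safer default for the administrator acting as the network's guard.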
Content filtering also allows an organization to restrict access to pornographic content, which, if ignored, could lead to sexual harassment claims or a demeaning work environment.

A cybersecurity strategy is intended to safeguard an organization's data and systems. This includes alerts whenever suspicious activity is detected, as well as an automated response to block the attack. Unfortunately, no security system is perfect, and there will be false alarms.

Technology has altered the way we live and work, as well as the way criminals operate. Criminals used to have to physically break into a building or bank to commit a crime. Now they can commit crimes from anywhere in the world by going online. This has made it difficult for organizations to keep up with the evolving criminal landscape. One way organizations have adapted to this change is by utilizing computer security systems designed to detect and defend against cyberattacks. These systems operate by monitoring network activity for signs of an attack, raising alerts and triggering an automated response whenever suspicious activity is detected.

TCP/IP has innate flaws. It was designed to run on a government network with a small number of hosts that trusted each other, and security was not a priority for its creators. Now that the network has expanded worldwide, security is our most critical concern, so extra measures were required to protect conversations over the Internet. In this blog, we will explain what Internet Protocol Security is and how it can offer protection over networks.

Several approaches for creating a safe and authenticated channel between hosts have been proposed. Eventually, a better replacement for the SSL protocol was created: TLS. In this blog we will give an introduction to the Transport Layer Security (TLS) protocol.

In this blog post, we are going to explain what the Secure Sockets Layer (SSL) and Secure-HTTP (HTTP-S) protocols are and how they differ from each other.
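As a concrete taste of these protocols, Python's standard `ssl` module exposes TLS to ordinary programs. This sketch only builds a client context and checks its defaults; the commented-out connection shows how it would wrap a TCP socket (the hostname is illustrative).

```python
import ssl

# A default context enables certificate verification and hostname checking,
# and disables protocol versions that are no longer considered safe.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a TCP socket would then look like this (not executed here):
# import socket
# with socket.create_connection(("example.org", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.org") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```

The point of the defaults is that authentication of the server (via its certificate) and encryption of the channel come as a package; disabling either one quietly reopens the door to interception.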
One of the oldest security principles still in use today is privilege separation. Simply put, it argues that no single person should have sufficient authority to cause a catastrophic event to occur. Separation of responsibilities guarantees that tasks are distributed to workers in such a way that no single employee has complete control of a process from start to finish. Separation of tasks also means each individual has a distinct job, allowing everyone to specialize in a certain area.

A rootkit is a malicious software program that is designed to gain access to a computer system without being detected. Once a rootkit is installed on a system, it can be used to remotely control the system, steal sensitive data, or perform other malicious activities. Rootkits are difficult to detect and remove, and can be used to establish a persistent presence on a system.

This blog article will discuss two username/password authentication protocols: the Password Authentication Protocol (PAP) and the Challenge Handshake Authentication Protocol (CHAP). By the end of this page, you'll know which of the two methods you should use for authentication of point-to-point packets.

If you want to recover a password, the simplest method is a brute force attack. Is brute force really effective? Well, many users appear to use birthdates or other memorable dates as passwords, or other easily guessed numbers or phrases. Today we are going to examine what a brute force attack is and how we can protect our systems against it.

In this blog, we will take a quick look at PKI and examine how digital certificates assist in securing interactions between a server and an end user, the components of a digital certificate, and the function of certificate authorities.

As we have learned, a Virtual Private Network allows two networks to communicate securely across a public network.
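The essence of a VPN tunnel is encapsulation: the entire private packet, headers and all, is encrypted and then wrapped inside a new outer packet that travels between the gateways. A minimal sketch follows; the gateway names and addresses are made up, and the XOR "cipher" is used only to keep the example dependency-free, never as a real protection.

```python
import json
import secrets

KEY = secrets.token_bytes(32)  # shared secret between the two gateways (illustrative)

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR, for illustration ONLY. A real VPN uses a vetted
    # cipher such as AES-GCM or ChaCha20-Poly1305. XOR is its own inverse,
    # so the same function also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encapsulate(inner_packet: dict) -> bytes:
    # Tunneling: the whole private packet is encrypted, then wrapped in a
    # new outer packet addressed gateway-to-gateway across the Internet.
    ciphertext = toy_encrypt(json.dumps(inner_packet).encode(), KEY)
    outer_header = json.dumps({"src": "gateway-a.example", "dst": "gateway-b.example"})
    return outer_header.encode() + b"|" + ciphertext

def decapsulate(wire: bytes) -> dict:
    _, ciphertext = wire.split(b"|", 1)
    return json.loads(toy_encrypt(ciphertext, KEY))

private_packet = {"src": "10.0.1.5", "dst": "10.0.2.9", "payload": "hello"}
wire = encapsulate(private_packet)
print(decapsulate(wire) == private_packet)  # True
```

Notice that observers on the public network see only the outer gateway addresses; the private source, destination, and payload are all inside the ciphertext.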
A VPN also enables a server-to-server connection, as opposed to a client-to-server connection, allowing two networks to establish an extended intranet or extranet. In this blog post, we will cover how to connect distant branches of a company, or partners, securely with a site-to-site VPN.

It is usually suggested to have your own private dedicated line between multiple sites for safe connections. However, this approach is quite expensive, since the sites must be connected by separate cables, and laying cable across geographies is a costly operation; maintenance is another problem. To address these issues, the virtual private network (VPN) was created. In this blog, we will define what a virtual private network is and how the tunneling process works.

Data security management is a process that provides an organization with an effective means to protect the confidentiality, integrity, and availability of its data. It aims to ensure that information systems are designed, operated, and monitored in a way that protects the privacy rights and safety of the individuals who use or access them. For data security management to be successful, it must be integrated into all aspects of organizational governance. It also assures the implementation of technologies that increase a company's visibility into where and how its critical data is stored and used.

Cross-site request forgery (CSRF) is a type of attack that allows an attacker to perform unauthorized actions on behalf of a user. A CSRF attack happens when a malicious site sends a request to a victim site, causing the victim site to perform an action intended by the attacker. This can be used to steal information, such as login credentials, or to take actions on the user's behalf, such as moving funds out of their account.

For many civilian companies, integrity must be favored over confidentiality.
As a result of this necessity, numerous integrity-focused security approaches were created, such as Biba and Clark-Wilson. In the following blog post, we are going to look at the Biba model, discuss its unique characteristics for securing the integrity of data, and compare it with the Bell-LaPadula (BLP) model.

We typically define security as the sum of confidentiality, integrity, and availability. These three components (known as the CIA triad) are the foundations of any well-designed information security practice. In enterprises, or individually, we adopt security policies that model the CIA triad from a protection perspective. However, attackers have their own model too. It consists of three pillars: disclosure, alteration, and denial (abbreviated as the "DAD" triad). In this blog post, we are going to examine each of the DAD triad components and how they map to their CIA triad counterparts.

How do so many people find a way to access resources securely? In this blog post we are going to explore a network authentication protocol named Kerberos.

A vulnerability assessment is the process of detecting, assessing, and prioritizing vulnerabilities in computer systems, networks, and applications. Its purpose is to provide data that can be used to decide where to allocate resources to address vulnerabilities. A vulnerability assessment can be conducted using a variety of approaches, with the most appropriate method depending on the specific system under assessment and the assessment's objectives. In this article, let us look at different types of vulnerability assessment tools.

XSS attacks are a type of injection in which malicious scripts are injected into trusted websites. XSS attacks occur when an attacker uses a web application to send malicious code to a different end user, typically in the form of a browser-side script.
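The vulnerable and safe versions of such a page can be shown side by side. This is a simplified sketch in which the rendering functions stand in for a real template engine; the standard defense illustrated here is output encoding.

```python
import html

def render_comment(user_input: str) -> str:
    # Vulnerable: user input is placed into the page verbatim.
    return f"<p>{user_input}</p>"

def render_comment_safely(user_input: str) -> str:
    # Encoding the output neutralizes any injected markup.
    return f"<p>{html.escape(user_input)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))         # script tag survives: exploitable
print(render_comment_safely(payload))  # &lt;script&gt;...: rendered as text
```

The encoded version displays the attacker's payload as harmless text instead of letting the browser execute it, which is exactly the validate-or-encode rule described above.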
The flaws that allow these attacks to succeed are quite common, and they occur whenever a web application uses user input in the output it generates without validating or encoding it.

A buffer overflow is a type of computer security vulnerability that occurs when more data is written to a memory buffer than the buffer can hold. This can result in data corruption or the execution of malicious code. Programming errors are frequently the source of buffer overflow vulnerabilities. A programmer, for example, may fail to check the size of a user's input before storing it in a buffer. If the input exceeds the buffer size, the excess data will overflow into adjacent memory locations. Buffer overflow attacks take advantage of these flaws by supplying input that is larger than the intended buffer. With malicious input, this can cause the program to crash or allow the attacker to execute code on the target system.

Every organization has resources that are used by various entities on a daily basis. The retrieval of information from these resources is referred to as access. These resources, however, should be accessed in a way that does not jeopardize their security. Access controls enable organizations to ensure secure access to resources.

Some of the most dominant risks to systems come in the form of malicious software, also known as malware. Attackers meticulously construct, write, and build malware programs to breach security and/or cause damage. These programs are designed to be self-contained and do not necessarily require user involvement or the presence of the attacker to do their damage.

In this blog post, we are going to give an introduction to software backdoors: possible ways they get into our environment, their nature and scope, and some recommendations on how to prevent them.

The file system is one of the most essential parts of your operating system, and you must safeguard it.
To secure our file systems, we first need to understand their structure and nature. In this blog post, we are going to do a very quick primer on file systems.

Our previous blog post about network protocols covered the foundational protocols TCP/IP and UDP. The additional parts of the network protocols series will cover other protocols, almost all of which rely on TCP/IP or UDP to function. This article continues with ARP, DNS, DHCP, HTTP, and FTP.

Network security tools ensure that you are well equipped to deal with any malicious attempts. In this blog, we are going to explore the network-layer firewall mechanism and its pros and cons. A firewall is an essential component of any organization's security infrastructure. It can be implemented as hardware, software, or a hybrid of the two. A firewall can help protect a network from attack by restricting unauthorized incoming traffic, and it is often used to create a barrier between a trusted internal network and an untrustworthy external network, such as the Internet.

In an organization, you can secure information in many ways. In this article, we are going to give a general overview of the importance of security models and discuss the rules for securing data using the Bell-LaPadula model, along with its benefits and disadvantages.

An information system (IS) is the full combination of software, hardware, data, people, procedures, and networks that enables a company to utilize its information resources. Information can be input, processed, output, and stored using these six components. Each IS component has its own strengths, weaknesses, and security needs, as well as unique features and applications.

A network is a collection of computers linked together by a wired or wireless connection so that they can communicate electronically or access shared network resources. A variety of hardware devices are used to build and extend a computer network.
This article will go over the various network devices and their functions.

Data Loss Prevention (DLP) is a security mechanism that detects sensitive data and alerts administrators when it leaves the network or is accessed without authorization. Endpoint security, email, cloud-based solutions, and mobile device management software are all examples of DLP products. DLP deployments may be viewed as a barrier or a delay in some workers' and departments' day-to-day tasks and obligations. However, when properly configured, a DLP system can be a valuable asset to a corporation, supporting a variety of security and compliance goals. The danger of data loss (both customer and company) rises when organizations change their working environments and extend their IT infrastructure with new and innovative technology.

The goal of cyber-attacks is to disable computers, disrupt computer systems, steal data, obtain illegal access to computer systems, or delete data. Attackers steal data by exploiting flaws or misconfigurations in computer code, which leads to cybercrime. Attacks can be launched remotely from anywhere in the world, and a cyber attack is designed in such a way that it causes significant damage to an organization.

The intrusion detection system (IDS) is a security system that monitors, detects, and protects a network from malicious activity. An IDS works by raising an alert when an intrusion is detected. IDSs are widely used to strengthen an organization's security because they monitor all incoming and outgoing traffic for malicious activity. There are two kinds of IDS: active and passive.

Data Loss Prevention (DLP) strives to prevent unauthorized disclosure of an organization's data assets. Insecure access control to sensitive resources, such as files containing confidential data, makes data leakage very easy; hence, DLP plays a key role in protecting an organization's data from unauthorized access.
Different DLP solutions may differ in how they identify threats and what action should be taken if an adversary breaches security controls.

The electronic transfer of information (audio, video, or data) over vast distances between electronic equipment is known as telecommunication. Data can be transmitted using either wired (coaxial, Ethernet, or fiber-optic connections) or wireless techniques. Telecommunication and networking employ a variety of processes, devices, software, and protocols. Over time, different models have evolved to better describe data flow between devices using various protocols. A protocol is a set of instructions or rules that govern data transmission between electronic devices. Most operating systems and protocols adhere to the OSI model as an abstract framework. The purpose of this article is to explore this model and how it may be used to visualize the process of data transmission over a network.

Data governance involves defining policies that govern how data generated within organizations is managed, accessed, shared, and owned. Governance refers to the rules, regulations, and procedures developed to achieve organizational objectives; this type of policy sets out the conditions under which organizational processes, functions, and services operate.

After identifying a target system and conducting early intelligence gathering, a hacker can focus on gaining access to the target system. We can think of scanning as an extension of reconnaissance, in which the attacker gathers a wide array of data, such as which operating system is in use, which services are active, and any configuration vulnerabilities. After collecting this useful information, the hacker can plan an attack strategy based on these findings.

Security testing is the process of assessing and testing software's security by discovering and mitigating various vulnerabilities and security concerns.
Security testing's main purpose is to ensure that software or applications are resistant to cyber-attacks and may be used safely.

Network protocols determine how data is transmitted between devices in a network. These protocols allow devices to communicate with each other without any regard for a device's design or internal workings, and they play a critical role in today's digital communications.

Ransomware is a type of malicious software that is either cryptographic or locker-based. Cryptographic ransomware encrypts the victim's system, devices, folders, and/or files, making them impossible to read or use without a key. Locker ransomware locks the screen and ignores user input. After a successful attack, the adversaries usually demand a ransom from the victim for decryption or unlocking. Ransomware is often delivered by email as an attachment, but it may also spread via social media messages, pop-ups, or infected websites. The ransomware process usually starts with the execution of a malicious file on the victim's computer. This file downloads other files that connect to a malicious server. After encrypting files or locking systems, notes are released to inform users about the ransomware and the payment procedure to receive the key. An example of a large-scale ransomware attack is WannaCry in 2017, which infected approximately 230,000 computers in over 150 countries.

Zombies and botnets are two of the most popular forms of malware used to attack computer systems. Botnets are virtual networks of zombies created by attackers who use bot programs to remotely control susceptible computers. Botnets can be used to conduct coordinated attacks against other computing resources, such as targeted distributed denial of service (DDoS) attacks. The evolution of bot malware has been distinguished by a shift in motive from curiosity and fame-seeking to illegal financial gain.
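One simple way defenders spot the flood traffic a botnet generates is per-client rate analysis. This toy sketch (the threshold and IP addresses are made up) counts requests per source in a time window and flags outliers:

```python
from collections import Counter

LIMIT = 100  # hypothetical per-minute request threshold

def flag_flooders(client_ips):
    """Given the client IPs seen in a one-minute window, return those
    sending suspiciously many requests."""
    counts = Counter(client_ips)
    return {ip for ip, n in counts.items() if n > LIMIT}

window = ["203.0.113.7"] * 500 + ["198.51.100.2"] * 12
print(flag_flooders(window))  # {'203.0.113.7'}
```

Real DDoS mitigation is far more involved, since a large botnet spreads the flood across thousands of addresses, each individually under any fixed threshold.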
A person-in-the-middle attack is a type of cyber-attack in which the attacker intercepts communication between two parties (the user and the application) in order to obtain information or data. This type of attack can be difficult to detect, and its primary objective is to steal sensitive information like login credentials, personal information, and financial details. Let's take a look at what a person-in-the-middle attack is.

There are three types of methods that an organization uses to secure its infrastructure: red teaming, blue teaming, and purple teaming. Each takes a different strategy for safeguarding the organization. Let's take a look at what red, blue, and purple teaming are all about.

A denial of service attack is an attempt to prevent intended users from accessing a system or network resource. Denial of service attacks are frequently used to target a specific person or group, but they can be used against anybody who uses the Internet. A denial of service attack can be carried out in a number of ways, but the most frequent is to flood a targeted system with requests.

There are a few things to check for if you suspect your computer has been infected with malware. To begin, look for any strange or unexpected behavior. If your computer becomes noticeably slower or apps crash for no apparent reason, this might be an indication of infection. Another red flag is the appearance of new toolbars or icons that you did not install, or a home page that has been modified without your consent. The best thing to do if you suspect malware is to perform a scan using a trusted anti-virus application. This will help detect and remove any dangerous software that may be present on your PC.

Malicious or anomalous activities can occur on a system at any time, making the presence of intrusion detection systems critical.
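The signature-matching idea at the heart of many intrusion detection systems can be reduced to a small sketch. The two signatures below are illustrative toy patterns, not real IDS rules:

```python
import re

# Hypothetical signatures: patterns that have appeared in past attacks.
SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select|or\s+1=1", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./"),
}

def inspect(log_line: str):
    """Return the names of all signatures matched by one log entry."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

print(inspect("GET /search?q=' OR 1=1 --"))     # ['sql-injection']
print(inspect("GET /static/../../etc/passwd"))  # ['path-traversal']
print(inspect("GET /index.html"))               # []
```

Signature matching only catches attacks that look like something already known; that limitation is why production systems pair it with anomaly detection.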
An intrusion detection system monitors a system or network and analyzes data to identify suspicious activities and potential incidents. Intrusion detection systems can be host-based or network-based.

There are many ways that malware can be delivered to a system. Some common methods are email attachments, downloading infected files from the internet, and running infected programs. Malware can also be delivered through exploit kits that take advantage of vulnerabilities in programs and operating systems. Once malware is on a system, it can spread to other systems and devices on the network.

Malware is a broad term that refers to any software with malicious intent. Malware comes in a variety of forms, each with its own method of infecting your computer. These methods may include attempting to obtain unauthorized control of your computer systems, stealing personal information, encrypting critical information, or causing other harm to your computer. The damage can sometimes be irreversible. Email attachments, infected websites, torrents, and shared networks are all popular malware sources.

The concept of a "kill chain" is used in the cybersecurity industry to describe how attackers get into a system and accomplish their goals. By understanding how attackers can hack a system successfully, cybersecurity professionals can implement countermeasures and defend their systems.

Understanding what cyber risks exist is the first step in preventing them. Cyber threats can take various forms and include any type of threat that uses technology to harm people or organizations. In this article, we introduce three types of security controls that can protect individuals and organizations from cyber attacks.

In information security, there are several types of threat actors. Some are motivated by money, others by political or ideological motivations, and still others by a desire to harm.
The most prevalent sort of threat actor is the profit-driven criminal. There are, however, numerous state-sponsored actors who are frequently driven by political or ideological motivations. These state-sponsored actors can be very dangerous because they have resources that most criminals do not.

A hacker is a person who uses technical expertise to gain unauthorized access to computer systems or data. Hackers may do this for a variety of purposes, including stealing sensitive information, causing damage or disruption, or simply playing around with the system. Some hackers work alone, while others may be part of a wider organization. Hacking can be accomplished by a variety of methods, such as writing code that exploits a system vulnerability, guessing passwords, or physically accessing the system's hardware. There are five main types of hackers, which will be described in this article.

Any form of malicious attack on an electronic device or system is referred to as a "cyber security threat." There are several forms of cyber security threats, but the most prevalent include malware, system failures, unauthorized access, and social engineering. Cyber security risks can harm both individuals and organizations; thus, it is critical to understand the various forms of danger that exist. Understanding the various forms of cyber security threats allows you to better plan to defend yourself and your assets against them.

An organization must employ the three "A"s of security to keep computer systems and data safe: authentication, authorization, and accounting. Authentication is the process of verifying that someone is who they say they are. Authorization is the process of ensuring that someone has the necessary authority to access a certain resource. Accounting is the process of documenting and tracking all system activity.
If we apply all three of these security standards, we can ensure that our systems are safe from unwanted access and misuse.

In today's interconnected world, a strong understanding of cyber security is essential for individuals and businesses alike. There are many cyber disciplines, but some of the most important include vulnerability management, incident response, forensics, security architecture, security engineering, and governance, risk and compliance (GRC). By understanding these key concepts, you can better protect yourself and your organization from online threats.

Cyber security has become a significant priority for both organizations and people in recent years. The potential for cyberattacks has grown exponentially as people's reliance on technology and the internet has grown. Cyber security is the practice of securing computer systems and networks from illegal access or theft. This can be achieved through a variety of tools, including firewalls, encryption, and intrusion detection.

The CIA triad plays an essential role in cyber security. The triad stands for confidentiality, integrity, and availability, and all three principles are essential to the security of information and systems. Confidentiality ensures that information is not disclosed to unauthorized individuals or entities. Integrity ensures that information is not altered or destroyed in an unauthorized manner. Availability ensures that authorized users have access to information and systems when they need them. The triad is significant because it helps organizations secure their data from unauthorized access and modification.

A zero-day vulnerability is a computer security flaw that is unknown to the general public and vendors until it is actively exploited and caught in the wild. Zero-days, or 0-days, are highly sought after by threat actors because they are highly effective at obtaining initial access to a target system.
What is ransomware? Ransomware is a malicious type of program that locks your computer, tablet, or smartphone, or encrypts your files, and then demands ransom for their safe return.

There are essentially two types of ransomware. The first type is cryptors, which encrypt files to make them inaccessible; decrypting the files requires the key used to encrypt them, and that's what the ransom pays for. The other type is called blockers; they simply block a computer or other device, rendering it inoperable. Blockers actually represent a better-case scenario than cryptors: victims stand a better chance of restoring blocked access than encrypted files.

How much is the usual ransom? There really is no "usual." However, $300 is the average ransom extortionists ask their victims to pay to restore access to encrypted files or locked computers. Some ransomware programs ask for as little as $30; some demand tens of thousands of dollars. Enterprises and other big organizations, which usually get infected through spear phishing, are likely to receive higher ransom demands. Keep in mind, though, that paying the ransom doesn't ensure the safe and reliable return of your files.

Can I decrypt the encrypted files without paying ransom? Sometimes. The majority of ransomware programs use resilient crypto algorithms, which means that without the encryption key, decrypting the files could take years. Sometimes, though, the criminals behind ransomware attacks make mistakes, enabling law enforcement to seize attack servers containing the encryption keys. When that happens, the good guys are able to develop a decryptor.

How is ransom paid? Usually, ransom is requested in cryptocurrency, namely bitcoins. This electronic currency cannot be forged. The history of transactions is available to anyone, but the owner of a wallet can't easily be tracked. That's why cybercriminals prefer bitcoins: they improve the odds of not getting caught.
Some types of ransomware use anonymous online wallets or even mobile payments. The most surprising method we have seen to date was $50 iTunes cards.

How does ransomware end up on my computer? The most common vector is e-mail. Ransomware may pose as a useful or important attachment (an urgent invoice, an interesting article, a free app); once you open the attachment, your PC is infected. Ransomware can also infiltrate your system while you're just surfing the Internet. To gain control over your system, extortionists exploit OS, browser, or app vulnerabilities. That's why it's crucial to keep your software and operating system up to date (by the way, you can delegate this task to Kaspersky Internet Security or Kaspersky Total Security, whose latest versions automate the process). Some ransomware programs can even self-propagate through local networks: if such a Trojan infects one machine or device in your home or enterprise network, the other endpoints will eventually get infected as well, although that is a rare case. There are also more predictable infection scenarios: you download a torrent, then you install a plugin, and away we go.

What kind of files are the most dangerous? Executable files are the most obvious threat. Another dangerous file category is MS Office files (DOC, DOCX, XLS, XLSX, PPT, and so forth). They may contain vulnerable macros; if you are prompted to enable macros in a Word document, think twice before you do it. Be wary of shortcut files (.LNK extension) as well: Windows can depict them with any icon, which, paired with an innocent-looking file name, can lure you into trouble. An important note: Windows opens files with known extensions without prompting the user, and by default it hides those extensions in Windows Explorer. So if you see a file named something like Important_info.txt, it could actually be Important_info.txt.exe, a malware installer. Set Windows to show extensions for greater security.

Can I avoid infection if I stay away from rogue websites or suspicious attachments?
Unfortunately, even cautious users can get infected with ransomware. For example, it's possible to infect your PC while reading news on a big, reputable news website. Of course, the website itself won't distribute malware to visitors (unless it's hacked, which is another story). Instead, advertising networks compromised by cybercriminals serve as distributors, and simply having an unpatched vulnerability lets the malware load. Here again, having up-to-date software and a fully patched operating system are key.

I have a Mac, so I don't need to worry about ransomware, right? Macs can be and have been infected with ransomware. For example, KeRanger ransomware, which infiltrated the popular Transmission torrent client, hit Mac users. Our experts believe that the number of ransomware programs targeting Apple systems will gradually increase. And with Apple devices being relatively expensive, extortionists may find Mac owners a great target for higher ransom demands. Some types of ransomware even target Linux. No systems are safe from this threat.

I use my phone to go online. Do I have to worry about ransomware on Android? Yes. Android ransomware exists, and so far it has consisted mostly of blockers, which lock the screen rather than encrypt files.

So, even iPhones are at risk? To date, there are no dedicated ransomware programs for iPhone and iPad. That statement refers to iPhones that are not jailbroken, by the way: malware can infiltrate devices that aren't bound by the security restrictions of iOS and Apple's locked-down App Store. iPhone ransomware might be just around the corner, however, and it might not require a jailbroken system. We might see the emergence of IoT ransomware as well; cybercriminals might demand high ransoms after taking over a smart TV or fridge.

How will I know if my computer gets infected with ransomware? Ransomware isn't subtle. It announces itself with a full-screen ransom note demanding payment; a blocker typically displays a screen that locks you out of the system entirely.

Which ransomware types are the most prevalent? New types of ransomware emerge every day, so it's hard to say which are the most popular.
We can enumerate several outstanding examples, such as Petya, which encrypts the entire hard drive. There is also CryptXXX, which is still powerful and which we have taken down twice. And, of course, TeslaCrypt was the most pervasive ransomware for the first four months of 2016; its creators, unexpectedly, were the ones to publish a master key.

If I get infected, how do I remove the ransomware? If you find your computer blocked (it won't load the operating system), use Kaspersky WindowsUnlocker, a free utility that can remove a blocker and get Windows to boot.

Cryptors are a harder nut to crack. First, you need to get rid of the malware by running an antivirus scan. If you don't have a proper antivirus on your computer, you can download a free trial version here. The next step is to get your files back. If you have a backup copy of your files, you can simply restore them from the backup; that is by far your best shot. If you haven't made backups, you can try to decrypt the files using special utilities called decryptors. All of the free decryptors created by Kaspersky can be found at Noransom.kaspersky.com. Other antivirus companies also develop decryptors. One caution: be very sure you're downloading these programs from a reputable website; otherwise you run a high risk of getting infected by some other malware. If you can't find the right decryptor, you can pay the ransom or say good-bye to your files. That said, we don't recommend paying the ransom.

Why not just pay the ransom? For starters, there is no guarantee you will get your files back; you cannot trust extortionists. One example of untrustworthy thieves is the makers of Ranscam, ransomware that didn't even bother with encrypting but simply deleted the files (although, of course, it promised decryption in exchange for money). According to our research, 20% of ransomware victims who paid never got their files back.

I found the ransomware decryptor I need; why doesn't it work?
Ransomware developers are quick to react when a new decryptor comes out, and they respond by modifying their malware to make it resilient to the available decryptor. It's a game of whac-a-mole. Unfortunately, decryptors do not come with guarantees.

If I spot a malicious process, is there something I can do to stop the ransomware infection?

In theory, if you catch it in time, you can turn off the PC, remove the hard drive, insert it into another computer, and use that computer's antivirus to disinfect it. However, in real life it's difficult or even impossible for a user to detect an infection; ransomware works quietly until the big reveal: the ransom note.

Is antivirus enough to avoid infection?

Yes, in the majority of cases. The antivirus solution you use matters, though. According to independent benchmarks by renowned labs (which are, in fact, the only benchmarks to trust), Kaspersky products offer better protection than the competition. However, no antivirus is 100% effective.

In many cases, automatic detection depends on how recent the malware is. If its signatures have not been added to antivirus databases, a Trojan can still be detected with behavioral analysis: if it attempts to inflict damage, it's blocked immediately. Our product includes a module called System Watcher; if it detects an attempt at massive file encryption, it blocks the malicious process and rolls back all changes. Please never disable this component.

If I back up my files regularly, am I safe?

Backing up your files is very helpful, without a doubt, but it is not a 100% guarantee. Here's one case: You set automatic backup on your spouse's computer to run every three days. A cryptor infiltrates the system, encrypting all documents, photos, and so forth — but your spouse doesn't grasp the gravity of the situation right away. So when you check in a week later, the backups are all encrypted, too. Backups are vitally important, but your defenses need to go further.
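The behavioral detection mentioned earlier (a module flagging an attempt at massive file encryption) can be illustrated with a toy sketch. This is our illustration only, not how System Watcher or any real product actually works: flag activity when file-write events in a short sliding window exceed a threshold, the telltale pattern of a cryptor encrypting files in bulk. All names and thresholds here are invented.

```python
from collections import deque

class MassWriteDetector:
    """Toy behavioral heuristic: too many file writes in a short time window."""

    def __init__(self, max_events=100, window_seconds=10.0):
        self.max_events = max_events
        self.window = window_seconds
        self.events = deque()  # timestamps of recent write events

    def record_write(self, timestamp):
        """Record one file-write event; return True if activity looks like mass encryption."""
        self.events.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events

# Normal use: a handful of writes spread over minutes is never flagged.
normal = MassWriteDetector(max_events=100, window_seconds=10.0)
assert not any(normal.record_write(t * 5.0) for t in range(50))

# Cryptor-like burst: hundreds of writes within half a second is flagged.
burst = MassWriteDetector(max_events=100, window_seconds=10.0)
flagged = any(burst.record_write(1000.0 + i * 0.001) for i in range(500))
assert flagged
```

A real product combines many such signals (process reputation, entropy of written data, honeypot files) and, crucially, can roll changes back; a rate heuristic alone would produce false positives on backups and compilers.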
Are there any settings I can tweak to strengthen defenses?

- First, do install an antivirus. But we have already told you that, haven't we?
- Make file extensions visible in Windows Explorer.
- Make Notepad the default application for VBS and JS files. Windows usually marks dangerous VBS and JS scripts as text files, which can mislead less-savvy users into opening them.
- Consider enabling Kaspersky Internet Security's Trusted Applications Mode, thus restricting installation of any nonwhitelisted programs. It is not enabled by default and requires some tweaking and setting up, but it's a very useful tool, especially for those who are not PC proficient and might let some sneaky malware get into the system.

Download Kaspersky Total Security to avoid ransomware attacks in the future.
C was originally developed by Dennis Ritchie at Bell Labs during the late 1960s and early 1970s. It emerged during the development of the Unix operating system which—like C—has survived the test of time, especially in its Linux incarnation.

C became immediately popular because it balanced power and usability. At the time of C's inception, low-level "close-to-the-metal" coding required tedious assembly language, while business processing typically used the verbose COBOL language, which was far slower and less capable of operating-system-level computation. C offered a higher-level abstraction than assembly language but still allowed direct manipulation of memory structures through pointers. C provided a huge boost in programmer productivity for system and scientific programming and for high-speed computing.

C++ emerged from the C language foundation, becoming one of the first widely used object-oriented programming languages. Although Java, C# and other similar "managed" languages have become fast enough to offer an alternative to C for high-speed computing, most operating systems and compilers remain written in C or C++.

However, C is not without blemishes. In James Iry's hilarious "history of computing," he wrote, "Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix." C, while powerful, empowered the programmer to create bugs that simply were not possible with higher-level languages. Most critically, C's pointer-based memory management system could create catastrophic failures as well as security issues such as buffer overflow attacks. And the C++ object-oriented idioms are considered generally inferior to those in Java or Python.

Therefore, it's not surprising that there have been attempts to create a C replacement language. The aptly named D programming language was first released in 2001, but did not achieve a 1.0 release until 2007.
It borrowed much of C++'s syntax but added support for functional programming, parallelism and concurrent programming. However, D has experienced relatively poor adoption.

Around 2007, a team of engineers at Google—including Ken Thompson, who worked on Unix and C with Ritchie—developed the Go programming language. Go is a C-like language with improvements for readability, productivity and performance, especially around parallelism and concurrency. Like Java and many dynamic languages, Go performs garbage collection, which can lead to some unpredictability in response time and possible problems for real-time applications. Nevertheless, Go is widely adopted inside of Google and externally. It is fast to execute and fast to compile, and it has been battle-hardened within Google.

Rust was developed around the same time at Mozilla. Like Go, Rust is syntactically a C-like language, but where Go emphasizes speed, Rust is oriented more toward memory safety and predictable performance. Rust performs no garbage collection and emphasizes manual memory management. Like Go, Rust is explicitly designed to support concurrent programming tasks. Dropbox recently rewrote part of their Go-based "Magic Pocket" system in Rust to reduce the memory footprint of key components.

Go and Rust don't represent the only leading edge of language development. However, most other emerging languages are JVM-based—such as Kotlin and Scala—or dynamic languages that optimize programmer productivity at the expense of ultimate runtime performance. Where runtime performance considerations are paramount, Go and Rust are emerging as valid successors to C.
Schools, colleges and universities are attractive targets to cybercriminals because of the amount of personal data they collect, process and store. This personal data can include residential addresses, social security numbers and bank account information. Colleges and universities, specifically, also have troves of detailed information about current research efforts that attackers may want to get their hands on.

To gain unauthorized access to data or money, cybercriminals frequently target students, parents and faculty by impersonating educational institutions and sending out fake emails that ask for login credentials, personal information or money transfers. These attackers rely on the authority of educational organizations to make recipients more likely to comply quickly. Successful phishing campaigns of this nature can lead to financial and reputational loss as well as serious legal action.

Enforcing a DMARC policy helps prevent your organization from being spoofed in phishing attacks, ensuring that students, parents and employees only see emails that authentically come from you. As an added bonus, DMARC increases email deliverability to enable streamlined communication.

DMARC (Domain-based Message Authentication, Reporting and Conformance) helps protect your organization by providing visibility into active threats using your domains, thus giving you the ability to stop them before they are delivered. DMARC is an email validation system used to protect an organization's email channel from spoofing, phishing scams and other email-borne attacks. Established by Google, Yahoo!, Microsoft and others in 2012, DMARC builds on the existing email authentication techniques SPF and DKIM to strengthen your domain's fortifications against fraudulent use. DMARC is the best way for email senders and receivers to determine if a given message is authentically from the sender and decide what to do if it is not.
It also helps improve your educational organization's email deliverability to the inbox, meaning you can reach more people more often.

The educational sector is not on its A game when it comes to protecting against spoofing. A study of the top 200 U.S. schools in the 2020 WSJ/THE College Rankings found that only six schools had DMARC deployed and set to block suspicious email — that's only 3%. Of those 200 colleges and universities, 58% did not have a DMARC record in place at all.

Cybercriminals have taken note and repeatedly exploited schools' vulnerabilities. In 2019, Oregon University lost personal information relating to over 600 students when an employee fell for a phishing scam. In the same year, a phishing scam tricked Wichita State University employees into handing over login credentials, which subsequently enabled cybercriminals to access other employees' banking information and steal money. Student loan scams are also increasingly common.

While colleges and universities are institutions of learning and academic advancement, they are also businesses that garner attention from cybercriminals for their valuable assets. Students, parents and faculty members look to schools as a place of authority and trust. To maintain that trust, it's crucial to secure all channels of communication. DMARC empowers your school, college or university to take control of its email domain while experiencing the following benefits:

- Online brand protection: Educational organizations are common targets for cybercriminals to impersonate for malicious purposes. DMARC protects your brand's integrity by keeping your organization out of their arsenal of easily spoof-able email domains.
- Increased email deliverability: By deploying DMARC authentication, you signal to email receivers that your organization's emails are legitimate, ensuring they're delivered to the inbox rather than blocked or sent to the spam folder.
- A published policy that instructs ISPs and other email receivers to deliver, quarantine or delete emails: With DMARC, you can decide if potential abuses of your email domain are solely reported back to you without further action, quarantined for further review or — the gold standard — automatically rejected.
- Greater visibility into cyber threats: DMARC's reporting capability enables you to monitor all authorized third parties that send emails on your behalf, alongside those that are not authorized. This helps ensure compliance with security best practices and aids investigations into email security or phishing issues.

Email is the backbone of professional and educational communication. Unfortunately, it's also the starting point for 95% of cyberattacks. Though cybersecurity technologies have made great advancements, it has been historically difficult to remedy an inherent security weakness that came with the democratization of email: anyone can create an email account and send emails under a false identity as well as a false domain.

Countless reputable educational institutions have been exploited by criminals to execute phishing and BEC attacks. Any association with criminal phishing campaigns can be devastating for a school — especially when it could have been prevented by enforcing stricter security standards like DMARC.

Now that you understand why DMARC is important for your educational organization, let's get started on how to ace DMARC deployment. To achieve maximum return on your DMARC investment, educational institutions must complete the necessary steps to correctly implement DMARC. Domain owners must kick off and manage a DMARC project, which includes discovering all of your owned domains, learning what legitimate services are sending email on your behalf, properly configuring those services from an SPF and DKIM perspective, and of course, publishing a DMARC record (try our DMARC Record Generator).
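As a concrete illustration, a published DMARC policy is simply a DNS TXT record at _dmarc.<your-domain> consisting of tag=value pairs. The sketch below is our illustrative helper, not part of any DMARC Analyzer product, and the example record and domain are hypothetical; it parses such a record and reads out the policy in effect.

```python
def parse_dmarc(record):
    """Parse a DMARC TXT record string into a dict of tag -> value."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")  # split on the first '=' only
        tags[key.strip()] = value.strip()
    return tags

# Hypothetical record for example.edu: reject failing mail outright and
# send aggregate reports to the postmaster.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.edu; pct=100"
policy = parse_dmarc(record)

assert policy["v"] == "DMARC1"
assert policy["p"] == "reject"           # the deliver/quarantine/reject decision
assert policy["rua"].startswith("mailto:")
```

The `p` tag carries the deliver ("none"), quarantine or reject decision discussed above; organizations typically start at `p=none` to gather reports and move to `p=reject` once legitimate senders are authenticated.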
DMARC Analyzer offers different levels of tailored services to help guide your organization through the process. Though DMARC is a key part of any cybersecurity program, it is not a set-it-and-forget-it standard that can be deployed, configured, activated, and then forgotten. After DMARC has been successfully implemented – set to "reject" for all of your domains – it's imperative that your organization establish a program of ongoing monitoring.

In 2020, Mimecast embarked on its own journey to use DMARC across all of our owned domains. The project was documented in a three-part blog series for other organizations to use as a resource.
Whatever threat develops, however, an essential first step in the defence will be to ensure that IP telephony networks are correctly configured and protected. Where this is the responsibility of in-house support staff, it's important they are properly trained and that their skills are kept up to date. Alternatively, it's essential to ensure the service supplier's staff are fully equipped to keep the network operating reliably and securely.

While much of the technology that underlies IP telephony may appear familiar to those who support data networks, and should benefit from the same sorts of protection, there are notable differences. For example, while those who manage data networks have traditionally seen their priority as maintaining integrity and confidentiality, those running phone systems have focused on delivering high levels of availability. Because factors such as delay are critically important to IP telephony services, those employed to manage and support such networks will need skills above and beyond those normally required for work on data networks.

Those used to managing and operating traditional phone systems will also find the change to IP telephony challenging. Their experience is of mature technologies supplied as 'black box' solutions connected together using well-established interface standards and network services. In contrast, while now well beyond the early adopter phase, VoIP is still evolving, and the design and integration of IP telephony systems is more complex.

To successfully design and operate an IP telephony system, you require an in-depth understanding not only of data communications and VoIP technologies but of the interactions between them. Problems can arise, for example, when new computing applications are added to a network without confirming their ability to coexist with IP telephony systems. The resulting interference can have a dramatic impact on system performance.
The most basic precaution is therefore to ensure that IP telephony systems and associated data networks are designed and maintained only by suitably qualified staff.

Beyond this, because the components of an IP telephony system make extensive use of computer hardware and software, they require the same sorts of protection as traditional computer installations. For example, viruses could exploit weaknesses in the underlying operating systems and in application programmes. Anti-virus solutions will therefore be required, but these must be designed so as not to introduce excessive delay as telephony packets move through the network. It is also important to keep them up to date.

Security sources should be monitored for details of new forms of attack, and support staff should register to receive security alerts and software updates directly from vendors whenever these are available. Customers that hold support contracts, for example, will usually be informed of any action they should take to protect their installations.
Many financial institutions use algorithms to make high-risk decisions involving large sums of money — perhaps the highest vote of confidence that any business can give. So what can we learn exactly?

copyright by www.forbes.com

Today, there is a lot of debate about the use of algorithms within business settings. Some feel that their use makes it easier for businesses to systematically shed employees; others feel that businesses put a lot of trust into what they consider to be a black box, unknowable and unaccountable. Yet all of this discussion seems to ignore the fact that one of the first industries to begin strategically using algorithms, the financial industry, is also one that is notoriously risk-averse when it comes to adopting new technologies. In fact, many financial institutions use algorithms to make high-risk decisions involving large sums of money — perhaps the highest vote of confidence that any business can give.

As the CEO of a company that uses deep learning to create customized marketing algorithms for players in all industries, including finance, I have seen firsthand the impact that an algorithm can have when helping businesses reach their goals and expand their growth. It is no exaggeration to say that algorithms have been a game-changer for the financial industry. They've not only made it easier for big institutions to make money, but they've also made it possible for individual players to make a name for themselves. Their successes are a testament to the power of machine learning, and they are an example for other industries looking to reinvigorate themselves.

How Algorithms Have Transformed the Financial Industry

The first quantitative hedge funds appeared on the scene in the 1980s, and their influence has only grown since then. One of the most common ways for banks to use algorithms involves setting the parameters for a trade to occur.
Traders can create an algorithm that directs the system to purchase a stock when it reaches a certain price or sell if it falls by a certain percentage. While these algorithms are not always powered by machine learning, they are relatively common within the trading world, and they can even be used by those looking to invest in stocks on their own.

More sophisticated versions of these algorithms can incorporate machine learning to take into account any factors that might affect a stock's price, from world events to changing trends. For instance, last year, JP Morgan launched what it calls its "Deep Neural Network for Algo Execution." It's a neural network that combines its existing foreign exchange algorithms into one highly optimized bundle.

While financial institutions use machine learning for its data analysis capabilities to identify potential opportunities in the market, they still often leave it up to humans to choose which opportunities to ultimately pursue. But as the Economist points out, there is also another way that machine learning can be used: to design new investment strategies from scratch, without having to take into account human preferences or prejudices. It used to be that humans would use algorithms to test an existing hypothesis; now, as one investor says, "We start with the data and look for a hypothesis."

In the financial markets as they exist today, a significant percentage of assets are either being traded by computers without any human input or managed by them. As reported by the Economist, Deutsche Bank estimates that 80% of cash-equity trades and 90% of equity-futures trades are carried out by algorithms. This is astonishing if you think about the sheer volume of trades that are carried out each day. In other words, you would be hard-pressed to find a financial institution that does not use algorithms in some way that could directly impact how much money the business makes on behalf of itself and its clients.[…]
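The threshold-style rule described earlier (buy when a stock reaches a target price, sell if it falls by a set percentage from the entry) can be sketched in a few lines. This toy backtest is illustrative only, not any institution's actual algorithm; the function, prices and thresholds are all invented.

```python
def run_rule(prices, buy_at, stop_loss_pct):
    """Buy when price falls to buy_at; sell if it then drops stop_loss_pct from entry."""
    position = None  # entry price while holding, None while flat
    trades = []
    for price in prices:
        if position is None:
            if price <= buy_at:
                position = price  # entry trigger: target price reached
        elif price <= position * (1 - stop_loss_pct / 100):
            trades.append((position, price))  # stop-loss sale
            position = None
    return trades, position

# Invented price path: falls to the buy trigger at 100, then breaches the 5% stop.
prices = [105, 102, 100, 98, 94, 101]
trades, open_pos = run_rule(prices, buy_at=100, stop_loss_pct=5)

assert trades == [(100, 94)]  # bought at 100, stopped out at 94
assert open_pos is None       # flat at the end of the series
```

The machine-learning variants mentioned above differ mainly in where `buy_at` and `stop_loss_pct` come from: instead of being fixed by a trader, they are produced by models conditioned on market data.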
Formal Risk Analysis Structures: OCTAVE and FAIR

Within the industrial environment, there are a number of standards, guidelines, and best practices available to help understand risk and how to mitigate it. IEC 62443 is the most commonly used standard globally across industrial verticals. It consists of a number of parts, including 62443-3-2 for risk assessments and 62443-3-3 for the foundational requirements used to secure the industrial environment from a networking and communications perspective. Also, ISO 27001 is widely used for organizational people, process, and information security management. In addition, the National Institute of Standards and Technology (NIST) provides a series of documents for critical infrastructure, such as the NIST Cybersecurity Framework (CSF). In the utilities domain, the North American Electric Reliability Corporation's (NERC's) Critical Infrastructure Protection (CIP) has legally binding guidelines for North American utilities, and IEC 62351 is the cybersecurity standard for power utilities.

The key for any industrial environment is that it needs to address security holistically and not just focus on technology. It must include people and processes, and it should include all the vendor ecosystem components that make up a control system.

In this section, we present a brief review of two such risk assessment frameworks:

- OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation) from the Software Engineering Institute at Carnegie Mellon University
- FAIR (Factor Analysis of Information Risk) from The Open Group

These two systems work toward establishing a more secure environment but with two different approaches and sets of priorities. Knowledge of the environment is key to determining security risks and plays a key role in driving priorities.

OCTAVE has undergone multiple iterations. The version this section focuses on is OCTAVE Allegro, which is intended to be a lightweight and less burdensome process to implement.
Allegro assumes that a robust security team is not on standby or immediately at the ready to initiate a comprehensive security review. This approach and the assumptions it makes are quite appropriate, given that many operational technology areas are similarly lacking in security-focused human assets. Figure 8-5 illustrates the OCTAVE Allegro steps and phases.

Figure 8-5 OCTAVE Allegro Steps and Phases (see https://blog.compass-security.com/2013/04/lean-risk-assessment-based-on-octave-allegro/).

The first step of the OCTAVE Allegro methodology is to establish risk measurement criteria. OCTAVE provides a fairly simple means of doing this, with an emphasis on impact, value, and measurement. The point of having risk measurement criteria is that at any point in the later stages, prioritization can take place against the reference model. (While OCTAVE has more details to contribute, we suggest using the FAIR model, described next, for risk assessment.)

The second step is to develop an information asset profile. This profile is populated with assets, a prioritization of assets, and attributes associated with each asset, including owners, custodians, people, explicit security requirements, and technology assets. It is important to stress the importance of process here: the need to protect information does not disappear, but operational safety and continuity are more critical.

Within this asset profile process there are multiple substages that complete the definition of the assets. Some of these are simply survey and reporting activities, such as identifying the asset and the attributes associated with it: its owners, custodians, the human actors with which it interacts, and the composition of its technology assets. There are, however, judgment-based attributes such as prioritization. Rather than simply assigning an arbitrary ranking, the system calls for a justification of the prioritization.
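The asset profile that the second step produces is easy to picture as a structured record. The sketch below is a hypothetical rendering (the field names are ours, not OCTAVE's normative worksheet schema) showing how a profile with a justified prioritization might be captured:

```python
from dataclasses import dataclass

@dataclass
class AssetProfile:
    """Minimal information-asset profile in the spirit of OCTAVE Allegro step 2."""
    name: str
    owner: str
    custodians: list
    security_requirements: list
    technology_assets: list
    priority: int = 3            # 1 = most critical
    priority_rationale: str = "" # OCTAVE asks for a justification, not just a rank

historian = AssetProfile(
    name="Plant process historian",
    owner="Operations manager",
    custodians=["OT support team"],
    security_requirements=["availability", "integrity"],
    technology_assets=["historian server", "plant LAN segment"],
    priority=1,
    priority_rationale="Loss of historian data halts regulatory reporting "
                       "and blinds operators to process trends.",
)

def is_complete(profile):
    """Treat a profile whose ranking has no justification as incomplete."""
    return bool(profile.priority_rationale)

assert is_complete(historian)
```

Making the rationale a required field is one way to enforce the methodology's rule that a ranking must be justified rather than merely assigned.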
With an understanding of the asset attributes, particularly the technical components, appropriate threat mitigation methods can be applied. With the application of risk assessment, the level of security investment can be aligned with that individual asset.

The third step is to identify information asset containers. Roughly speaking, this is the range of transports and possible locations where the information might reside. This references the compute elements and the networks by which they communicate. However, it can also mean physical manifestations such as hard copy documents or even the people who know the information. Note that the operable target here is information, which includes data from which the information is derived.

In OCTAVE, the emphasis is on the container level rather than the asset level. The value is to reduce potential inhibitors within the container for information operation. In the OT world, the emphasis is on reducing potential inhibitors in the containerized operational space. If there is some attribute of the information that is endemic to it, then the entire container operates with that attribute because the information is the defining element. In some cases this may not be true, even in IT environments. Discrete atomic-level data may become actionable information only if it is seen in the context of the rest of the data. Similarly, operational data taken without knowledge of the rest of the elements may not be of particular value either.

The fourth step is to identify areas of concern. At this point, we depart from a data flow, touch, and attribute focus to one where judgments are made through a mapping of security-related attributes to more business-focused use cases. At this stage, the analyst looks to risk profiles and delves into the previously mentioned risk analysis. It is no longer just facts, but there is also an element of creativity that can factor into the evaluation.
History both within and outside the organization can contribute. References to similar operational use cases and incidents of security failures are reasonable associations.

Closely related is the fifth step, where threat scenarios are identified. Threats are broadly (and properly) identified as potential undesirable events. This definition means that results from both malevolent and accidental causes are viable threats. In the context of operational focus, this is a valuable consideration. It is at this point that an explicit identification of actors, motives, and outcomes occurs. These scenarios are described in threat trees to trace the path to undesired outcomes, which, in turn, can be associated with risk metrics.

At the sixth step, risks are identified. Within OCTAVE, risk is the possibility of an undesired outcome. This is extended to focus on how the organization is impacted. For more focused analysis, this can be localized, but the potential impact to the organization could extend outside the boundaries of the operation.

The seventh step is risk analysis, with the effort placed on qualitative evaluation of the impacts of the risk. Here the risk measurement criteria defined in the first step are explicitly brought into the process.

Finally, mitigation is applied at the eighth step. There are three outputs or decisions to be taken at this stage. One may be to accept a risk and do nothing, other than document the situation, potential outcomes, and reasons for accepting the risk. The second is to mitigate the risk with whatever control effort is required. By walking back through the threat scenarios to asset profiles, a pairing of compensating controls to mitigate those threat/risk pairings should be discoverable and then implemented. The final possible action is to defer a decision, meaning risk is neither accepted nor mitigated. This may imply further research or activity, but it is not required by the process.

OCTAVE is a balanced information-focused process.
What it offers in terms of discipline and largely unconstrained breadth, however, is offset by its lack of security specificity. There is an assumption that beyond these steps there are means of identifying specific mitigations that can be mapped to the threats and risks exposed during the analysis process.

FAIR (Factor Analysis of Information Risk) is a technical standard for risk definition from The Open Group. While information security is the focus, much as it is for OCTAVE, FAIR has clear applications within operational technology. Like OCTAVE, it also allows for non-malicious actors as a potential cause for harm, but it goes to greater lengths to emphasize the point. For many operational groups, it is a welcome acknowledgement of existing contingency planning.

Unlike with OCTAVE, there is a significant emphasis on naming, with risk taxonomy definition as a very specific target. FAIR places emphasis on both unambiguous definitions and the idea that risk and associated attributes are measurable. Measurable, quantifiable metrics are a key area of emphasis, which should lend itself well to an operational world with a richness of operational data.

At its base, FAIR defines risk as the probable frequency and probable magnitude of loss. With this definition, a clear hierarchy of sub-elements emerges, with one side of the taxonomy focused on frequency and the other on magnitude.

Loss event frequency is the result of a threat agent acting on an asset with a resulting loss to the organization. This happens with a given frequency called the threat event frequency (TEF), in which a specified time window becomes a probability. There are multiple sub-attributes that define frequency of events, all of which can be understood with some form of measurable metric. Threat event frequencies are applied to a vulnerability.
Vulnerability here is not necessarily some compute asset weakness, but is more broadly defined as the probability that the targeted asset will fail as a result of the actions applied. There are further sub-attributes here as well.

The other side of the risk taxonomy is the probable loss magnitude (PLM), which begins to quantify the impacts, with the emphasis again being on measurable metrics. The FAIR specification makes it a point to emphasize how ephemeral some of these cost estimates can be, and this may indeed be the case when information security is the target of the discussion. Fortunately for the OT operator, a significant emphasis on operational efficiency and analysis makes understanding and quantifying costs much easier.

FAIR defines six forms of loss, four of them externally focused and two internally focused. Of particular value for operational teams are productivity and replacement loss. Response loss is also reasonably measured, with fines and judgments easy to measure but difficult to predict. Finally, competitive advantage and reputation are the least measurable.
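FAIR's emphasis on measurable quantities lends itself to a simple numeric sketch. In the toy model below (our illustration, not a normative FAIR calculation), loss event frequency is the threat event frequency scaled by vulnerability, and the annualized loss exposure multiplies that by the probable loss magnitude per event; all the figures are invented.

```python
def loss_event_frequency(tef_per_year, vulnerability):
    """LEF = TEF x vulnerability (probability the asset fails under the action)."""
    return tef_per_year * vulnerability

def annualized_loss_exposure(lef_per_year, plm):
    """Annualized exposure = LEF x probable loss magnitude per event."""
    return lef_per_year * plm

# Invented example: 12 threat events per year against a controller, a 25%
# chance each one succeeds, and an average $40,000 loss per successful event.
lef = loss_event_frequency(tef_per_year=12, vulnerability=0.25)
ale = annualized_loss_exposure(lef, plm=40_000)

assert lef == 3.0        # three expected loss events per year
assert ale == 120_000    # $120,000 of expected annual loss
```

In practice FAIR works with calibrated ranges and distributions rather than the point estimates used here, but the multiplicative structure of the taxonomy is the same.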
With the development of wavelength-division multiplexing (WDM) technology, network traffic volume is increasing and the demand for more network bandwidth is also on the rise. By converting the operating wavelength of an incoming bitstream to an ITU-compliant wavelength, the WDM transponder serves as a key component in a WDM system. As an important technology in fiber optical networks, WDM is moving beyond transport to become the basis of all-optical networking, and how to optimize WDM networks has always been a hot topic. The transponder is a device that optimizes the performance of a WDM network and plays an important role in the whole system. This article introduces WDM transponders. Also called an OEO (optical-electrical-optical) transponder, a WDM transponder unit is an optical-electrical-optical wavelength converter that has been widely adopted in a variety of networks and applications. The picture below shows how a bidirectional transponder works. In this picture, the transponder is located between a client device and a DWDM system. We can see clearly that, from left to right, the transponder receives an optical bitstream operating at one particular wavelength (1310 nm), converts the operating wavelength of the incoming bitstream to an ITU-compliant wavelength, and transmits its output into a DWDM system. On the receive side (right to left), the process is reversed: the transponder receives an ITU-compliant bitstream and converts the signals back to the wavelength used by the client device. According to its function, the application of WDM transponders can be classified into the following types.
- Wavelength Conversion. It is known that when a CWDM Mux/Demux or DWDM Mux/Demux is added to a WDM network, there is a requirement to convert optical wavelengths like 850 nm, 1310 nm and 1550 nm to CWDM or DWDM wavelengths. Then the OEO transponder comes to assist.
The OEO transponder receives, amplifies and re-transmits the signal on a different wavelength without changing the signal content.
- Fiber Mode Conversion. Multimode fiber optic cables (MMF) are often used for short-distance transmission, while single-mode fiber optic cables (SMF) are applied in long-haul optical transmission. Therefore, in some network deployments, considering the transmission distances, MMF-to-SMF or SMF-to-MMF conversions are needed. WDM transponders can convert both multimode fiber to single-mode fiber and dual fiber to single fiber.
- Signal Repeating. In long-haul fiber optic transmission, WDM transponders can also work as repeaters to extend network distance by converting wavelengths (1310 nm to 1550 nm) and amplifying optical power. The OEO converters convert the weak optical signals from the fiber into electrical signals, regenerate or amplify them, and then recover them into strong optical signals for continuous transmission.
At FS, OEO transponders are made into small plug-in cards to be used on the FMT platform. The FMT platform makes devices like EDFA, OEO, DCM, OLP and VOA into plug-in cards and provides standard rack units as well as free software to achieve better management and monitoring. In addition, FMT series products like OEO, DCM and OLP also offer higher performance than their older counterparts. The FMT series OEO transponder can convert optical signals into DWDM wavelengths, reducing the fault risk caused by the high power consumption of DWDM fiber optic transceivers. Since the OEO transponder is made into a small plug-in card in the FMT platform, it occupies only one slot in the specially designed chassis when installed, thus saving a lot of space. In addition, all these FMT plug-in cards, including the OEO, in a rack unit share the same power source and support hot plug & play operation. They can be inserted or removed flexibly in the racks for DWDM networking.
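The "ITU-compliant wavelength" a DWDM transponder emits comes from the frequency grid defined in ITU-T G.694.1, which anchors channels at 193.1 THz with, for example, 100 GHz spacing. A short sketch of the grid arithmetic (the channel-offset convention used here is illustrative):

```python
# ITU-T G.694.1 DWDM grid: f(n) = 193.1 THz + n * 0.1 THz (100 GHz spacing).
C_NM_THZ = 299_792.458  # speed of light expressed in nm * THz

def channel_frequency_thz(n: int, spacing_thz: float = 0.1) -> float:
    """Grid frequency for a channel offset n from the 193.1 THz anchor."""
    return 193.1 + n * spacing_thz

def frequency_to_wavelength_nm(f_thz: float) -> float:
    """Convert an optical frequency to its vacuum wavelength."""
    return C_NM_THZ / f_thz

# A 1310 nm client signal re-emitted on the grid channel at n = 0:
f = channel_frequency_thz(0)
print(round(f, 1))                              # 193.1
print(round(frequency_to_wavelength_nm(f), 2))  # 1552.52
```

This is the conversion a transponder performs in hardware: whatever the client wavelength (850, 1310 or 1550 nm), the output lands on one of these tightly spaced grid wavelengths so it can be multiplexed with its neighbors.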
Since the OEO or WDM transponder plays such an important role in a WDM network, receiving, amplifying and re-transmitting the signal on a different wavelength, adding one to the network is essential. The OEO transponders in our FMT series are made into small, high-quality plug-in cards to ensure good transmission performance. For more information on our FMT system, please visit www.fs.com. Related Article: The Versatile Fiber Optic Transponder (OEO) in WDM System
Website security begins with a confirmed identity for the website owner to prevent phishing attacks. Without it, online users are at a major disadvantage against identity fraudsters running fake domain-validated phishing sites that imitate high-value sites to steal passwords and credit card numbers. The genesis of the London Protocol, an initiative to improve identity assurance and minimize the possibility of phishing activity, rests on data presented by multiple sources indicating that anonymous domain validation SSL/TLS certificates are the principal reason for a recent rise in phishing attacks, along with our collective interest in preserving secure Internet transactions to protect both organizations and the users who transact with them. The London Protocol's primary focus is to improve identity assurance and minimize the possibility of phishing activity on websites encrypted with organization validated (OV) and extended validation (EV) certificates, which contain verified organization identity information (Identity Certificates) to tell users they will be safer at those sites. We chose the name "London Protocol" because we officially announced the agreement at the most recent face-to-face meeting of the Certificate Authority Security Council/Browser Forum in London last month. The genesis of our action stemmed from a report from HashedOut noting that "between January 1st, 2016 and March 6th, 2017, the Let's Encrypt certificate authority issued a total of 15,270 SSL certificates containing the word 'PayPal.'" These Let's Encrypt certificates were issued to bad actors who used the name "PayPal" in their domains to trick online users into sending their personal data, in other words, to commit identity theft. The certificates issued by Let's Encrypt are solely domain-validated certificates, which means that they can be issued to anonymous websites because issuance is 100% automated.
Identity Certificates: A Brief History
Back in 2001, only OV identity certificates were used to secure websites. For most CAs, obtaining an OV certificate was a detailed process that could take time to complete. At the time, we needed a different kind of certificate for organizations that needed to get certificates faster for encrypted communications on less sensitive websites, which is why I was one of the inventors of Domain Validated (DV) certificates. The intention was to create a digital certificate that could be validated quickly where proof of website ownership was not as important for user security, such as blogs and information pages. We figured that limiting validation steps for DV certificates to proof of domain ownership would be sufficient because it would prevent fraudsters from getting certificates for domains they didn't own. Unfortunately, DV certificates are now being used in a way that was never intended, leading to a surge in phishing attacks on fake websites encrypted with DV certificates. Encryption assures that sensitive data is safely communicated to the domain owner. However, the absence of a confirmed organization identity means the data can be transmitted safely to a bad actor trying to steal user information. To make websites even safer for users, I then joined a small group of co-inventors of the Extended Validation, or EV, certificate. EV certificates are issued only after a thorough and strict vetting procedure that follows standardized guidelines binding on all CAs. The EV certificates developed by the CA/Browser Forum are displayed in the browser address bar to confirm website identity, tell users who's behind the site, and offer potential recourse for any bad actions. We tested our hypothesis that users are safer at OV and EV sites by collaborating with ComodoCA, recognized as one of the leaders in DV certificate issuance worldwide.
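The practical difference between the three certificate types shows up in the certificate Subject. As a rough illustration (this heuristic is a sketch only; relying parties actually distinguish certificate types by the policy OIDs the CA embeds, and the attribute names follow common X.509 conventions): a DV subject identifies only a domain, an OV subject names a verified organization, and an EV subject adds vetted incorporation details.

```python
# Heuristic classification of a parsed X.509 subject (illustrative only;
# authoritative classification uses certificate policy OIDs, not fields).

def classify_certificate(subject: dict) -> str:
    # EV subjects carry extra vetted attributes such as the business
    # category and the jurisdiction of incorporation.
    if "businessCategory" in subject and "jurisdictionCountryName" in subject:
        return "EV"
    # OV subjects name the verified organization behind the domain.
    if "organizationName" in subject:
        return "OV"
    # DV subjects identify nothing beyond the domain itself.
    return "DV"

dv = {"commonName": "paypal-secure-login.example"}
ov = {"commonName": "www.example.com", "organizationName": "Example Corp",
      "countryName": "US"}
ev = {**ov, "businessCategory": "Private Organization",
      "jurisdictionCountryName": "US", "serialNumber": "1234567"}

print(classify_certificate(dv))  # DV
print(classify_certificate(ov))  # OV
print(classify_certificate(ev))  # EV
```

The DV example shows exactly the problem described above: the subject can contain a deceptive brand name in the domain, and nothing else in the certificate identifies who actually controls it.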
Our research paper, "The Relative Incidence of Phishing among DV, OV and EV Encrypted Websites," shows that over 99.5% of encrypted websites with phishing content use DV certificates, while there is almost no phishing associated with OV and EV websites. The data confirms our hypothesis that OV and EV certificates are safer for users than DV. But as safe as OV and EV websites are today, we want to make them even safer. This brings us to the London Protocol, under which five CAs from the CA Security Council are cooperating to improve identity assurance and minimize the possibility of phishing activity on identity websites. Each participating CA will work with its OV and EV customers to help them remove any phishing content on their websites to make identity websites even safer for users. This effort will help to counter the surge of DV phishing attacks across major brands and let users feel safer when visiting OV and EV sites. Read more about the London Protocol's phased approach and hear from the other member certificate authorities.
- Cracking 2FA: How It's Done and How to Stay Safe
- Microsoft Identity Bounty Program Pays $500 to $100,000 for Bugs
- The Good News about Cross-Domain Identity Management
- Facebook Must Patch 2 Billion Human Vulnerabilities; How You Can Patch Yours
There is currently a big debate raging about whether artificial intelligence (AI) is a good or bad thing in terms of its impact on human life. With more and more enterprises using AI for their needs, it's time to analyze the possible impacts of implementing AI in the cyber security field.
The positive uses of AI for cyber security
Biometric logins are increasingly being used to create secure logins by scanning fingerprints, retinas, or palm prints. They can be used alone or in conjunction with a password and are already found in most new smartphones. Large companies have been the victims of security breaches that compromised email addresses, personal information, and passwords. Cyber security experts have reiterated on multiple occasions that passwords are extremely vulnerable to cyber attacks, compromising personal information, credit card information, and social security numbers. These are all reasons why biometric logins are a positive AI contribution to cyber security. AI can also be used to detect threats and other potentially malicious activities. Conventional systems simply cannot keep up with the sheer number of malware variants created every month, so this is a promising area for AI to step in and address the problem. Cyber security companies are teaching AI systems to detect viruses and malware by using complex algorithms so that AI can run pattern recognition over software. AI systems can be trained to identify even the smallest behaviors of ransomware and malware attacks before they enter the system, and then isolate them from that system. They can also use predictive functions that surpass the speed of traditional approaches. Systems that run on AI unlock the potential of natural language processing, which collects information automatically by combing through articles, news, and studies on cyber threats. This information can give insight into anomalies, cyber attacks, and prevention strategies.
This allows cyber security firms to stay updated on the latest risks and their time frames and to build responsive strategies that keep organizations protected. AI systems can also be used for multi-factor authentication, providing dynamic access control. Different users of a company have different levels of authentication privileges, which also depend on the location from which they access the data. When AI is used, the authentication framework can be much more dynamic and real-time, modifying access privileges based on the user's network and location. Multi-factor authentication collects user information to understand the behavior of the person and make a determination about their access privileges. To use AI to its fullest capabilities, it's important that it is implemented by cyber security firms familiar with its functioning. Whereas in the past malware attacks could occur without leaving any indication of which weakness they exploited, AI can step in to protect cyber security firms and their clients from attacks even when multiple skilled attacks are occurring.
Drawbacks and limitations of using AI for cyber security
The benefits outlined above are just a fraction of AI's potential in cyber security, but there are also limitations preventing AI from becoming a mainstream tool in the field. In order to build and maintain an AI system, companies require an immense amount of resources, including memory, data, and computing power. Additionally, because AI systems are trained on learning data sets, cyber security firms need to get their hands on many different data sets of malware code, non-malicious code, and anomalies. Obtaining all of these accurate data sets can take time and resources that some companies cannot afford.
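The dynamic, location-aware authentication described above is commonly implemented as risk scoring: contextual signals adjust a score, and the score decides whether to grant access, demand an extra factor, or deny. A minimal sketch, with signals, weights, and thresholds invented purely for illustration:

```python
# Toy risk-based authentication: contextual signals raise a risk score,
# and the score selects the authentication requirement. All weights and
# thresholds here are illustrative, not taken from any real product.

def assess_login(known_device: bool, usual_location: bool,
                 usual_hours: bool) -> str:
    risk = 0
    risk += 0 if known_device else 2
    risk += 0 if usual_location else 2
    risk += 0 if usual_hours else 1

    if risk == 0:
        return "allow"        # familiar context: password alone suffices
    if risk <= 3:
        return "require-mfa"  # unusual context: demand a second factor
    return "deny"             # too many anomalies: block and alert

print(assess_login(True, True, True))     # allow
print(assess_login(True, False, True))    # require-mfa
print(assess_login(False, False, False))  # deny
```

A machine-learning version replaces the hand-set weights with a model trained on the user's historical behavior, which is what makes the framework adaptive rather than static.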
Another drawback is that hackers can also use AI themselves to test their malware and improve it until it potentially becomes AI-proof. In fact, AI-proof malware can be extremely destructive, as it can learn from existing AI tools and develop more advanced attacks able to penetrate traditional cyber security programs or even AI-boosted systems.
Solutions to AI limitations
Knowing these limitations and drawbacks, it's obvious that AI is a long way from becoming the only cyber security solution. The best approach in the meantime is to combine traditional techniques with AI tools, so organizations should keep these solutions in mind when developing their cyber security strategy:
- Employ a cyber security firm with professionals who have experience and skills in many different facets of cyber security.
- Have your cyber security team test your systems and networks for any potential gaps and fix them immediately.
- Use URL filters to block malicious links that potentially carry a virus or malware.
- Install firewalls and malware scanners to protect your systems, and keep them updated to match redesigned malware.
- Monitor your outgoing traffic and apply exit filters to restrict this type of traffic.
- Constantly review the latest cyber threats and security protocols to learn which risks you should manage first, and develop your security protocol accordingly.
- Perform regular audits of both hardware and software to make sure your systems are healthy and working.
Following these steps can help mitigate many of the risks associated with cyber attacks, but it's important to know that your organization is still at risk of an attack. Because of this, prevention is not enough, and you should also work with your cyber security team to develop a recovery strategy. As the potential of AI is being explored to boost the cyber security profile of a corporation, it is also being developed by hackers.
Since AI is still being developed and its full potential is far from being reached, we cannot yet know whether it will one day prove helpful or detrimental to cyber security. In the meantime, it's important that organizations do as much as they can with a mix of traditional methods and AI to stay on top of their cyber security strategy.
Students will have the chance for their Raspberry Pi computers to be used in space, as British European Space Agency astronaut Tim Peake will be taking two to the International Space Station (ISS) on his next mission. Open to all primary and secondary schools in the UK, the Astro Pi competition will see students devise and code their own apps and experiments for use in space. The best two ideas will be sent to the ISS on Peake's next six-month mission, with both being connected to an Astro Pi board. During the mission, Peake will deploy the Astro Pi computers onboard the ISS, collect the data generated and download it to Earth, where the winning teams will receive it. Students have five themes to base their ideas on: spacecraft sensors, satellite imaging, space measurements, data fusion and space radiation. The competition will be supported by teaching resources developed by the UK's European Space Education Resource Office and Raspberry Pi. "I'm really excited about this project, born out of the co-operation among UK industries and institutions," said Peake. "There is huge scope for fun science and useful data gathering using the Astro Pi sensors on board the ISS. This competition offers a unique chance for young people to learn core computing skills that will be extremely useful in their future. It's going to be a lot of fun!" For primary schools, teams will be asked to think of an original idea for an experiment or application which can be conducted on the Astro Pi during the mission. The two best teams will work with the Raspberry Pi Foundation to code them ready for flight.
For secondary school teams, there are three age categories – each of Key Stages 3, 4 and 5 in England and Wales, and their equivalent ages in Scotland and Northern Ireland. Phase one of the competition will enable students to submit their experiment and application ideas, with 50 submissions winning a Raspberry Pi computer and an Astro Pi board to code their idea. Phase two will require students to code their idea, with two winning teams selected in each category. The winning teams’ code will be readied for flight by the Raspberry Pi Foundation and CGI. Speaking at the launch of the competition, UK Space Agency chief executive David Parker also revealed the agency had been given a £2m programme to support further outreach activities around the mission, as part of the chancellor of the exchequer's recent Autumn Statement. According to business secretary Vince Cable, not enough people are being trained in the field of big data, despite so much technology relying on it. "This challenge helps the next generation to have fun while learning the skills that industry need," he said. “Creating tomorrow’s engineers is part of our industrial strategy that gives a long-term commitment to world-class skills.” In addition to the main prizes, a prize has also been offered by each of the UK space companies supporting the project.
Transportation of the future: Hardt Hyperloop
A revolutionary transportation network that will convey large volumes of passengers and cargo throughout Europe and the UK. In the hyperloop, vehicles will move through tubes which create the ideal conditions for low-energy travel while protecting the vehicles from the environment. The next phase in Hardt Hyperloop's development is public adoption, and Hardt was looking for a way to make the experience tangible. Hardt called on Accenture so they could work together to create the Hyperloop Cabin Experience, a blend of a physical model of the interior of the hyperloop cabin and an augmented reality module that allows the user to experience travel in the hyperloop. The Hyperloop Network will connect cities, countries and even continents in a relatively short period of time. CABIN-1 is a 3-meter segment of a hyperloop vehicle with a finished interior. Accenture created both a Microsoft HoloLens 2 experience and an iPad Pro app to virtually transform the CABIN-1 model into a full-size vehicle, providing an immersive experience of what traveling by hyperloop will be like. The iPad experience is a supporting tool for the physical cabin experience: the participant simply points the iPad camera at CABIN-1, and the app allows them to physically walk inside the hyperloop in true-to-life detail. The HoloLens experience, a self-guided 3-minute adventure, lets the passenger take a seat in CABIN-1. Once seated, they become a "Hypernaut" and experience a journey into the not-so-distant future of the European hyperloop transportation network. In the virtual experience, the passenger travels on the first planned hyperloop route from Amsterdam, via Eindhoven and Dusseldorf, to Berlin, taking just over an hour. The passenger can compare the Hardt Hyperloop with other means of transport and understand how much faster, more energy-efficient and more comfortable the hyperloop will be compared to trains, cars, and planes.
Valuable insights on the future of long-range travel
Through the iPad experience and the HoloLens experience, Accenture has given passengers valuable insights into the future of long-range travel. Together with Accenture, Hardt has created experiences that preview a world where distance does not matter, a world where passengers can connect with who and what they care about with ease in a zero-emission transportation revolution. It is intended to be a reality by 2028. The implementation of a pan-European hyperloop network would allow people to live how and where they want, with a maximum commute of 60 minutes. HoloLens conveys the heightened value of hyperloop travel to passengers. When safely possible, CABIN-1 will travel to events and public transport hubs.
As the demand for experienced cyber security workers increases, our national security decreases. Some analysts believe that by as early as next year there could be a global shortage of cybersecurity professionals. With cyber attacks becoming an increasing threat, enlisting and training a new generation of well-versed cyber security experts, as well as training current workers in the field, will help restore the confidence not only of those who use the internet every day but also of those who are just starting to experience the world wide web. Rebuilding our defenses online is the first step toward a stronger, more confident nation. Some of you may be wondering what you can do to help, and the answer is simple! The National Cyber Security Alliance and the U.S. Department of Homeland Security both urge parents, teachers, and employers to motivate potential talent to pursue a career in cyber security. A cyber security professional needs an understanding that goes deeper than just math and technology. They need to be curious, passionate about learning, have a strong ethic and moral compass, and be aware of the risks that come with the job. While all these ideas play an important role, at the end of the day, a profession in cyber security means having a passion for keeping our online world safer and more secure for all. To those who are worried about a boring job, fret not, for you will be at the front lines. While cyber security experts work behind the scenes, the roles they play impact our digital lives in big ways. Cyber security experts tackle catastrophic issues before they can detonate into massive problems for the internet. This profession is dedicated to protecting those online, keeping them safer and more secure from any threats they may face. A profession in cyber security builds important team-based skills and provides an environment in which to continue to learn and improve.
If you believe your student or child to be interested in cyber security, there are steps you can take to help them!
- Volunteer at school or set up community workshops that help teach children and adults about online safety and careers in cyber security.
- Expose students or your children to the opportunities in the field by hosting an open house at your company to talk about what your cyber security department does.
- Inspire children to learn about cyber security by mentoring a team in a cyber challenge or hosting events and after-school programs.
- Work with schools or community organizations to create an internship program for hands-on learning.
- For parents, become knowledgeable about the educational steps to a career in cyber security and about organizations that host cyber security events.
For those in college searching for a job in cyber security, get credentials: four out of five cyber security jobs require a college degree. Do volunteer work and internships so that you can become more experienced in the field itself. Offer help to your IT professors at college or to your employer to gain more experience. Read about the latest advancements and breaches regarding cyber security, paying attention to how these breaches occur and how they were fixed. If you are interested but not sure whether cyber security is right for you, take a look at the National Initiative for Cybersecurity Careers and Studies (NICCS). NICCS has career resources for learning more about jobs in the field, as well as guides for joining a cyber security team! At Hammett Technologies we make your online security a top priority. We treat your network as our own, with regular maintenance and updates to keep your company's data secure. Be with a team you can trust; become a Hammett Technologies Partner today! Still curious as to what we can do to help your company grow? Click here to find out more!
RAMpage is a recent vulnerability that bypasses all the security measures put in place to secure Android devices against the Rowhammer attack. It can gain root privileges on an Android device and steal any data available on it. To understand the RAMpage attack, we first need to understand the dynamic random access memory (DRAM) Rowhammer vulnerability and the Drammer attack.
DRAM Rowhammer vulnerability
The Rowhammer vulnerability first made waves in 2012 and involves a problem with recent DRAM chips. When a row of memory is rapidly and continuously accessed, or "hammered," adjacent rows of memory can experience bit flips: attackers can flip bits from 0 to 1, or vice versa, to exploit this behavior. Researchers from Google have demonstrated how effectively this vulnerability can be exploited by introducing the double-sided Rowhammer attack, which doubles the chances for a row to experience bit flips by hammering it from both adjacent rows at once. Of course, not all Rowhammer attacks are successful; there is always the potential that a bit flip will corrupt the data attackers are looking to steal. Put simply, a Rowhammer attack induces bit flips in rows of memory adjacent to a hammered row. The Drammer attack was the first official exploitation of the Rowhammer vulnerability in the wild, completed using a malicious app that ran on Android devices without any permissions or application vulnerabilities. The Drammer attack utilizes direct memory access (DMA), which is offered by Android's memory manager, ION. DMA allows direct access to a memory location without going through the CPU cache, which makes it quick and easy to hammer a row of memory. Another advantage for Drammer is the way ION organizes contiguous memory. The kmalloc heap (one of several kernel heaps) was designed to provide physically adjacent memory, which effectively showed attackers how physical and virtual addresses were connected.
Google's workaround for Drammer
After analyzing Drammer's capabilities, Google pushed an update for Android that disabled ION's contiguous memory allocations, which blocked Drammer's exploitation of the Rowhammer vulnerability.
Now, what is the RAMpage attack? After all the mitigations for Rowhammer, security researchers have identified another threat, called RAMpage. Researchers have stated that there are three variants of RAMpage. These vulnerabilities are not easy to exploit, and cybercriminals would need a fair amount of knowledge in order to exploit any one of the three variants, but that doesn't mean you should let your guard down; there are plenty of cybercriminals with all the know-how they need to get the job done.
How to combat RAMpage
Researchers have come up with a solution called GuardION, which introduces dummy or guard rows to isolate the DMA buffers. GuardION is a patch for Android operating systems that modifies the ION memory process by introducing empty rows in front of and behind the targeted row, rendering RAMpage's efforts ineffective.
GuardION comes with a price: installing it may negatively affect your device's performance, as introducing blank rows consumes DRAM.
What's the best advice for mitigating RAMpage? RAMpage cannot easily be identified or detected, but avoiding downloading apps from unknown sources can help you steer clear of it. However, as any system administrator can attest, this is easier said than done. Even after educating users on the dangers of downloading games or anonymous applications, it still happens, which means the risk of RAMpage is always there. Employing a mobile device management (MDM) solution can help you easily mitigate RAMpage. With Mobile Device Manager Plus, our comprehensive MDM solution, you can blacklist and whitelist apps, so users can only download apps you trust.
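GuardION's isolation idea, described above, can be shown with a toy DRAM model: hammering a row disturbs only its physical neighbors, so empty guard rows on either side of a DMA buffer absorb any induced bit flips. The simulation below is deliberately idealized (real DRAM geometry, bank mapping, and flip behavior are far messier):

```python
# Toy model of GuardION's defense: hammering row r disturbs rows r-1 and
# r+1, so empty guard rows around a sensitive buffer absorb the bit flips.

def hammer(rows: list, target: int) -> list:
    """Indices of rows disturbed by hammering `target` (its neighbors)."""
    return [i for i in (target - 1, target + 1) if 0 <= i < len(rows)]

def reachable_sensitive_rows(rows: list) -> set:
    """Sensitive buffer rows an attacker can disturb from any row they own."""
    hit = set()
    for i, owner in enumerate(rows):
        if owner == "attacker":
            hit.update(j for j in hammer(rows, i) if rows[j] == "buffer")
    return hit

unguarded = ["attacker", "buffer", "buffer", "attacker"]
guarded   = ["attacker", "guard", "buffer", "buffer", "guard", "attacker"]

print(reachable_sensitive_rows(unguarded))  # {1, 2} -- buffer is corruptible
print(reachable_sensitive_rows(guarded))    # set() -- guards absorb the flips
```

The cost visible in the model is the same one the patch pays in practice: every guard row is DRAM that holds nothing useful, which is why GuardION trades memory (and some performance) for isolation.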
Rather than deploying the GuardION patch, blacklisting apps can be an effective way to combat the RAMpage attack. Download Mobile Device Manager Plus to mitigate RAMpage without slowing down your Android devices. Note: All Android devices dating back to 2012 can be affected by RAMpage.
Researchers at a Japanese university have devised an artificial intelligence (AI) based test to help detect early signs of dementia in patients. It is hoped that the new clinical system will lead to better treatments of the disease earlier, as it doesn’t involve invasive medical imaging. Currently, scans are used in conjunction with cognitive function tests, such as the Mini-Mental State Examination (MMSE) – a combination that can be expensive, time-consuming, and distressing for some patients. However, an ageing population means that more people are developing the disease, hence the need to pursue faster, smarter, and easier-to-use tests. Helping patients to identify problems sooner may also help them stave off the effects of some conditions, and manage them more effectively. A group of researchers from Osaka University and Nara Institute of Science and Technology have now demonstrated that it is possible to detect dementia simply from conversations between humans and AI agents. The technique uses machine learning to study the way that elderly people speak when answering simple questions from a computer avatar. The model deploys the interactive characters to create a dialogue with elderly patients, recording their speech patterns, language, and faces. The AI system then evaluates a subject’s gaze, response times, and intonation, and analyses which nouns and verbs they are using. According to tests of the machine learning system, it was able to distinguish those suffering from dementia from healthy people in a control group. Researchers claim that the success rate was around 90 per cent, from asking just six questions. The questions are based on internationally recognised criteria contained in the Diagnostic and Statistical Manual of Mental Disorders. 
Senior research author Takashi Kudo said, “If this technology is further developed, it will become possible to know whether or not an elderly individual is in the early stages of dementia through conversations with computer avatars at home on a daily basis. It will encourage them to seek medical help, leading to early diagnosis.”

Plus: Flexible ‘postage stamp’ can take blood pressure

In related health technology news, researchers have developed a flexible, adhesive patch that can monitor a patient’s blood pressure, according to the MIT Technology Review. Made of silicone elastomer, the postage-stamp-sized wearable works by sending ultrasonic waves into the skin, which reflect off the wearer’s bodily tissues and blood. In theory, the patch could be used to monitor patients at home, with the data collected over time and analysed on a laptop. As well as avoiding the need for multiple appointments, uncomfortable pressure tests, and invasive procedures, the wearable may help cut costs and reduce the risk of infection. The system is being developed at the University of California, San Diego.

Additional reporting: Chris Middleton.

Internet of Business says

The combination of wearable devices and AI has been a major development hotspot in 2018, as these recent Internet of Business reports explain:

- Read more: AiServe develops A.I. wearable for the blind and partially sighted
- Read more: Health IoT: Wearable can predict older adults’ risk of falling
- Read more: Healthtech: Wearable helps injured athletes recover faster
- Read more: Mind-reader: MIT’s AlterEgo wearable knows what you’re going to say
- Read more: Health IoT: New wearable can diagnose stomach problems
- Read more: Health IoT: Scientists develop diet wearable – for your teeth
- Read more: Consumer wearables can detect major heart problem

Our Internet of Health conference is taking place in Amsterdam on 25-26 September. Click the logo, below, for more details.
<urn:uuid:cb322d16-d1c1-4c40-9096-9c17298fc061>
CC-MAIN-2022-40
https://internetofbusiness.com/health-tech-a-i-detects-dementia-smart-plaster-takes-blood-pressure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00713.warc.gz
en
0.929715
759
3.484375
3
Online Certificate Status Protocol (OCSP)

What is OCSP?

The Online Certificate Status Protocol (OCSP) is an alternative to the certificate revocation list (CRL) and is used to check whether a digital certificate is valid or if it has been revoked. OCSP is an Internet protocol that certificate authorities (CAs) use to determine the status of secure sockets layer/transport layer security (SSL/TLS) certificates, which are common applications of X.509 digital certificates. This helps web browsers check the status and validity of Hypertext Transfer Protocol Secure (HTTPS) websites.

What is a Certificate Authority?

CAs are central to issuing and managing digital certificates, ensuring secure communications, and verifying user identities. They do this through the public key infrastructure (PKI) X.509 certificate, which contains information like the owner’s name and public key, the name of the issuing CA, the certificate’s validity date, and what it can be used for. The CA signs the certificate with its private key to prevent this information from being modified; anyone who has the CA’s public key can use it to verify the signature on the issued certificate. Certificates are requested from the CA via a certificate signing request (CSR).

Why Is Certificate Revocation Important?

Digital certificates are vital to guaranteeing trust on the internet, like a digital identification card for websites. A web browser requires any HTTPS website to provide a certificate that validates its hostname, backed by a private key held by the site. Take note that if an attacker is able to obtain access to that private key, they can impersonate the website. So certificate revocation is crucial to mitigating vulnerabilities and potential key compromise. The website's owner can revoke a certificate by informing the issuer that the certificate should no longer be trusted. A good example of this is Cloudflare revoking all managed certificates when the Heartbleed vulnerability was found capable of stealing private keys.

How Does OCSP Work?
When a certificate validity request is made, an OCSP request is submitted to an OCSP responder, which is a server operated by the issuing CA. The OCSP responder checks the certificate against the CA’s records and advises whether the certificate is valid or not, with a response of good, revoked, or unknown. Most popular, widely used web browsers support OCSP, including Apple Safari, Internet Explorer, Microsoft Edge, and Mozilla Firefox.

OCSP and CRL

Web browsers use several methods to check if a site’s certificate has been revoked. OCSP and CRL are two of the most common. A CRL is a list containing serial numbers of all certificates that have been revoked by a CA. However, CRLs can present issues, as they can become outdated and have to be downloaded. OCSP is a protocol used to discover the revocation status of a single certificate, with signed responses asserting that the certificate has not been revoked. This makes it a more effective and efficient validation process, as it does not require a list to be downloaded to discover the status of a certificate.

OCSP checking does cause problems of its own, including increased costs for CAs and concerns around privacy. For example, live OCSP checking can leak private browsing data, as requests are sent over unencrypted Hypertext Transfer Protocol (HTTP) traffic and are tied to specific certificates. Therefore, sending a request tells a CA which websites a user visits, and anyone on the network path between the browser and the OCSP responder can see the sites they visit. It can also create browser performance issues, such as slow browsing experiences caused by third parties confirming the validity of a certificate. Some of these issues can be addressed through OCSP stapling, a technique that delivers revocation information to browsers: the server staples a current OCSP response into the HTTPS connection itself.
This requires less traffic between the server and the browser, which then no longer has to make the OCSP request itself.

How Can Fortinet Help?

Certificate management is crucial to creating, storing, and revoking digital certificates. It can be provided through the Fortinet identity and access management (IAM) solution, which allows organizations to confirm the identity of their devices and users as they enter a network. This ensures that organizations can securely connect only the right users to only the resources they should have access to. The Fortinet solution includes FortiAuthenticator, which provides access management, authentication, and single sign-on (SSO) to prevent unauthorized access to networks and limit users to accessing only the right resources. This is key to creating effective security policies, protecting sensitive data and networks, and providing appropriate access control levels.
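The good/revoked/unknown decision described above can be sketched with a mock responder. The serial numbers and lookup tables here are invented stand-ins, not a real CA; a production client would send a DER-encoded request per RFC 6960 to the responder URL found in the certificate's Authority Information Access extension.

```python
# Minimal sketch of the OCSP status decision: the responder knows which
# serials the CA has issued and which it has revoked, and answers with
# one of the three standard statuses. Hypothetical data, not a real CA.

REVOKED_SERIALS = {0x2B7F}           # serials the CA has revoked
KNOWN_SERIALS = {0x2B7E, 0x2B7F}     # serials the CA has ever issued

def ocsp_status(serial):
    """Return the OCSP certificate status: good, revoked, or unknown."""
    if serial not in KNOWN_SERIALS:
        return "unknown"             # responder has no record of it
    return "revoked" if serial in REVOKED_SERIALS else "good"

print(ocsp_status(0x2B7E))  # good
print(ocsp_status(0x2B7F))  # revoked
print(ocsp_status(0x9999))  # unknown
```

With stapling, the same signed answer is fetched once by the web server and attached to the TLS handshake, so each browser does not have to query the responder (and reveal its browsing) itself.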
<urn:uuid:878b8e55-52c2-442c-b4ac-9fc5e4f43b7a>
CC-MAIN-2022-40
https://www.fortinet.com/resources/cyberglossary/ocsp
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00713.warc.gz
en
0.910277
961
3.421875
3
The Economist Intelligence Unit has launched an online tool designed to tally the bill from cyberattacks. Incidents of cybercrime are reported in the media almost every day, yet reliable estimates of their financial impact on companies are few and far between. CyberTab, sponsored by Booz Allen Hamilton, is designed to address this gap.

Tool users, including information-security, risk, financial and other senior executives, can input a range of expenses and estimated costs for either a specific attack scenario or an actual breach. CyberTab will then generate a comprehensive report that explains the total cost and enables a cost-benefit analysis of security strategies. The tool does not require information that would identify executives or their companies. It also does not collect data without opt-in permission and is free to use.

Riva Richmond, editor at The Economist Intelligence Unit, said: “Today cyberattacks are ‘not if, but when’ for nearly all companies—and the financial fallout is severe and likely to get worse. Yet deep corporate fear about disclosure has led to a culture of secrecy that has hurt our ability to understand the size and shape of the problem we face. It also makes it harder for companies to learn from each other and devise more robust defence strategies.

“Executives can gain new insight into their own company’s risks by using CyberTab, and can do so anonymously, leaving no trace of their data. But we want people to be part of the solution and take part in our research programme. By submitting data anonymously, they will be taking a step towards a broader understanding of this complex problem.”
<urn:uuid:ffbd1eaf-33da-4b3b-9b60-f43f2284da39>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2014/04/01/free-tool-calculates-the-damage-of-a-cyber-attack/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00113.warc.gz
en
0.950879
328
2.53125
3
|Developing Internet Applications| This chapter explains what is meant by an Internet application, and what its different components are. An Internet application is a client/server application which uses standard Internet protocols for connecting the client to the server. You can use exactly the same techniques to create a true Internet application, which is available publicly through the World-Wide Web, or to create an intranet application. An intranet application is one which runs on your corporate intranet, and is only available to the staff in your corporation. Whenever we talk about Internet applications, we mean either true Internet applications or intranet applications. For an introduction to the World-Wide Web, see the appendix Introduction to the World-Wide Web. Internet applications are thin-client, thick-server. This means that the client end, the part the end-user sees and interacts with, is only responsible for the user interface. The client runs on a Web browser - the standard tool for accessing the Internet. All the processing is done at the server end - where your corporate data is. Because your applications use standard Internet protocols for client/server communications, you can make your applications cross-platform. The server-side programs are written in Micro Focus COBOL, and so you can run them on Windows NT or UNIX servers (you need to purchase Micro Focus COBOL for UNIX to run applications on UNIX). The server-side program for an Internet application communicates with the client through the Web server software for the machine. The interface between a COBOL program and the Web server running it is transparent to the programmer, and you can use any of the three industry-standard Web server interfaces without code changes: These are explained in more detail in the chapter CGI, ISAPI and NSAPI Programs. 
By default, all applications created with NetExpress are built for use with CGI (supported by all Web servers), and we recommend carrying out development and debugging using CGI. You can convert any COBOL CGI program to ISAPI or NSAPI by changing NetExpress compiler and build settings, and rebuilding the program. The client-side user interface can be written using any mix of the following: For more information, see the chapter Forms and HTML. Form Designer also enables easy scripting to add extra client-side functionality. You can carry out common validation functions (see the chapter Form Validation), or add your own scripts with the help of the Script Assistant (see the chapter Client-side Programming). Internet applications are client/server applications, and can be split into two pieces: The form is the part your end-user sees. It is displayed in a Web browser, and provides controls by which your end-user can enter data. The picture below shows you a sample form: Figure 1-1: A sample form for an Internet Application When the end-user clicks the Send Form button, the information on the form is packaged up, and sent to a server-side program. The server-side program only runs when it is started from a form, or from a link on a Web page. The server-side program processes the information on the form, and returns a page to the end-user. Depending on what the program does, the result is displayed on a form on the page returned, or perhaps as text. The example above is very simple. Real applications are likely to be more complex, and could consist of several forms and server-side programs linked together. We have categorized server-side programs into two types: A symmetric server-side program uses the same form for input and output. For example, a database query/update program presents you with a set of fields for a record or a SQL query. You use the same fields to enter data to query the database, as the program uses to return you the result. 
Figure 1-2: A symmetric server-side program

An asymmetric server-side program uses a different form for input and output. For example, an order entry program which uses one form for you to enter the customer details, and then displays a second form on which you enter a new order.

Figure 1-3: An asymmetric server-side program

Asymmetric server-side programs enable you to build up complex applications by chaining together different forms and server-side programs. The diagram below shows an application where the output from the first program starts a second server-side program. The second program outputs different forms depending on its processing path.

Figure 1-4: A more complex application

Web browsers display HTML pages. An HTML page can contain one or more forms. A server-side program can only receive input from a single form. The server-side program returns a page to the browser, which can also contain one or more forms. The basic unit of input to a server-side program is always a form, and the basic unit of output is always a page. But most pages only contain a single form, and since most of your focus when you are creating applications is on forms rather than pages, the diagrams only show pages.

Internet applications have the flavor of traditional CICS applications. If you think of the form as a BMS input screen, and a CICS transaction as the server-side program, then the flow of control is very similar. Like a CICS transaction, a server-side program only runs for long enough to process some data and return a result. A complex CICS application consists of several input screens, and several transactions. The end-user has the illusion that the application is running all the time, when the application consists of a series of transactions which run for a short time and then disappear.

An application based on a symmetric server-side program (see the previous section for an explanation of this term) works like this: the end-user fills in the fields on the form and submits it; the server-side program processes the data and returns the same form, with the results filled in; the end-user changes the data on the form and submits it once more. This runs the server-side program again.
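The request/response cycle just described can be sketched as a tiny CGI-style program. It is written in Python rather than COBOL purely for brevity, and the form field names are invented for illustration; the point is the shape of the cycle: parse the submitted fields, write an HTML page, exit.

```python
#!/usr/bin/env python3
# Sketch of a short-lived server-side program: it runs once per form
# submission, reads the form data, returns an HTML page, and exits --
# much like a CICS transaction. (No HTML escaping, for brevity.)
import os
from urllib.parse import parse_qs

def handle_request(query_string):
    fields = parse_qs(query_string)              # form fields -> values
    name = fields.get("name", ["world"])[0]
    body = f"<html><body><p>Hello, {name}!</p></body></html>"
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    # For a GET submission, the web server passes the form data to the
    # CGI program in the QUERY_STRING environment variable.
    print(handle_request(os.environ.get("QUERY_STRING", "name=COBOL")))
```

A symmetric program would render the same form back (with results filled in); an asymmetric one would emit a different form, chaining the user on to the next server-side program.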
There are many possible variations on the above sequence. For example, the sample CGI application supplied with NetExpress (in netexpress\base\demo\cgi\cgiprg1.app) starts with an input form, which executes a CGI program that returns a simple HTML page. The types of asymmetric server-side program discussed briefly in the previous section may be a good deal more complex, but the essential point is that server-side programs only run long enough to return a result.

Copyright © 1998 Micro Focus Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
<urn:uuid:48a46859-8b0e-4fe8-96bf-8d37564036cd>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/net-express/nx30books/piover.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00113.warc.gz
en
0.885339
1,349
3.234375
3
For many people within healthcare and science, big data has the potential to answer some of the most important questions we face daily. From unlocking the secrets of human nature to understanding the trajectory of illness, there's a lot that predictive modeling software and healthcare performance management can do. However, there are limits to what data can offer for an individual. With the sheer volume of information available for even a single case, it's becoming increasingly apparent that making any sort of progress in these fields requires considerable collaboration between doctors and scientists and data professionals.

Data is no longer the problem, management is

To illustrate the considerable strides made in science using data, Nature Magazine noted that genome sequencing became more affordable after the completion of the Human Genome Project. In 2001, a human-sized genome cost $10 million to sequence; the price fell by half a few years later. Now, it's possible to sequence a person's genome for $1,000, along with more specific studies such as unique mutations among certain individuals in the population. With so much information pouring in due to the affordability of gene sequencing, the newest bottleneck is handling all the data. This brings two issues. The first is the lack of collaboration between data professionals, such as bioinformaticians and statisticians, and the scientists who need the research material. To address this concern, strengthening communication between these groups is essential. Any big data-related project should take strides to address this concern directly, and some, like the Blue Brain Project, are making progress on that front. The second is the extremely competitive environment common in research science today, with some scientists worried their research will be copied and they won't be credited for any finds.
This requires publications that encourage sharing, ensure credit is given, and require any scientist who uses the data to acknowledge prior contributors. Such matters of fostering collaboration are equally important in medicine, where lives are on the line. One particular effort worth noting is the joint work of the New York Genome Center and IBM. The former organization noted in a release that it is working closely with the latter company's Watson platform to create a national tumor registry based on genetic characteristics. With this method, part of President Barack Obama's Precision Medicine Initiative, researchers hope to create effective treatments for cancer patients and demonstrate the benefits of collaboration in science.
<urn:uuid:e9087f4a-ec89-4252-8279-e0d2068ae461>
CC-MAIN-2022-40
https://avianaglobal.com/big-data-in-genetics-is-meaningless-without-collaboration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00113.warc.gz
en
0.951412
472
2.953125
3
How to Keep Your Data Secure in Light of Apache Log4j Vulnerabilities

In quick succession in December, The Apache Software Foundation released information on two critical vulnerabilities in its Log4j Java-based library. The first vulnerability, CVE-2021-44228, also known as Log4Shell or LogJam, was reported as an unauthenticated remote code execution (RCE) vulnerability. By exploiting how the library logs error messages, it could lead to a complete system takeover. Due to its critical nature and ease of execution, it received the highest possible Common Vulnerability Scoring System (CVSS) score of 10. The second vulnerability, CVE-2021-45046, was discovered shortly after the initial exploit was patched. It is rated 3.7 out of 10 on the CVSS and could lead to a denial of service (DoS). At the time of writing, patches have been released to address both vulnerabilities.

Why should I care about Log4j?

Log4j is one of the most widely used logging libraries in the world. Its adaptable logging capabilities make it useful across any type of infrastructure or application. Countless enterprise, government, and open-source applications use Log4j. As a result, everyone has been rapidly patching their software since the vulnerabilities were announced.

Remote code execution (CVE-2021-44228)

The potential scope of the initial RCE vulnerability CVE-2021-44228 is astounding. Any device or app connected to the internet running Log4j versions 2.0-2.14.1 is at risk. In addition, exploiting the vulnerability is relatively straightforward. By simply sending a malicious string that then gets logged by the application, attackers can exploit a feature in Log4j that can be used to retrieve information. In this case, attackers use the Java Naming and Directory Interface (JNDI) to make an external network request for the malicious payload in the form of a Java file.
From there, the attacker is free to deliver malware or establish backdoor access to the infrastructure.

Denial of service (CVE-2021-45046)

The second vulnerability, CVE-2021-45046, was uncovered shortly after the initial patch was released. According to the CVE description, the initial patch was “incomplete” and this new exploit “could allow attackers… to craft malicious input data using a JNDI lookup pattern resulting in a denial of service (DOS) attack.”

Key Action: Apply the latest patch as soon as possible

Update any server, app, or resource that uses Log4j with the latest patch immediately. This patch includes coverage for both the latest DoS vulnerability and the original RCE vulnerability.

How SASE can help protect you against this type of risk

As soon as the proof-of-concept (PoC) exploit was released on GitHub, threat actors began actively scanning the internet for vulnerable assets. Lookout customers who use the Lookout Security Platform, our Secure Access Service Edge (SASE) solution, are equipped with several ways to protect their sensitive data and mitigate risks associated with this vulnerability.

Restrict access to private apps

To mitigate the possibility of data exfiltration, organizations should restrict access to their apps running on Infrastructure-as-a-Service (IaaS) and on-premises data centers using Zero Trust Network Access (ZTNA). By implementing user-to-app segmentation with ZTNA, the apps are cloaked and not openly accessible via the internet. In addition, ZTNA limits the possibility of attackers using stolen credentials to access these resources, move laterally, and discover sensitive data to exfiltrate.

Monitor user and app behaviors

Organizations should implement defense-in-depth strategies by closely monitoring both user and app behaviors.
By flagging behavior indicative of an exploit, such as an anomalous login location or unusual file download volume, you will be able to detect and respond to malicious activities across your cloud and on-prem infrastructure as well as your endpoint devices.

Encrypt sensitive data

Lookout SASE also has integrated enterprise digital rights management (E-DRM) to encrypt data so that only authorized users have access even if it’s passed around offline. Sensitive data is dynamically identified by data loss prevention (DLP) with exact data match (EDM) and optical character recognition (OCR), then classified as sensitive by Microsoft AIP and Titus. This enables you to build a number of data access and protection policies that apply to all cloud-based or on-premises data.

Monitor connected apps

Connected apps are third-party apps that integrate via authorization tokens like OAuth or JSON Web Tokens with platforms such as Office 365, Google Workspace, and Salesforce. These apps remain mostly invisible but can still upload and download data from the platforms with which they’re integrated. Lookout SASE can discover and monitor connected apps for suspicious activities while also providing remediation, alerting, revocation of access, and blocking capabilities.

To learn why integrated platforms will reduce gaps and vulnerabilities, download a complimentary copy of Gartner® Predicts 2022: Consolidated Security Platforms Are the Future.
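Beyond the mitigations above, the first practical step is knowing where Log4j lives in your estate. The sketch below flags vulnerable versions in a simple dependency list; the dependency data is invented, the version cutoff follows the advisories current when this article was written (2.x below 2.16.0 needed the fix for both CVEs), and a toy checker like this is no substitute for a real software inventory scan.

```python
# Illustrative sketch: flag Log4j versions affected by CVE-2021-44228 /
# CVE-2021-45046 in a dependency list. Hypothetical data; version
# cutoff per the advisories at the time (2.0 <= version < 2.16.0).

def parse_version(v):
    """'2.14.1' -> (2, 14, 1), so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

def is_vulnerable(artifact, version):
    if artifact != "log4j-core":
        return False
    return (2, 0) <= parse_version(version) < (2, 16)

deps = [("log4j-core", "2.14.1"), ("log4j-core", "2.16.0"),
        ("logback-classic", "1.2.3")]
for name, ver in deps:
    flag = "PATCH NOW" if is_vulnerable(name, ver) else "ok"
    print(f"{name} {ver}: {flag}")
```

In practice you would feed this from a build manifest or a bill-of-materials tool rather than a hard-coded list, and track the project's advisory page for newer fixed versions.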
<urn:uuid:55477497-81a4-4a2b-b1c4-a63d678bc047>
CC-MAIN-2022-40
https://www.lookout.com/blog/protect-against-apache-log4j?utm_source=blog&utm_medium=web
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00113.warc.gz
en
0.930396
1,085
2.84375
3
The world of networking has a language of its own which is continually evolving as new technologies emerge, innovative ways of delivering network services are deployed, and global connectivity becomes increasingly essential. While the list of “must-know” terms is too long to cover in a single blog, here are some to incorporate into your vocabulary as you evaluate how cloud networking can benefit your organization.

- BYOD (Bring Your Own Device) — The practice of employees using their personal mobile devices to do their jobs, typically requiring connection to enterprise networks and accessing enterprise data and cloud applications. This trend brings with it many challenges, which are described in detail here.
- Firewall as a Service (FwaaS) — a firewall delivered as a cloud-based service. Unlike appliance-based firewalls that require management of discrete firewall appliances, FwaaS is a single logical firewall in the cloud that can be accessed from anywhere. Click here for a detailed overview of FwaaS.
- Hybrid Wide-Area Network (Hybrid WAN) — a type of wide area network that sends traffic over two or more connection types. For example, an MPLS connection and an Internet connection.
- Internet Backhaul — moving large amounts of data between major data aggregation points. Although it’s expensive, organizations often use MPLS to backhaul branch traffic to their corporate data centers to secure traffic and enforce policies.
- Jitter — the varied delay between packets that can result from network congestion, improper queuing, or configuration errors. See also Latency.
- Metro Ethernet — also known as “carrier Ethernet,” an Ethernet-based network in a metropolitan area used for connectivity to the public Internet, as well as for connectivity between corporate sites that are separated geographically.
- MPLS (Multiprotocol Label Switching) — a technology for moving traffic between locations.
Services based on the MPLS technology represent the traditional approach for providing predictable connectivity between locations.
- NFV (Network Functions Virtualization) — abstracts network functions, so they can be installed, controlled, and manipulated by software that runs on standardized compute nodes.
- Network Throughput — the rate of successful message delivery. It is affected by latency, packet loss and WAN optimization.
- QoS (Quality of Service) — the capability of a network to provide better service to selected traffic, including dedicated bandwidth and controlled jitter and latency.
- Secure Web Gateway — a cloud-based solution that filters unwanted software/malware from user-initiated Internet traffic and enables granular and central security policy creation.
- SLA (Service Level Agreement) — an agreement between a service provider and the customer that describes the products or services to be delivered, and outlines scope, quality, and responsibilities.
- SDN (Software-defined Network) — a network architecture that separates the control and data planes in networking equipment. Network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from applications.
- SD-WAN (Software Defined Wide Area Network) — an application-based routing system, rather than a traditional, packet-based network routing system. It uses SDN to determine the most effective way to route traffic to remote locations. For a detailed overview of SD-WAN and its benefits for global business, refer to our explainer here.
- VPN (Virtual Private Network) — a network technology used to create a secure network over the Internet or any private network. It links two or more locations on a public network as if they are on a private network.
- Latency — the time needed to reach a destination, typically measured as the round trip to the destination and back (called round trip time, or RTT). See also Jitter.
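The latency and jitter definitions above can be made concrete with a few RTT samples. This is a simplified illustration: average latency as the mean RTT, and jitter estimated as the mean absolute difference between consecutive samples (RFC 3550 uses a smoothed variant of the same idea).

```python
# Illustrate latency vs. jitter from round-trip-time (RTT) samples in
# milliseconds. Sample values are invented; the jitter estimate here
# is the mean of |successive differences|, a common simplification.

def latency_and_jitter(rtts_ms):
    avg_latency = sum(rtts_ms) / len(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return avg_latency, jitter

rtts = [40.0, 42.0, 39.0, 45.0, 41.0]
lat, jit = latency_and_jitter(rtts)
print(f"avg latency = {lat:.1f} ms, jitter = {jit:.2f} ms")
# -> avg latency = 41.4 ms, jitter = 3.75 ms
```

A link can have low average latency yet high jitter, which is why QoS policies target both: real-time traffic such as voice suffers more from variation in delay than from delay itself.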
If you’re interested in learning more about the concepts behind these and other cloud networking terms, make sure to subscribe to our blog.
<urn:uuid:bb1f117f-d3b8-4884-ac28-6671c9c48cd7>
CC-MAIN-2022-40
https://www.catonetworks.com/blog/networking-glossary-top-16-networking-terms-everyone-should-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00113.warc.gz
en
0.921475
812
2.578125
3
It is crucial to understand pegged requirements to maintain an efficient supply chain. Pegged requirements are when you tie a specific sales order or purchase order to a particular production order. This blog post will explain pegged requirements and how they work in SAP and supply chain management. Stay tuned for more information!

Pegging in SCM

Pegging is the process of tying together two or more orders to create a master and dependent relationship. Pegging can be done with different types of sales and purchase orders. Pegging allows for better visibility into your supply chain and better planning, forecasting, and scheduling for production management. Therefore, pegged requirements are a very effective tool for supply chain management. Pegging allows you to see the future demand for your products and manage your production schedule accordingly.

Why is pegging needed?

When a firm produces items, it first predicts how much of each product it will sell in the future. Then, based on this estimate, the firm calculates how much of each raw material and component it will require to manufacture the necessary amount of goods. Forecasting future demand is never an exact science, so businesses frequently must make educated guesses. Companies may utilize “pegged requirements” to help them predict their sales. This means they establish a need for a particular raw material or component based on their higher-level forecast. To put it another way, if the higher-level forecast changes, so does the need for the relevant raw material or component.

What is a Pegged Requirement?

The pegged requirement in production planning is a way to show the next-level parent item (or customer order) as a source of demand. This allows you to track where your materials are going and can help with inventory management, traceability, etc.
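The parent/child relationship behind pegging can be illustrated with a toy where-used lookup over a bill of materials. The item names and BOM are invented for illustration; real SAP pegging works on live order and planning data, not a static dictionary.

```python
# Toy where-used lookup: given a bill of materials (parent -> list of
# components), find every higher-level item a component's demand is
# pegged to, climbing up to the top-level products. Illustration only.

BOM = {
    "BICYCLE": ["FRAME", "WHEEL"],
    "WHEEL": ["RIM", "SPOKE"],
}

def where_used(component, bom):
    """Return all items whose production creates demand for `component`."""
    parents = {p for p, comps in bom.items() if component in comps}
    for p in list(parents):
        parents |= where_used(p, bom)   # climb toward the top level
    return parents

print(sorted(where_used("SPOKE", BOM)))  # ['BICYCLE', 'WHEEL']
```

Reading the result: demand for SPOKE is pegged to WHEEL, and through it to BICYCLE, so a change in the BICYCLE forecast propagates down to the spoke requirement.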
The pegged requirement is a requirement that shows the next-level parent item (or customer order) as the source of the demand, i.e., by using the where-used capability from the BOM. The following diagram attempts to show the parent-and-child relationship mentioned above.

What is pegging in SAP?

Pegging is a mechanism that allows you to display the relationships between different objects in the SAP system. For example, when you create a sales order, the system automatically pegs that order to the relevant customer account. Pegging can also display the relationships between materials, production orders, and more. In addition to providing an overview of relationships between objects, pegging can be used to trace data origins or track changes over time. For example, if you need to know where a particular material came from, you can use pegging to trace its origins back to the original purchase order. Similarly, if you need to track how a production order has changed over time, you can use pegging to view a history of all the modifications that have been made. In short, pegging is a powerful tool that can be used for various purposes in the SAP system.

What are pegged requirements in SAP?

In SAP, pegged requirements are called dependent requirements. For example, in the Materials Management (MM) module there is a function called "Create dependent requirements" (transaction code MB51). This transaction allows you to create pegged requirements between two sales orders, between a purchase order and a production order, or between two production orders. The dependent requirement is created as soon as the first order is released.

Pegged requirements are a way of forecasting future demand by making the requirements for certain raw materials or components dependent on higher-level forecasts. Companies do this to aid forecasting, which is complex and not an exact science. Hoping this post was helpful!
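The where-used idea behind pegging can be illustrated with a small sketch. This is not SAP code; the BOM data and function names are invented for illustration, showing how a component's demand can be traced back to the parent items that drive it.

```python
# Minimal illustration of pegging via a BOM "where-used" lookup.
# The BOM maps each parent item to the components it consumes;
# where_used inverts that mapping so a component's demand can be
# traced ("pegged") back to the parent orders that drive it.

BOM = {
    "bicycle": ["frame", "wheel", "wheel"],
    "wheel": ["rim", "spoke", "tyre"],
}

def where_used(bom):
    """Invert a BOM: component -> set of parent items that use it."""
    usage = {}
    for parent, components in bom.items():
        for comp in components:
            usage.setdefault(comp, set()).add(parent)
    return usage

def peg_demand(item, bom, usage):
    """Walk up the BOM to find the top-level items whose demand
    ultimately drives demand for `item`."""
    parents = usage.get(item, set())
    if not parents:
        return {item}  # already a top-level item
    top = set()
    for parent in parents:
        top |= peg_demand(parent, bom, usage)
    return top

usage = where_used(BOM)
print(peg_demand("spoke", BOM, usage))  # {'bicycle'}
```

Here a change in the forecast for bicycles would propagate down to spokes, which is exactly the dependent-requirement behaviour described above.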
https://www.erp-information.com/pegged-requirements.html
Microsoft Argues Cloud Boosts Green Computing

Providing compute and storage on demand and reducing costs are not the only benefits of cloud computing. Moving to the cloud can also reduce the carbon footprint of an enterprise, according to a report released by Microsoft today. By shifting computing operations to the cloud, organizations can reduce their carbon emissions by 30 percent, according to the study on which the report is based. Commissioned by Microsoft, the study was performed by IT outsourcing firm Accenture and WSP Environment & Energy, an environmental consulting organization.

One might argue it states the obvious, but the study is based on what Microsoft describes as a lifecycle analysis that calculates the environmental impact of IT products or services throughout the span of their implementations. Enterprises that run Microsoft's Business Productivity Online Services, including Exchange Online, SharePoint Online and Dynamics CRM Online, can reduce emissions by 30 percent compared to running the same applications in house. For smaller organizations with 100 users, making the move can reduce emissions by more than 90 percent, the study concluded. For mid-sized organizations with approximately 1,000 users, the range is between 60 and 90 percent, according to the report.

According to an executive summary of the report, the drivers for reducing emissions are as follows:

- Dynamic Provisioning: Reducing wasted computing resources through better matching of server capacity with actual demand.
- Multi-Tenancy: Flattening relative peak loads by serving large numbers of organizations and users on shared infrastructure.
- Server Utilization: Operating servers at higher utilization rates.
- Datacenter Efficiency: Utilizing advanced data center infrastructure designs that reduce power loss through improved cooling, power conditioning, etc.
While the report acknowledges that many large organizations can reduce energy use and emissions on their own, it argues that those operating large public cloud services "are best positioned to reduce the environmental impact of IT because of their scale." Called Cloud Computing and Sustainability: The Environmental Benefits of Moving to the Cloud, the report can be downloaded here.

Jeffrey Schwartz is editor of Redmond magazine and also covers cloud computing for Virtualization Review's Cloud Report. In addition, he writes the Channeling the Cloud column for Redmond Channel Partner. Follow him on Twitter @JeffreySchwartz.
https://mcpmag.com/articles/2010/11/04/microsoft-argues-cloud-boosts-green-computing.aspx
Last year the world was affected by a mass-scale XOR.DDoS attack against Linux PCs at a rate of over 150 Gbps. The malware in question, Malware.XOR.DDoS, was detected in 2014 and has been the subject of many research analyses. While the original attack targeted Linux, the newer version can also attack Windows PCs, turning them into "zombie" PCs controlled through a Command & Control (C&C) server.

XOR.DDoS generates huge volumes of data and meaningless strings in its SYN flood attack, which CDNetworks says is a serious threat because most companies do not have the network processing capacity to deal with the data. In addition, the attack uses TCP, which small network lines cannot block.

The report found that 77.1% of the attacks occurred in China and the United States, mainly against Linux servers that use cloud services and against large-scale cloud service providers. It suggests that because SSH services (22/TCP) are used in most attacks, cloud systems without proper security management are the most likely to have been hacked.

CDNetworks says the SYN and data flooding can theoretically be blocked if SYN packets carrying data are detected. The company recommends using SYN cookies, which are effective against spoofing attacks: the SYN sequence values are compared, and if they are not identical, the packet is discarded.

Alternatively, First SYN DROP can be another effective method of blocking attacks. "This technique works by saving the first SYN packet information in the memory and dropping the packet. If the session request is normal, the same IP will send the SYN request again. If the request is made for attack, another SYN request from another IP will be received," a statement from CDNetworks says.

The company recommends investing in a large-scale network line to counteract large TCP attacks such as XOR.DDoS.
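The First SYN DROP logic quoted above can be sketched in a few lines. This is a simplified illustration of the idea, not an implementation of any vendor's product; real filters would also expire entries and track ports, not just source IPs.

```python
# Simplified sketch of "First SYN DROP": the first SYN from a source IP
# is recorded and dropped. A legitimate client retransmits the SYN from
# the same address and is then allowed through, while many spoofed
# attack sources never retry from the same IP.

seen_syns = set()

def handle_syn(src_ip):
    """Return 'DROP' for a first-time SYN, 'ACCEPT' on retransmission."""
    if src_ip in seen_syns:
        return "ACCEPT"
    seen_syns.add(src_ip)
    return "DROP"

print(handle_syn("198.51.100.7"))  # DROP (first attempt is discarded)
print(handle_syn("198.51.100.7"))  # ACCEPT (retry from same IP passes)
```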
https://www.cdnetworks.com/news/cdnetworks-explains-the-brute-force-of-xor-ddos-attacks/
More than 60% of Americans use social media platforms. Given their growing popularity, many websites now offer the option to sign in using social credentials instead of a username and password. No more filling out endless signup forms just to make an online purchase? No more scrambling to remember which email address, username, and password combination you used? It almost sounds too good to be true. Turns out that may be the case. Social logins seem like an easy way to handle an ever-growing number of accounts, but could they cause more harm than good?

One login means one password

In theory, only having one password to sign you in anywhere seems great. It means there's no chance of forgetting the credentials for your bank account, or which username and password to use to order more cereal on Amazon. Unfortunately, password reuse makes you more vulnerable to data breaches. It only takes one breach to expose your information, giving hackers the keys to every account you've used that password for. And if you use the same password for your social media account, anyone with your password can access every account protected by your social login. Something that was so simple to use could potentially turn into a nightmare that's almost impossible to stop.

How secure is your data?

Facebook, Twitter, LinkedIn, Google. Between them, these four Internet giants have millions of users. Which means they have a lot to lose in the event of a password breach or hack, and they take the necessary steps to protect against it. However, that doesn't mean that the website you sign in to with your social account has the same level of protection in place. With access to so much of your personal data, there's a lot at stake in the event of a hack or data breach. Signing in to a website with your social credentials gives you full access, but social logins can be a two-way street.
When you sign up and sign in using your social media accounts, you could be granting that site permission to access everything that's stored in your social profiles. This can include all your interests, relationships, friends, locations, and even media preferences. So, if I used Facebook to sign in to Pinterest and Pinterest was hacked, the hackers would have access to my boards full of crochet projects and ridiculous food ideas I'll never make. But they could also end up with access to my entire list of friends, location data, and even personal identifying information like my phone number, birthdate, and family members' names. Having all that information leaves me vulnerable to further hacking and social engineering, with the potential to expose even more of my data.

What if things change?

For most websites, after you've created a username and password, you can edit your information as needed. Which means if you ever retire an email account, you can replace the original address with a new one to ensure that you won't ever get locked out. Unfortunately, this isn't as simple if you created your account with a social login.

When I first used Spotify, I was impatient and just wanted to get started without having to fill out a bunch of information and wait for a confirmation email. So I clicked the "Login with Facebook" option and minutes later I was dancing around my room to an awesome '90s mix. Pretty awesome, right? It was at first, but now, a few years later, I've started to think about closing my Facebook account. Only problem? My Spotify account. It turns out that if you created your account using a Facebook login, it's not possible to disconnect the two. Which means if I shut down Facebook, I lose access to my Spotify account and would have to start over from scratch. I'd not only have to recreate a staggering number of playlists, but I'd lose all my recommendations, access to friends' playlists, and my podcast history.
What happens if a social network shuts down?

I mentioned earlier that Facebook, Twitter, LinkedIn, and Google all have large user bases, so it doesn't seem likely that the big social networks would ever shut down. Especially given how entwined Facebook and Twitter have become with the Internet at large. However, Google recently announced that they would be shutting down Google+ sometime in 2019 due to a data breach. So what happens to any websites or services you signed up for using your Google+ account? As of now, while Google has explored what will change going forward, they haven't laid out how this shutdown will impact people who use Google+ to sign in to various websites. All we can do is speculate, but there's a good chance they'll have to create new accounts for websites that don't allow you to edit or change your sign-in information.

What should you do instead?

The best defense you have against password breaches and other attacks is to use complex and unique passwords for every site you visit. And yes, I know how hard it is to remember a different password for every single account, so that's where a secure password manager like 1Password can help. Not only will 1Password safely and securely store your passwords for you, but it will also allow you to generate strong, unique passwords when you need them! You'll even be able to update and change your login information whenever needed, without being tied to a single social media account. And as an added bonus? 1Password has never been hacked, so you know your information will stay safe and confidential, away from the prying eyes and grabby hands of identity thieves.
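For the curious, generating a strong, unique password is straightforward with a cryptographically secure random source. This minimal sketch uses Python's standard-library `secrets` module; a password manager does this for you and adds secure storage on top.

```python
# Generate a strong random password using a cryptographically secure
# random source (the `secrets` module, not `random`).
import secrets
import string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(len(pw))  # 20
```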
https://blog.1password.com/using-social-logins/
By: Sudhakar Kumar, Sunil K. Singh

Can you log in to a computer without coming into physical contact with it, i.e., with your mind? With this process, a new biometric feature can be developed for authenticating a computer user in real time, based on brain waves. Or, can you control the security of a computer by changing the password with your thoughts? Sounds impossible, doesn't it? But it is actually possible through the Brain Computer Interface, or BCI for short.

So, what is BCI?

A Brain–computer interface (BCI) is a framework that measures the activity of the central nervous system (CNS) and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output, thereby changing the ongoing interactions between the CNS and its external or internal environment.

With the help of BCI technology, researchers have been able to control drones in the sky, improve the quality of life of older adults and elderly patients, and even enter very complex commands as input. But is it different from human–computer interaction? How does it work? To understand this concept, we need to understand brain waves and how we can measure them.

Ever thought about what thoughts are? The human brain contains hundreds of billions of nerve cells called neurons, which are connected by trillions of connections called synapses. On average, each connection transmits about one signal per second. Thoughts can be considered electric impulses generated by neurons. These neurons release special chemicals known as neurotransmitters, which generate electrical signals in neighboring neurons.

To understand neuronal firing, we need to understand three parts of a neuron: the soma, dendrites, and axons.

- Soma: The "brain" of the brain cell, which processes information and determines its importance for transfer to other cells.
- Dendrites: Tree-like structures capable of receiving and gathering information from other neurons for delivery to the soma.
- Axon: Important information is passed from neuron to neuron through axons, which act as wires. The axon is insulated with a fatty substance called myelin to keep the electrical current strong and flowing directionally.

The process of ordinary neuronal firing occurs as communication between neurons through electrical impulses and neurotransmitters. If enough neurotransmitters are released, neuron firing is repeated in the next neurons. The propagation of thoughts depends on two things: how many neurons are firing, and how often.

How does a Brain Computer Interface work?

The BCI works on the concept of neuronal firing by measuring the number of neuron firing events per second, i.e., the frequency of the brain waves, and the number of neurons involved, i.e., the amplitude of the brain waves. These changes in voltage are measured using electrodes attached to the top of your head or hooked up directly to neurons by drilling through the skull. These waves are categorized into four types, namely alpha, beta, theta, and delta waves, which signify different mental states according to frequency ranges and other features. With the help of these categories of brain waves, a BCI can measure human states such as alertness, attention, focus, and stress. Figure 1 illustrates the block diagram of a BCI.

Types of BCI

Brain–computer interfaces can be divided into three main groups: invasive, semi-invasive, and non-invasive. Each type has its own rewards and drawbacks. In invasive techniques, a special device is surgically inserted directly into the human brain to capture brain waves. In semi-invasive techniques, the special device is attached to the skull. In contrast to these two techniques, non-invasive means no device is attached to the brain itself, and it is hence considered the safest way.
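The frequency-band categorization mentioned above (alpha, beta, theta, delta) can be sketched as a simple classifier. The band boundaries used here are common textbook ranges, but they are approximate and vary between sources.

```python
# Map an EEG frequency (Hz) to one of the four bands mentioned above.
# Boundaries are approximate; different sources draw them differently.

def classify_band(freq_hz):
    if freq_hz < 4:
        return "delta"   # typically associated with deep sleep
    if freq_hz < 8:
        return "theta"   # drowsiness, meditation
    if freq_hz < 13:
        return "alpha"   # relaxed wakefulness
    return "beta"        # active concentration

print(classify_band(10))  # alpha
```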
However, non-invasive devices can only capture weaker brain signals because of the obstruction of the skull. The detection of brain signals is achieved through electrodes placed on the scalp. Within non-invasive BCI, many technologies are advancing, for instance EEG (electroencephalography), MEG (magnetoencephalography), and MRT (magnetic resonance tomography).

Electroencephalography (EEG): A physiological technique of choice for recording the electrical signals created by the brain through electrodes placed on the scalp surface. Non-invasive EEG-based BCIs are the most widely researched approach. Most BCIs were initially developed for medical applications.

There are many instances in healthcare that exploit brain signals across closely related stages, including prevention, detection, diagnosis, rehabilitation, and restoration. Smart homes, offices, and traffic systems can also exploit BCI, offering more comfort, physiological control, and, most importantly, safety. In entertainment, a computer game called BrainArena lets players join a cooperative or competitive football game through two BCIs: they can score goals by imagining left- or right-hand movements.

Potential of BCI in the future

You can gauge its potential from the fact that, in recent years, Elon Musk announced a $27 million investment at the start of Neuralink, a venture to develop a BCI that can improve human communication with the help of AI. Neuralink has received over $205 million in funding since its establishment in 2016. The Neurable company invented the "world's first brain-controlled virtual reality (VR) game"; this BCI company recently raised $10 million to move beyond game making and develop next-generation real-world applications.

References

Hill, N. J., & Wolpaw, J. R. (2016). Brain–Computer Interface. Reference Module in Biomedical Sciences. https://doi.org/10.1016/b978-0-12-801238-3.99322-x

Dougherty, E. (2011, April 26). "What are thoughts made of?" MIT Engineering. Retrieved September 14, 2021, from https://engineering.mit.edu/engage/ask-an-engineer/what-are-thoughts-made-of/

Kołodziej, Marcin, Majkowski, Andrzej, & Rak, Remigiusz. (2010). Matlab FE-Toolbox – a universal utility for feature extraction of EEG signals for BCI realization. Przegląd Elektrotechniczny, 86, 44-46.

Brain-computer interfaces hold a promising future. The Alliance of Advanced BioMedical Engineering. (n.d.). Retrieved September 14, 2021, from https://aabme.asme.org/posts/brain-computer-interface-the-most-investigated-areas-in-health-care-hold-a-promising-future

Shead, S. (2021, July 30). Elon Musk's brain computer start-up raises $205 million from Google Ventures and others. CNBC. Retrieved September 14, 2021, from https://www.cnbc.com/2021/07/30/elon-musks-neuralink-backed-by-google-ventures-peter-thiel-sam-altman.html

Cite this article: Sudhakar Kumar, Sunil K. Singh (2021), Brain Computer Interaction (BCI): A Way to Interact with Brain Waves. Insights2Techinfo, pp. 1
https://insights2techinfo.com/brain-computer-interaction-bci-a-way-to-interact-with-brain-waves/
05 Mar

What Is The Internet Of Things And Machine To Machine Technology?

The internet revolutionized the way we live and work on a daily basis. From the moment we get up, to right before we go to sleep, and almost everything in between, we are rarely offline. Now, a new type of internet is about to revolutionize the way our machines work for us. The Internet of Things (IoT) and Machine to Machine (M2M) technology is an emerging field where everyday objects are interconnected, creating the ability to send and receive data between objects without the need for human interaction. With such enormous potential in terms of efficiency gains, Norscan has been focusing its research efforts on coming up with solutions for a wide array of applications. We caught up with Yolande Cates, who's leading Norscan's IoT/M2M research efforts, to learn more.

What is Machine to Machine technology?

In its most basic form, Machine to Machine (M2M) technology is simply devices sending information between each other, either in wired or wireless fashion. M2M has a long history in industrial automation, in things like SCADA systems. Within the M2M/IoT world, this is generally an end device, such as a sensor, that autonomously sends information to another device, either allowing a more complex system to take an action or making the data available to a user who can then use that information.

What is the Internet of Things?

The Internet of Things (IoT) describes devices that gather information and then use the internet to make that information available to others. Wearable technology, remotely programmable thermostats, and smart meters are examples of relatively sophisticated consumer devices in the IoT.

What can be done with this technology?

If you can measure it, it can be monitored in the IoT. Some common things being monitored are temperature, humidity, location, and energy use, but the potential is seemingly endless!
In the medical field, the IoT is being used with heartbeat sensors, blood sugar monitoring, blood pressure, weight scales, and smart pill boxes; the data from each of these is pushed to the cloud and stored securely on a server, where it can be accessed by authorized users. In many ways it is simply adding a communication layer to existing technologies. The idea is to make information flow more quickly to those who need it.

For logistics, barcode scanners, RFID tags, and smart containers can work together to provide a real-time view of where everything is. In theory this should make things like just-in-time (JIT) delivery smoother and more efficient.

In agriculture, sensors can be used to monitor the temperature and humidity of grain bins, the level of fertilizer or fuel in a tank, and even the soil composition. With this information available remotely, farmers can see trends over time and feel more secure leaving for a period of time.

In addition to monitoring, a connected device can take action based on the information it is given. In the logistics and inventory example, the system could be programmed to automatically order more stock once inventory starts running low.

What is Norscan researching and developing within this field?

Norscan has a long history in the development of sensors and in wired communication in the telecoms sector. That expertise is being applied to new sensor types in different industries. Norscan's products have historically allowed users to access their sensor data remotely over an Ethernet connection or a Plain Old Telephone Service (POTS) phone line; this is an earlier version of the IoT. Bringing that same level of availability to a browser or mobile device is the next step in the evolution of Norscan's product line.

Why does it matter?

For businesses, the IoT leads to the possibility of improved efficiency and potential new business models.
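The automatic-reorder behaviour described in the inventory example can be sketched as a simple threshold rule. The thresholds and function names here are invented for illustration; a real system would also track outstanding orders and lead times.

```python
# Hypothetical sketch of IoT-driven replenishment: a connected sensor
# reports the current stock level, and the system places an order once
# stock drops below a reorder point.

REORDER_POINT = 10
REORDER_QTY = 50

def on_stock_reading(item, quantity, place_order):
    """Called whenever a sensor reports a stock level.
    Invokes place_order(item, qty) when stock is low."""
    if quantity < REORDER_POINT:
        place_order(item, REORDER_QTY)
        return True
    return False

orders = []
on_stock_reading("widget", 7, lambda item, qty: orders.append((item, qty)))
print(orders)  # [('widget', 50)]
```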
The average consumer is beginning to see more and more IoT devices, with smart thermostats and fitness monitors being the first two that come to mind. As the IoT grows it is expected to impact people first in the home, through home automation, and then throughout society. Cities are beginning to use IoT devices to dynamically adjust parking fees to maintain a level of availability, for example.
https://www.norscan.com/what-is-the-internet-of-things-and-machine-to-machine-technology/
Big Data Safety Challenges

Big Data, no longer just a buzzword or a promise of exciting things to come, is something that many businesses leverage daily in this modern age. With the advent of the digital age, high volumes of data are generated daily from the internet, social networks, healthcare applications, sensors, and various other sources. Big Data refers to datasets that are varied and complex to such a high degree that they cannot be manipulated with traditional data processing solutions. This explosion of diverse data is what businesses routinely use to create, fine-tune, and implement various plans and strategies.

While things are still relatively at an early stage with Big Data, some challenges and issues need solutions going forward. A lot of these challenges deal with issues of privacy and security, and coping with them can have a significant impact on the future use and potential of Big Data.

Understanding Safety Risks

At the outset, it is essential to understand some crucial characteristics of Big Data projects that explain the safety challenges that come with the territory. Here are some important points.

- Projects can often involve components that are heterogeneous, with no single standardized security scheme designed for that purpose.
- Privacy and security methods have been developed mainly for batch processing or online transaction processing systems.
- Use cases often require employing multiple Big Data sources that were not intended to be used together, creating further possible safety vulnerabilities.
- Increasing data streams from sensors in IoT applications can create internet connectivity, transport, and aggregation vulnerabilities.
- A lot of Big Data sources, like video imaging and geospatial data, were previously considered too large for analysis.
- Big Data inherently magnifies issues with jurisdiction, provenance, context, and integrity.
- Volatility becomes significant in scenarios where data is considered permanent.
- While most practices and standards have traditionally operated within the framework of a single organization, Big Data opens up possibilities of sharing data at high volumes across different organizations.

It is essential to recognize the safety challenges that arise from these factors and to acknowledge that, with the exponential increase in data generation, more and more of it will need to be protected.

Challenges with Storage and Management

Since the advent of Big Data, its scale has faced constant change. As the scale of data generation keeps increasing exponentially, an event like a leak can bring catastrophic consequences. The challenges of storing and managing Big Data come mainly from the appropriateness of the storage solutions. In many cases, this data is stored using solutions built on a horizontally scalable, distributed storage platform. In cloud storage applications, solutions like Tachyon, QFS, HDFS, Ceph, and GlusterFS can provide the storage volumes and scalability needed. However, this does not necessarily mean that they satisfy the ideal security and concurrency requirements.

Furthermore, storing data in the cloud can itself present safety challenges. In many cases, data needs to be moved between cloud and local storage for processing, and this opens up opportunities for unauthorized access. The owner of any data is liable to lose control over the information when a cloud service is used for storage. The right storage solutions need to give the data owner reliable means to test the data's safety and integrity, using mechanisms like checksums, digital signatures, trapdoor hash functions, message authentication codes, or Reed-Solomon codes.

Challenges with Transmission and Sharing

While large enterprises control a lot of Big Data, data is also shared across businesses regularly.
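Two of the integrity mechanisms named in the storage discussion above, checksums and message authentication codes, can be sketched with Python's standard library. This is a minimal illustration of the idea (keep a digest locally, recompute it on the retrieved copy, and compare), not a complete remote-integrity protocol.

```python
# Checksum- and MAC-based integrity test a data owner could run against
# remotely stored data: store a digest locally, recompute it on the
# retrieved copy, and compare. An HMAC additionally binds the digest to
# a secret key so a storage provider cannot forge it.
import hashlib
import hmac

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def mac(key: bytes, data: bytes) -> str:
    return hmac.new(key, data, hashlib.sha256).hexdigest()

original = b"big data block"
stored_digest = digest(original)

# Later, after retrieving the data from storage:
retrieved = b"big data block"
assert hmac.compare_digest(stored_digest, digest(retrieved))  # intact

tampered = b"big data block!"
assert digest(tampered) != stored_digest  # modification detected
```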
However, there is very little in terms of supervision or specifications regarding the sharing and use of that data; the self-discipline of enterprises often becomes the only determining factor behind safe practices. The sheer scale of Big Data can further compound safety challenges, with long transmission times across potentially vulnerable pipelines. Many enterprises therefore process Big Data in place and transmit only the analytical results, or classify the data into smaller parts and transmit only the data relevant for analysis downstream. For competitive advantage, companies can share business and individual data, which increases the risk of disclosure of personal or proprietary information. The need of the hour is reliable technology and a safe, secure environment.

Overcoming Safety Challenges in Big Data

There is a lot that organizations can do to identify key problem areas in Big Data safety and implement the appropriate solutions. The implementation of encryption can be a simple move with far-reaching consequences, solving significant problems. Securing endpoints, networks, applications, and physical sites brings an all-around approach to overcoming these safety challenges. Many data security technologies have also evolved with time, becoming more scalable and flexible in dealing with the requirements of Big Data operations. Here are some techniques worth taking a look at.

- User Access Control: Traditionally, minimal levels of user access control have been used by many companies to account for high management overheads. With Big Data, however, user access control solutions have evolved to take a more policy-based approach. Through user-based and role-based settings, access can now be automated to a high degree. Multiple levels of user control and multilayered administration settings can be used to provide better granular protection of Big Data.
- Intrusion Detection: The distributed nature of Big Data storage architectures can make them particularly vulnerable to intrusion. Employing intrusion prevention systems can help minimize the chance of intrusions. Should an intrusion attempt still get through, intrusion detection systems can help detect the anomaly and quarantine the affected data to limit the impact.
- Robust Encryption: Modern encryption solutions meant for use in Big Data scenarios can handle huge data volumes and protect data whether at rest or in motion. They can also process structured, semi-structured, and unstructured data stored in relational and non-relational database management systems.
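The role-based access control approach described under User Access Control can be sketched in a few lines. The roles, users, and permissions below are invented for illustration; real policy engines add hierarchies, constraints, and auditing.

```python
# Minimal sketch of policy-based, role-based access control (RBAC):
# permissions are attached to roles and roles to users, so access
# decisions follow policy rather than per-user flags.

ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "engineer": {"read", "write"},
    "admin": {"read", "write", "grant"},
}

USER_ROLES = {
    "alice": {"engineer"},
    "bob": {"analyst"},
}

def is_allowed(user, action):
    """True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "write"))  # True
print(is_allowed("bob", "write"))    # False
```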
https://www.protecto.ai/big-data-safety-challenges/
Unauthorised access to personally identifiable information is one of the most pressing concerns facing both consumers and businesses in 2021. With data breaches presenting an array of consequences to businesses, and data subjects continuing to disperse sensitive data across the internet, the importance of protecting data is well and truly realised.

As businesses continue to install strategies to prevent unauthorised access, as well as maintaining compliance with data protection laws, the topics of data privacy and data security remain ever-present. However, with no clear universal definition of data privacy, even within the European Union's General Data Protection Regulation (GDPR), these two terms are often substituted for one another, leading to confusion, ambiguity, and wasted resources. In this two-part series, we examine the differences between data privacy and data security, as well as the core techniques currently employed in data security measures, the challenges that data privacy faces, and how our DataSecOps platform can help.

Data Security vs Privacy: The fundamental differences

While data privacy and data security both serve the same goal of protecting sensitive and personally identifiable information, they achieve it in distinctly different ways. Data privacy ensures that sensitive information is correctly interacted with, managed, retained, removed, and stored under data protection laws. Data security, on the other hand, ensures that sensitive data resides in secure locations, with security systems and techniques in place such as data masking and encryption. In doing so, data security strategies aim to reduce the chances of damaging cyber attacks and breaches occurring.

Understanding data security

Data security primarily aims to protect the personal data of subjects from any unauthorised access, attack, or exploitation. It accomplishes this by introducing additional strategies and techniques into a business's architecture.
Many data security measures strive to eliminate the possibility of human error providing unwanted entry points into a secure framework - weaknesses that attacks such as phishing emails seek to exploit. In 2021, human error is responsible for an estimated 88% of data breaches, therefore limiting this potential is a necessity for businesses of any size. Some examples of these measures include:
- User Entity Behavioural Analysis

These techniques are being realised as increasingly necessary as cyber attacks continue to target businesses of any size with equal intent. Last year, the probability of a small business being targeted by a cyber attack rose to 47%, while the average global cost of a data breach continued to rise to GBP 2.78 million. This emphasis on integrating additional security measures defines the role of data security within an organisation as being entirely separate from the role of data privacy.

In our second part, we'll examine more closely the role of data privacy, as well as the current challenges affecting the data privacy landscape and how our DataSecOps platform can help. To learn more, why not visit our blog for the latest insights and discussions on pressing data privacy issues.
<urn:uuid:44f8acb2-07c5-45b4-9824-c218a6341e4a>
CC-MAIN-2022-40
https://www.exate.com/post/data-privacy-and-data-security-understanding-the-difference-part-one
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00314.warc.gz
en
0.917014
606
2.96875
3
Plant pests and diseases that affect food crops can cause significant losses to farmers and threaten food security. Fera is supporting farmers to make informed decisions about their use of pesticides.

The spread of pests and diseases has increased dramatically in recent years. Globalisation, trade and climate change, as well as reduced resilience in production systems due to decades of agricultural intensification, have all played a part [1].

Fera, formerly the Food and Environment Research Agency, is a joint private/public sector venture between Capita and the UK Government. It has been leading a project funded by Innovate UK on behalf of the Crop Health and Protection Centre (CHAP), to develop a new decision support service to assist farmers in making decisions about sustainable use of pesticides in farming.

The new web-based platform, called CropMonitor Pro, will provide users with summary risk forecasts for all pests and diseases which are a threat to crops of wheat or oilseed rape during the season. Based on a sophisticated modelling framework using live weather data feeds and novel risk algorithms, the system will quantify daily risks of pests and diseases and provide guidance on whether a spray is required. In addition, the platform can provide support for decisions on optimal spray timing and generate lists of approved products available, based on the specific threats to the crop.

Using well-timed sprays to control pests and diseases only when necessary – and the correct product and dose – will both increase yields and reduce pesticide use. This maximises food security while minimising environmental impact and risks of development of pesticide resistance.

The CropMonitor platform also uses the latest in-field diagnostic technologies, such as automated spore capture and detection. These technologies provide very early warnings for pathogen spore incursion into crops before any infection or damage can occur.
They also provide additional evidence to promote the use of well-timed sprays, which can then reduce the need for subsequent sprays. CropMonitor is also trialling new sensor technologies including a new WiFi-enabled soil moisture and temperature sensor, which can inform prediction of risks from soil-borne pests and diseases and drought stress to crops. These devices also have a role in supporting risk assessments for flooding and run-off after rainfall, which can lead to contamination of water sources with pesticide and/or fertiliser residues.

Capita Green Week

This year we're holding Capita's annual Green Week to coincide with the United Nations' World Environment Day (5 June) – an international day that encourages us all to do something to take care of the earth. The theme for this year's World Environment Day is 'Air Pollution', so during Green Week we aim to help improve our own understanding of air pollution and showcase some of the work we do with clients to reduce it, protect our environment and measure and monitor air quality.
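Returning to the risk-forecasting idea: weather-driven pest models often work by accumulating heat over time. A minimal sketch of that style of model is a growing-degree-day accumulator with a risk threshold. The base temperature and threshold here are invented for illustration; CropMonitor's actual algorithms are far more sophisticated and are not public:

```python
def degree_days(daily_temps, base=10.0):
    """Accumulate growing degree-days: heat above a base temperature."""
    return sum(max(0.0, t - base) for t in daily_temps)

def pest_risk(daily_temps, threshold=50.0, base=10.0):
    """Classify risk by accumulated degree-days (threshold is illustrative)."""
    dd = degree_days(daily_temps, base)
    if dd >= threshold:
        return "high"
    elif dd >= threshold * 0.6:
        return "moderate"
    return "low"

week = [12.5, 14.0, 16.5, 18.0, 15.5, 13.0, 17.5]  # mean daily temperatures, degrees C
print(degree_days(week), pest_risk(week))
```

A decision-support platform would feed such a model with live weather data and only recommend a spray when the risk class (combined with in-field diagnostics) justifies it.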
<urn:uuid:42fef60c-5498-4775-9698-11792d219a8e>
CC-MAIN-2022-40
https://www.capita.com/news/supporting-farmers-minimise-environmental-impact-crop-production
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00314.warc.gz
en
0.939699
613
3.203125
3
Vulnerability management tools are designed to scan networks, computing systems, and software programs for exploitable weaknesses. Upon detection of weaknesses, the tool either suggests or initiates remediation actions. The goal is to reduce the potential for a successful cyberattack.

Vulnerability management tools approach security differently than firewalls, anti-malware software, intrusion detection systems (IDS), and antivirus tools—these tools are built to manage attacks on the network as they occur. Vulnerability management tools, on the other hand, look for potential issues and fix them as needed to mitigate potential attacks.

Vulnerability management tools assess the network using IP scanners, network and port scanners, and more. Next, these tools prioritize issues to ensure that the most critical weaknesses are fixed first, and suggest practical remediation steps.

There are three common deployment models of vulnerability management tools:
- On-premise software programs

Whatever the deployment model, most of these tools provide a web-based console that can configure the product to scan a range of IP addresses, web applications, or specific URLs. The broader the scan, the longer it will take to complete. Because vulnerability scanners have complex configuration, they typically come with preconfigured scan modes, which you can use as is or modify to your needs. You can also schedule automated scans on a regular basis.

Vulnerability management tools typically perform two types of scans:
- Unauthenticated scans—access systems on the network without logging in, identifying issues like open ports, unsecure services, and operating system and hardware versions.
- Authenticated scans—these require inputting credentials to the vulnerability scanner, and are more resource intensive, possibly affecting performance on scanned systems. These scans can provide more information about vulnerabilities, including those that affect logged-in users.
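At its core, the unauthenticated scan described above probes systems without credentials, and open ports are its most basic finding. A minimal sketch of that idea is a TCP connect scan in plain Python (real scanners add service fingerprinting, SYN scanning, rate control, and much more):

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of ports that accept a TCP connection (i.e. are open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Probe a few well-known ports on the local machine
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Only scan hosts you own or are authorized to test; port scanning third-party systems without permission may be illegal.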
It is important to realize that vulnerability scanners are the most effective when run on a regular basis:
- The first scan should cover as many resources on the network as possible and should be as deep as possible. This establishes an initial baseline of vulnerabilities.
- The following scans can be less comprehensive, and can help show trends in different parts of the network or different types of vulnerabilities.
- After a remediation effort, it is important to rescan the resources to verify that vulnerabilities are really resolved.

Like antivirus scans, the data gained by vulnerability scans is only useful when it is up to date. Most organizations should run at least a daily scan of vulnerabilities.

Another important function of vulnerability management tools is that they enable active exploitation. Many of these tools let you not only identify a vulnerability, but actually try to exploit it like a hacker would, in a safe manner and without disrupting operations. This can provide much more information about the extent of the vulnerability and its business impact.

Features of Vulnerability Management Tools

Here are common features you should look for in modern vulnerability management solutions.
- Dynamic discovery and inventory—ability to discover hosts and IT assets in traditional networks, cloud networks, containerized and serverless environments, and alert when new assets are created in the environment. Solutions should be able to identify device types, firmware, operating systems, ports, running services, and certificates.
- Vulnerability scanning—ability to scan any type of endpoint, including managed, unmanaged (bring your own device), cloud-based, internet of things (IoT), and container endpoints. Advanced solutions can scan common business applications for vulnerabilities and configuration weaknesses.
- Identify risky assets—ability to scan the network perimeter, virtual machines, cloud environments, and containerized applications for vulnerable access and entry points. These could include web servers, unsecured hosts, and network devices.
- Identifying unpatched systems—ability to identify which systems on the network do not have all the necessary security updates applied.
- Prioritizing vulnerabilities—ability to map out the network, indicate where vulnerabilities are discovered, their CVE status, the severity and business impact of each vulnerability in each asset, and provide remediation instructions.
- Support for specific attack vectors—protection against important threat vectors such as phishing, ransomware, zero-day attacks, supply chain attacks, and fileless attacks.
- Real-time monitoring and analysis—continuous monitoring and alerting when new vulnerabilities are discovered in any attack surface.
- Artificial intelligence and machine learning (AI/ML)—analyzing data to detect anomalous configuration changes and system behavior that may not match a known vulnerability, but may expose the system to threats. AI/ML is also used to analyze threat intelligence sources and use them to discover additional vulnerabilities.
- Remediation support—at a minimum, providing actionable guidelines for remediating vulnerabilities. Advanced solutions can support auto-remediation by applying a patch, isolating a vulnerable system, or integrating with other security systems such as firewalls and patch management.

Top 5 Vulnerability Management Tools

Nmap

Nmap is an open-source vulnerability scanner, which can rapidly scan entire networks, and identify routing configurations, firewall rules, and port and services configuration. Nmap is a bit difficult to use—its primary interface is a command line and it has no visual UI. A major advantage of Nmap is that it lets you run custom scripts to scan for specific issues in your environment.
Main features include:
- Advanced network mapping—handles IP filters, firewall rules, routers, and other network equipment.
- TCP and UDP port scanning—scans all ports on the network to identify security issues.
- Large community—supported by a sizable open source community with an active Facebook page and Twitter channel.
- Covers most platforms—works with almost all operating systems including Linux, Windows, macOS, FreeBSD, Solaris, IRIX, and HP-UX.

ThreatMapper

ThreatMapper is another open-source vulnerability management tool that identifies vulnerabilities and bugs in running hosts, virtual machines, containers, container images, and repositories. It supports cloud environments, Docker, and Kubernetes. ThreatMapper provides advanced vulnerability prioritization, letting you filter vulnerabilities by risk of exploitation, attack technique, attack surface, and other criteria.

Main features include:
- Broad vulnerability database—uses data from multiple CVE and CVSS repositories.
- Visual UI—provides a graphical console that lets you view machines, VMs, and containers, perform on-demand scans, and view vulnerability scoring.
- Custom-built sensors—provides probes that can collect vulnerability data from Kubernetes, virtual machines, bare metal machines, and cloud services like Amazon Fargate.

OSPd

OSPd is a command-line-based system that lets you develop your own vulnerability scanners using scripts. It is highly customizable and uses the Open Scanner Protocol (OSP). Deployment requires Python 3.4 or higher and multiple dependencies.

Main features include:
- Leveraging existing scanners—download scanner wrappers from open-source repositories.
- Writing new scanners—lets you write new scanner wrappers and deploy them to your environment.

Watchdog

Watchdog is not a single solution, but a combination of several open source security tools. You provide a list of domains or IPs, and the solution can identify open services and ports for all the endpoints it can find.
It then maps this information to a CVE database to identify vulnerabilities.

Main features include:
- Performs fast network scans for hundreds of domains, IP addresses, or IP ranges.
- Leverages multiple open source web application vulnerability scanners: Nmap, Google Skipfish, Wapiti, BuiltWith, Phantalyzer, and Wappalyzer.
- Analyzes the technology stack of each target system to see if it has known CVEs.
- Leverages multiple vulnerability databases including NVD CVE, CWE, CAPEC, D2SEC, and MITRE Reference Key/Maps.

Wireshark

Wireshark lets you analyze network traffic, capturing packet data and allowing you to visualize it in a graphical interface. It is very useful in examining and resolving security issues related to attackers probing the network from outside, or already inside the network.

Main features include:
- Supports multiple network protocols including Ethernet, ATM, and token ring.
- Lets you filter and analyze traffic data flexibly, and supports import and export.
- Provides powerful command-line switches that let you define what network data you want to capture.
- Built-in encryption/decryption for inspection of secure channels.
- Support for common compliance reports.
- Performs ongoing monitoring of networks and servers and sends notifications.
- Enables developers to automatically add modelines to files.

Vulnerability Management with Cynet

Cynet 360 is the world's first Autonomous Breach Protection platform that natively integrates the endpoint, network and user attack prevention & detection of XDR with the automated investigation and remediation capabilities of SOAR, backed by a 24/7 world-class MDR service. End to end, fully automated breach protection is now within reach of any organization, regardless of security team size and skill level.

Cynet provides vulnerability assessment, identifying vulnerable systems and apps that expose environments to exploitation. Maintaining a patching routine reduces this exposure, preventing attackers from using most known exploits.
Cynet enables easy discovery of unpatched vulnerabilities and prioritizes the severity of vulnerabilities. In addition, Cynet 360 provides a range of security capabilities to secure modern IT environments.

XDR Layer: End-to-End Prevention & Detection
- Endpoint protection – multilayered protection against malware, ransomware, exploits and fileless attacks
- Network protection – protecting against scanning attacks, MITM, lateral movement and data exfiltration
- User protection – preset behavior rules coupled with dynamic behavior profiling to detect malicious anomalies
- Deception – wide array of network, user, file decoys to lure advanced attackers into revealing their hidden presence

SOAR Layer: Response Automation
- Investigation – automated root cause and impact analysis
- Findings – actionable conclusions on the attack's origin and its affected entities
- Remediation – elimination of malicious presence, activity and infrastructure across user, network and endpoint attacks
- Visualization – intuitive flow layout of the attack and the automated response flow

MDR Layer: Expert Monitoring and Oversight
- Alert monitoring – first line of defense against incoming alerts, prioritizing and notifying customer on critical events
- Attack investigation – detailed analysis reports on the attacks that targeted the customer
- Proactive threat hunting – search for malicious artifacts and IoC within the customer's environment
- Incident response guidance – remote assistance in isolation and removal of malicious infrastructure, presence and activity

Cynet 360 can be deployed across thousands of endpoints in less than two hours. It can be immediately used to uncover advanced threats and then perform automatic or manual remediation, disrupt malicious activity and minimize damage caused by attacks.
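The vulnerability-prioritization capability described throughout this article reduces to a simple idea: rank findings by severity weighted by how much the affected asset matters. A minimal sketch of that idea follows; the scoring scheme and field names are assumptions for illustration, not any vendor's actual algorithm (the CVE identifiers are real, well-known vulnerabilities):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # CVSS base score, 0.0 to 10.0
    asset_criticality: int  # 1 (low) to 5 (business-critical), assigned by the organization

def priority(f: Finding) -> float:
    """Naive risk score: severity weighted by asset importance."""
    return f.cvss * f.asset_criticality

findings = [
    Finding("CVE-2021-44228", 10.0, 5),  # Log4Shell on a business-critical asset
    Finding("CVE-2020-1472", 10.0, 2),   # Zerologon on a low-value host
    Finding("CVE-2019-0708", 9.8, 4),    # BlueKeep on an important server
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve_id}: risk {priority(f):.1f}")
```

Note how the same CVSS 10.0 vulnerability ranks very differently depending on the asset it sits on, which is exactly why tools prioritize by business impact rather than raw severity alone.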
<urn:uuid:35c2edb5-4d54-4100-acbf-c88fb63a5a34>
CC-MAIN-2022-40
https://www.cynet.com/initial-access-vectors/top-5-vulnerability-management-tools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00314.warc.gz
en
0.901777
2,214
2.703125
3
Small cell technology will play an integral role in the deployment of 5G networks. While macro cell deployments have rolled out to support consumer use cases, small cells will be deployed to support use cases for entertainment complexes, manufacturing facilities, healthcare organizations, and other enterprises. A recent study estimates that there will be 1.56 million private 5G small cells deployed by 2027, with the majority of deployments coming from enterprises.

But what is a 5G small cell, and how can enterprises use them to support new use cases and ensure coverage and capacity where they need it most? Here is a brief introduction to small cell technology and its potential applications, to ensure interested enterprises leverage the timely opportunity to develop a strategy for accelerated deployment.

What is Small Cell Technology?

A small cell is a miniature wireless network base station with a low radio frequency power output and range. These nodes utilize radio equipment to receive and transmit data to smaller areas using low-, mid-, and high-band spectrum. Small cells deployed in 5G networks receive their connection from traditional cell towers before transmitting data from one cell to another in a relay, ensuring signals carry over large distances. The small antennas are highly directional and leverage a technique called beamforming to direct coverage to specific areas around the site. Small cells require a power source and backhaul - fiber, wired, or microwave - to connect with 5G networks.

As the name suggests, small cells are relatively small in size as they don't require much power and can be deployed for indoor or outdoor use in licensed, shared, or unlicensed spectrum. Due to their size, they are easy to conceal and camouflage to blend with the natural environment. Base stations can be mounted on walls for indoor applications and placed on streetlights and utility poles for outdoor applications.
Compared to large conventional cell towers, small cells are quick to install and are more affordable. Small cell networks (SCNs) will play a key role in 5G infrastructure as the technology is crucial for network densification and ensuring superior coverage and signal penetration. In the 5G era, cell towers will be restricted to lower-level frequencies. SCNs will be used to support high-frequency millimeter wave transmissions and to complement the macro network by filling coverage gaps and adding targeted capacity.

What are the Different Types of 5G Small Cells?

Businesses need to be aware of the different types of 5G small cells as they have different coverage limits, power levels, and use cases. Compared to previous generations, small cells in 5G networks will be deployed in a wider range of scenarios with varied architectures. Here are the three main types of small cells.
- Femtocells - Used primarily as a low-cost solution to enhancing in-building coverage. Femtocells have a coverage range of 30-165 feet, support 8-16 users, and use a wired or fiber backhaul connection.
- Picocells - Used to provide extended indoor and outdoor coverage for small enterprises such as higher education, offices, hospitals, and shopping complexes. Picocells cover 330-820 feet, support 32-64 users, and use a wired or fiber backhaul connection.
- Microcells - Used to provide coverage to a targeted area such as hotels, malls, and unique spaces within transportation hubs. Microcells cover up to 1.5 miles, support 200 simultaneous users, and use a wired, fiber, or microwave backhaul connection. Microcells are more expensive than femtocells and picocells.

What are Potential Use Cases for 5G Small Cells?

From urban densification to extending private networks within enterprises, SCNs have many potential applications as they are ideal for use in places where cell towers can't reach.
They provide businesses an opportunity to harness the power of a 5G network with customization and precision for indoor and outdoor environments. Here are several use cases to provide businesses a window into how they can benefit from 5G small cell deployment.

Enterprises and commercial facilities like business parks, waterfront developments, office buildings, apartment complexes, and shopping malls can rely on SCNs to deliver reliable and secure in-building cellular connectivity for voice and data. When commercial facilities deploy their own 5G SCN, they can keep their data readily accessible and on-premise, ensuring a higher degree of security. In terms of cost, SCNs are a cost-effective workaround to 5G deployment, allowing businesses to forgo the need to build expensive macro sites.

Entertainment Complexes and Venues

Large venues have a host of connectivity challenges as they regularly experience significant demand peaks for a short period of time. If an entertainment complex needs more capacity as network demand grows, they can pinpoint where coverage is needed and then deploy additional nodes strategically to provide improved targeted coverage.

5G SCNs will help enable Industry 4.0 capabilities. Industry 4.0 refers to the 4th industrial revolution, which focuses heavily on automation, interconnectivity, machine learning, and real-time data. Manufacturing facilities with 5G capabilities have the capacity and latency requirements to support use cases including real-time asset tracking, robotic factories, autonomous guided vehicles, and more.

The healthcare industry can deploy a SCN to alleviate several major pain points, including secure access to patient records and coverage for isolated indoor areas. Small cell-supported networks are secure and reliable, making them ideal for organizations that handle highly sensitive data.
Get 5G-Ready with a Reliable Small Cell Solution

Small cells promise a cost-effective solution for filling coverage gaps, increasing bandwidth, and getting networks ready for 5G without the need to build more expensive macro sites. Business owners and facility managers that understand the underlying technology and the potential use cases of 5G small cells are better positioned to partner with a premier integrator for rapid deployment. As the importance of SCNs continues to become more apparent, businesses need to proactively assess emerging deployment opportunities and develop a strategy for implementation to position their organization for the future of communications technology.
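As a back-of-the-envelope illustration of the coverage figures quoted for the three cell types, one can estimate how many small cells of a given type a site might need. This is a crude circle-area approximation with an assumed overlap factor; real RF planning must account for walls, interference, spectrum, and beamforming:

```python
import math

# Approximate coverage radii in feet, taken from the ranges quoted above
COVERAGE_RADIUS_FT = {"femtocell": 165, "picocell": 820, "microcell": 7920}  # 1.5 mi = 7920 ft

def cells_needed(site_area_sqft: float, cell_type: str, overlap: float = 0.8) -> int:
    """Estimate cell count; `overlap` discounts each cell's area for hand-off overlap."""
    r = COVERAGE_RADIUS_FT[cell_type]
    effective_area = math.pi * r * r * overlap
    return math.ceil(site_area_sqft / effective_area)

# A hypothetical 200,000 sq ft warehouse floor:
print(cells_needed(200_000, "femtocell"))
print(cells_needed(200_000, "picocell"))
```

Even this toy model captures the trade-off the article describes: many cheap femtocells versus a handful of larger, costlier cells for the same footprint.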
<urn:uuid:b90bc3ce-b94c-4747-9f31-fed968f520bb>
CC-MAIN-2022-40
https://www.anscorporate.com/blog/use-cases-for-5g-small-cell-deployment
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00314.warc.gz
en
0.931373
1,221
3.34375
3
As we said in part 1 of the "Edge to Core to Cloud" series, fully integrated infrastructure is a work in progress that's emerging through adoption of foundational technologies. But by examining particular use cases, we can easily shine a bright light on just how necessary, and advantageous, integrated infrastructures could be.

Let's dwell on AI for a moment. Artificial intelligence solutions and deep learning algorithms rely on massive datasets. For example, autonomous cars can collect hundreds of terabytes per day. These datasets originate at the edge, where they're partially processed, crunched, and shrunk. Then limited datasets are transferred to the core or cloud, analyzed, and often used for training and inference before being archived. If an organization has disparate, disconnected, incompatible pools of storage, AI processes are blocked. That's why AI applications that deliver business value benefit from a comprehensive, seamless architecture with data management that extends across edge, core, and cloud environments. By minimizing tedious data housekeeping and custom code for multiple APIs and integrations, data scientists can move forward faster.

Preparing for the upcoming "data explosion"

There's another advantage of an integrated storage infrastructure. It allows you to better cope with the data onslaught you're experiencing. We mentioned that autonomous cars can produce hundreds of terabytes of data per day, and they're hardly the only workload that puts pressure on storage infrastructure. Many organizations already find that their storage demands are spiraling out of control. This problem is exacerbated by disconnected pools of storage, often containing duplicate terabytes of information. Not every storage technology is suited for rapidly increasing data. Data reduction technologies will be key at any point in the data stream. Edge computing environments already benefit from data reduction technologies like edge gateways.
Storage technologies like compression and deduplication, especially if they span multiple platforms or places, become a boon to addressing data growth. It's important to understand how your existing technologies can or can't address your data growth. Some solutions—like throwing additional cloud storage at the problem—aren't sustainable long-term approaches, because costs get out of control. It's also important to evaluate emerging technologies that can help.

Storing and managing huge volumes of data

To begin coping with the amount of data that will result from edge computing, understand that everything you're doing is to attain agility. Storing and managing your data must serve the needs of customers, who expect low latency, high throughput, and distinctive services. Anything that blocks agility has to be rooted out. Some of the principles behind agility include:
- Automation. You can't sustain manual approaches to provisioning, data migration, data protection, and other functions when infrastructure stores hundreds to thousands of terabytes. The more self-diagnosing, self-healing, and self-restarting a storage system can do, the more likely it is to be the right fit for a massively scalable collection of datasets.
- Availability. Availability has to be a given: 99.9% uptime might not be enough when your customers want their financial records now.
- Management. As storage infrastructure expands, integrated visibility and control make a huge difference. Sometimes, especially at the edge, it's difficult (or impossible) to make changes to a storage system locally. Control over dozens or hundreds of different sites becomes a necessity.
- Security. Any storage technology must be secure. Breaches happen every day and threats are constant. Most mature storage technologies used in today's environments are built with encryption, access control, and secure multitenancy capabilities.
- Consistency. It's impossible to be agile without consistency.
Consistent features help keep a service functioning predictably in any location. Consistent programmability lets developers write an application once and have the flexibility to deploy it anywhere, with the same performance. Consistent security keeps data safe regardless of location. Consistent availability means that an edge failure doesn't affect core applications. And consistent automation improves data movement and data protection across the entire infrastructure.

Organizations are looking for a proven approach to storage that spans many requirements without compromising on essential capabilities. Using an excellent storage technology that runs wherever you need it—on vendor platforms, hyperconverged infrastructure, virtual machines, cloud resources, and in containers—offers you more flexibility in deploying the right storage in the right place at the right cost. This approach translates into accelerated development, better efficiency, and improved cost controls.

How to realize your vision of a fully integrated infrastructure

In part 3 of this series, we're going to explore what NetApp® has done to make the integrated storage infrastructure vision a reality. We're putting our efforts into rolling out a leading-edge approach for integrated storage infrastructure that spans core to edge to cloud—powering new approaches to thrill customers without compromising the bottom line. You can learn more by visiting our all-things-integration hub.
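The deduplication technique mentioned earlier works by storing each unique chunk of data only once and keeping a recipe of references. A minimal content-hash sketch follows, using fixed-size chunking for simplicity; production systems typically use variable-size, content-defined chunking and persist the chunk store:

```python
import hashlib

def dedup(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and store each unique chunk once.

    Returns (store, recipe): `store` maps chunk hashes to bytes, and
    `recipe` is the ordered list of hashes needed to rebuild the data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks are stored only once
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    return b"".join(store[d] for d in recipe)

data = b"A" * 4096 * 3 + b"B" * 4096  # three identical chunks, then one distinct chunk
store, recipe = dedup(data)
print(len(recipe), "chunks referenced,", len(store), "chunks actually stored")
```

Duplicate terabytes across disconnected storage pools, as described above, are exactly what this technique collapses: identical chunks hash to the same digest no matter which dataset they came from.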
<urn:uuid:3c863209-2bee-4057-bfc8-ac742bbe17fa>
CC-MAIN-2022-40
https://www.netapp.com/blog/practical-application-of-fully-integrated-infrastructure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00314.warc.gz
en
0.902946
1,042
2.515625
3
How DevOps Takes Advantage of Artificial Intelligence

The name "DevOps" was coined from the two departments that formerly operated in silos: development and operations. The everlasting loop in DevOps is a good way to highlight how the many phases of the process are connected. Even though a project seems to progress sequentially, the loop is essentially a metaphor for the need for constant collaboration and iteration. Specialized skills and technology must be employed throughout the DevOps lifecycle for both development and operational tasks. Constant group communication is the only way to maintain alignment, speed, and quality. Key aspects of the DevOps lifecycle include continuous integration and deployment (CI/CD) and the response to continuous feedback. An organization's goal for change and its general environment need specialized teams with specific structures to achieve DevOps.

DevOps Is the Next Generation of Artificial Intelligence

Machines are now capable of self-improvement and performing human-like tasks thanks to artificial intelligence (A.I.). Deep learning and natural language processing are used in many applications of artificial intelligence, where computers are trained to do specific jobs by analyzing massive data and identifying patterns.

This is how DevOps takes advantage of artificial intelligence: A.I. affects the way we work in significant ways, first by identifying the inefficiencies that are blocking us from moving forward, and then by finding innovative solutions to those inefficiencies - something today's cutting-edge technology makes possible. Methodologies for software development are being transformed by advances in artificial intelligence and machine learning. With the support of A.I., DevOps teams can test, build, deliver, and release software more rapidly. A.I.
has the potential to increase automation, solve issues quickly, and improve teamwork, to name a few benefits. A.I. can help DevOps teams recognize and foster creativity and innovation by enabling them to handle the volume, velocity, and unpredictable nature of their data. In turn, DevOps professionals can make it simpler to plan, test, deploy, and monitor A.I. systems. A.I. suggestions for new and inventive approaches to boosting process efficiency may assist DevOps teams in speeding up software development and better managing software projects. With DevOps, systems have a longer useful life, and the ability to handle the frequent addition of new features, fixes, and upgrades that come with running a business may be improved.

Effects of Using A.I. To Improve DevOps

A.I. systems may now be built with little or no human interaction, and because they work in a rule-based, human-controlled environment, DevOps teams may benefit tremendously from automated solutions. Shorter development and operating cycles, combined with user-friendly features, improve overall performance. Among the most common markers of success in DevOps are decreased burn rates, fewer errors, and shorter time to market. Hardware integration and deployment are also included in DevOps; to be of any value, metrics like the number of connections and the time elapsed between them must be assessed and correlated.

DevOps may benefit from the use of artificial intelligence in several ways. Manual activity can only go so far before it becomes too complicated and is better handled by artificial intelligence, and an A.I. system with deeper expertise in this area can enforce better operational limits. Artificial intelligence may also give investigators a more comprehensive picture of events: an engineer can see essential data from a specific piece of equipment at a glance.
In order to manually connect and evaluate data, engineers are now needed to move between a wide variety of devices. When it comes to complex operations like sorting alerts, finding the root causes, and investigating unusual activity, time and data are required. Following a set process might make your search for information much more accessible. Incorporate AI Into Your Business Plan In order to stay relevant in today’s market, you may like to consider incorporating A.I. into your business plan. Artificial Intelligence (A.I.) has become a need for the DevOps team as a consequence. Governments throughout the globe are concerned about A.I.’s potential role as a driving factor in the virtual economy. With the DevOps paradigm, agencies may access cutting-edge technology with agility and variety because of the passionate relationship between the improvement team and I.T. operations. A complete redesign or cutting-edge technology may be the only solution if there is a lack of understanding of DevOps and its implementation. When DevOps is correctly applied in a team, new frameworks, control ideas, and positive generation tools are needed. Our existing software development team is now a DevOps development team, thanks to the explosion of DevOps adoption. However, the benefits of DevOps cannot be realized entirely, even with new technology or a committed team. Some believe that DevOps brings together the efforts of the development and operations teams in a single endeavor. A software development team can produce, test, and release their work more quickly and reliably than they ever could without it with DevOps. Because of this, operational efficiency is strongly tied to the team’s ability to meet uptime goals and keep the number of issues resolved to a minimum. Time to market and deployment may be shortened by learning about the improvement team’s preferences.
Risk Management is the process of identifying, analyzing, assessing, and communicating risk, and accepting, avoiding, transferring, or controlling it to an acceptable level, considering the associated costs and benefits of any actions taken. This includes: 1) conducting a risk assessment; 2) implementing strategies to mitigate risks; 3) continuous monitoring of risk over time; and 4) documenting the overall risk management program. Related Terms: Enterprise Risk Management, Integrated Risk Management, Risk Source: DHS Risk Lexicon and Adapted from: CNSSI 4009, NIST SP 800-53 Rev 4 If you would like to learn more about this topic, watch this short video: CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like: - Cybrary (Cyber Library) - Press Releases - Instructional Videos (HowTo) – very helpful for our SuperUsers!
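The four-step program defined above (assess, mitigate, monitor, document) can be sketched as a toy risk register. This is an illustrative model only: the likelihood-times-impact scoring and the acceptance threshold are assumptions for the example, not part of the source definition.

```python
# Toy risk register: assess -> plan response -> document.
# Scoring model (likelihood 1-5 x impact 1-5) and the acceptance
# threshold are illustrative assumptions.

def assess(risks):
    """Step 1: score each risk as likelihood x impact."""
    return {name: l * i for name, (l, i) in risks.items()}

def plan(scores, accept_below=6):
    """Step 2: choose a response; anything under the threshold is accepted."""
    return {name: ("accept" if s < accept_below else "mitigate")
            for name, s in scores.items()}

def document(risks, scores, decisions):
    """Step 4: record the overall program as log lines."""
    return [f"{name}: score={scores[name]} decision={decisions[name]}"
            for name in risks]

risks = {"phishing": (4, 3), "flood": (1, 5)}
scores = assess(risks)
decisions = plan(scores)
log = document(risks, scores, decisions)
```

Step 3 (continuous monitoring) would simply mean re-running `assess` as conditions change and comparing the new scores against the documented baseline.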
Are you looking for a free way to speed up your internet and gain some extra privacy in the process? Keep reading, because Cloudflare (the Web Performance & Security Company) is offering a free new DNS service. And it helped me improve the speed of my DNS lookups. What is DNS? DNS is short for Domain Name System. It is an internet protocol that allows user systems to use domain names/URLs to identify a web server rather than inputting the actual IP address of the server. For example, Malwarebytes.com corresponds to a numeric IP address, but rather than typing that address into your browser, you just type ‘malwarebytes.com,’ and your system reaches out to a ‘DNS Server’ which has a list of domain names and their corresponding IP addresses, delivering the right one upon request to the user system. Unfortunately, if a popular DNS server is taken down or in some way disrupted, many users are unable to reach their favorite websites because, without the IP address of the web server, your system cannot find the site. When trying to explain the concept of DNS name resolution, I think that finding a phone number for a certain person is a good analogy. There are several ways to find a person’s phone number, and the same is true for resolving the IP address that belongs to a domain name. Which DNS servers am I using now? If you have to ask yourself that question, there’s a big chance that you are using the DNS service provided by your internet provider. And while some of those are quite good, others are deplorable. Those that have looked into changing their DNS servers have probably ended up using Google’s public DNS, or if they were also interested in a web filter, they might have ended up using Cisco’s OpenDNS. IMHO those are the two most popular alternatives to the ones provided by ISPs around the globe, but many more are available. Why would I change to Cloudflare’s? We are not saying you should, but their claims sound very promising. 
Even if the differences in speed and privacy are not directly noticeable, you may be convinced by these arguments: - Cloudflare’s service is 5 times faster than the average ISP’s (8 milliseconds compared to 70). - ISPs do not always use strong encryption on their DNS or support DNSSEC, which makes their DNS queries vulnerable to data breaches and exposes users to threats like man-in-the-middle attacks. - Many companies collect data from their DNS customers to use for commercial purposes. Cloudflare promises not to mine any user data. Logs are kept for 24 hours for debugging purposes, then they are purged. - Query name minimization diminishes privacy leakage by only sending minimal query names to authoritative DNS servers. That last one may need some explanation. The less information the DNS servers send to each other to resolve your DNS query, the smaller the amount of data that would be revealed in case of a leak or breach. This is why servers that use this method send each other only the minimum of information that the receiving server needs. How to change your DNS servers? The method to change your DNS servers depends very much on the level at which you want to change them and on the operating system you are using. If you have tried the DNS service and decide that you like it, it might be advisable to change the DNS servers at the router level, so you don’t have to do it for each device separately. To do this successfully, your computers and devices need to be set up for DHCP, or they will not even look at the router for DNS information. Lifewire published a guide for the most common routers that might prove to be handy. For mobile devices, be aware that they will change DNS servers when they are no longer using your router. At the device level, the OS is the deciding factor in how you can change the DNS servers. 
- How to Change DNS Servers in Windows - How to Change Your Mac's DNS Settings - Change Your DNS Settings on iPhone, iPod Touch, and iPad - How to Change the DNS for an Android - Change DNS settings on Linux Testing the difference To check whether switching DNS services would give you a speed improvement, you can use a free tool called NameBench. Background information: the NameBench tool is offered by Google and was launched around the same time that Google started offering their free DNS service. NameBench can be downloaded from Google Code (there are suitable versions for several operating systems) and after installation, you can specify the DNS servers that you would like it to test. - Google Public DNS: 8.8.8.8 - Cloudflare DNS: 1.1.1.1 - OpenDNS: 208.67.222.222 It does help to set “Your location,” but my laptop travels a lot, so I skipped that. Then “Start Benchmark” and be patient for a while, because it may take a few minutes before the application is done testing (it took almost half an hour on my laptop). The results will have a layout similar to this one: While your results may be very different from mine, you can tell that it can definitely pay off to do this test if you are looking for a speed improvement. So, a speed improvement of 13.5% and a promise of added privacy. What am I going to do? Well, at least I’ll try it for a while to see if it makes a real difference. And note that I was already using an alternative to my provider’s DNS service, which was terrible to begin with. For most internet users it is worth looking into which DNS service works best for them. Be it for a speed improvement or some of the added benefits that these DNS services have to offer, like additional privacy or parental controls. But most will keep on using the ones provided by their ISP because they just can’t be bothered or find it too complicated to change the settings. 
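A NameBench-style comparison boils down to timing repeated lookups against each candidate resolver and averaging. The sketch below times whatever resolver function you pass in; in real use that function would query an actual server such as 8.8.8.8 or 1.1.1.1 (for example via a DNS library), but here the resolvers are hypothetical stand-ins so the harness itself is self-contained.

```python
import time

def benchmark(resolve, hostnames, runs=3):
    """Return the mean lookup time in milliseconds for `resolve`."""
    total = 0.0
    for _ in range(runs):
        for host in hostnames:
            start = time.perf_counter()
            resolve(host)  # in real use: an actual DNS query
            total += (time.perf_counter() - start) * 1000.0
    return total / (runs * len(hostnames))

def rank(resolvers, hostnames):
    """Order (name, resolver) pairs fastest-first, as NameBench does."""
    return sorted(resolvers, key=lambda pair: benchmark(pair[1], hostnames))

# Hypothetical resolvers standing in for real DNS servers:
fast = ("cloudflare-like", lambda host: "192.0.2.1")
slow = ("slow-isp-like", lambda host: time.sleep(0.002) or "192.0.2.2")
ordering = rank([slow, fast], ["example.com", "example.org"])
```

The `ordering` list puts the fastest candidate first, which is essentially the table NameBench prints at the end of a run.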
We do our best to encourage our readers to make informed choices and decide for themselves who they want to trust with the data that can be derived from DNS lookups.
Using the Image Recognition command Use this command to search for an image within a source image. - Double-click or drag the command to the Task Actions List pane. - Select the source image file from a folder or capture it from an application. This image can be standalone or contained within another image that is captured dynamically at run time. - Select Show Coordinates to capture and view the coordinates of the target image within the window. - Specify the wait time (in milliseconds) in the Wait field. - Select or capture the image that you want to click on during play time. You can capture the image from an application window or select it from a file. If you are using the command for a window, you also have the flexibility to position your click location relative to an image. This is useful when the target image is blurred, has some background noise, or is visible multiple times. - Select Image Occurrence when the target image can be found multiple times. You can insert a variable when you do not know the number of times the image might appear on the screen. Ensure you assign variables that support numeric values. - Select a click option: - Left Click - Right Click - Specify match percentage and tolerance. - Select one of the methods of comparison: - Monochrome with threshold - Optionally, select the Quick Test button to see the output without running the entire test. - Click Save.
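What "search for an image within a source image" with a match percentage means can be sketched with naive template matching. This is not Automation Anywhere's implementation, just an illustrative model in which images are 2D lists of pixel values and the match percentage is the fraction of template pixels that agree with the source.

```python
# Naive template matching with a match-percentage threshold (illustrative).

def match_percent(source, template, top, left):
    """Percentage of template pixels equal to the source at (top, left)."""
    th, tw = len(template), len(template[0])
    hits = sum(
        source[top + r][left + c] == template[r][c]
        for r in range(th) for c in range(tw)
    )
    return 100.0 * hits / (th * tw)

def find_image(source, template, min_match=95.0):
    """Return (row, col) of the first match at or above min_match, else None."""
    sh, sw = len(source), len(source[0])
    th, tw = len(template), len(template[0])
    for top in range(sh - th + 1):
        for left in range(sw - tw + 1):
            if match_percent(source, template, top, left) >= min_match:
                return (top, left)
    return None

source = [
    [0, 0, 0, 0],
    [0, 7, 8, 0],
    [0, 9, 7, 0],
]
template = [[7, 8], [9, 7]]
pos = find_image(source, template)
```

Lowering `min_match` corresponds to raising the tolerance setting: a blurred or noisy target can still be found even though not every pixel agrees.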
In a fast-moving world, system speed and efficiency is absolutely essential. We may take the ability to multitask effortlessly on our devices for granted, but this isn’t always a given. An overloaded RAM can result in sluggish system performance which leaves users frustrated and unable to switch seamlessly between applications. Whilst the temptation may be to attempt completely eliminating your RAM usage all together, this shouldn’t be your aim. Remember, RAM is there to be used. Instead, you should be seeking out and getting rid of unnecessary drains on your RAM, so that your system can work at peak efficiency. This article takes you through 11 effective ways to decrease your RAM usage, so you can get the most out of your device. What is RAM? Before we take a look at how to decrease your RAM usage, it’s helpful to know exactly what RAM is. RAM stands for ‘Random Access Memory’ and is your computer’s short term memory. The reason it’s called random access is because you can access the stored data in any order rather than sequentially. RAM is a temporary storage space where your computer stores data and information that you’re actively using for easy and quick access. RAM is the key to allowing you to multitask seamlessly across different functions and applications simultaneously. For this reason, RAM is involved in most of your computer’s functionality. This includes loading apps, browsing the internet and more. RAM is the opposite of your hard drive or SDD, which are your computer’s long-term memory. When you turn your computer off, anything stored in the RAM is wiped. How does RAM affect my computer performance? As a general rule, the more RAM you have, the smoother and faster your computer will run. In fact, RAM is one of the biggest factors that can negatively impact your computer’s performance. If you find that your device is running slower than you’d like, then there are a few things that you can do to try and decrease your RAM usage. 
11 ways to reduce your RAM usage Everything from open apps, junk files, and unnecessary background processes can drain your RAM and cause your computer to run slowly. Here are 11 things that you can do about it: Turn your device off and on The oldest trick in the book, and often the most simple and effective. Restarting your device will automatically clear your RAM, speeding up performance. Bear in mind, however, that this won’t increase your RAM’s overall capacity. Still, it’s good practice to regularly restart your computer to keep things running smoothly and make sure that your RAM isn’t getting clogged up with unnecessary data. Check which programs are draining your RAM Working out which programs are hogging your RAM doesn’t have to be a guessing game. Simply open up your Windows Task Manager or Mac Activity Monitor and you’ll be able to see which applications are the major culprits. You may find that an app you don’t actually use is running in the background but still using up your RAM. Cut down background apps If you discover that some applications are running silently in the background but are still using your RAM, you may want to consider uninstalling them. If you do occasionally need an application, make sure that it’s not set to open at startup, so that it doesn’t continue to run in the background whenever you use your computer. Use less resource-intensive apps Although not always possible, using lighter apps that don’t require as much RAM is a great way to decrease your RAM usage. This might mean opting for a lightweight application that offers the same functionality without being so resource intensive. For example, Photoshop is notorious for being highly resource-intensive. If you’re able to opt for a lighter program, this might benefit your computer’s overall performance. Close apps when you’re not using them Another quick and easy fix is to close and quit applications when you’re not using them. 
Being more stringent about which applications are running is good practice to free up your RAM. For example, we’re all guilty of leaving a browser tab open for later reference. Instead, why not bookmark the page and close the tab? Switch your browser Google Chrome is known to use significant RAM, so if you’re looking to reduce your RAM usage, switching to a different browser can be a quick and easy fix. Another tip is to remove any browser extensions that you don’t use. These are a drain on your RAM which are easy to get rid of. Clear your cache Your cache holds on to information, using RAM, allowing you to reload internet pages that you’ve visited previously. Although this means that you don’t have to download the page information again, thus saving you time when you’re browsing, it also uses up RAM. Clearing your cache can free up vital space and speed up your device. Keep your software updated For obvious reasons, ensuring your software is up to date is good practice generally. However, in terms of RAM, it’s essential because out-of-date apps can suffer from memory leaks which monopolize your RAM. Check for malware If you’ve made adjustments and are still finding that your RAM is at full capacity, then consider checking your device for malware. Fileless malware exists in your RAM and can consume a large portion of your device’s memory capacity. If you detect malware in your system, you can take the necessary steps to eliminate it from your device. Adjust virtual memory If you’re encountering notifications telling you that your computer is running low on virtual memory, you can easily address this and relieve pressure on your RAM. On a Windows device, find virtual memory under your advanced settings and manually set your initial and maximum size. If you’re on a Mac, navigate to the Memory panel in your Control Panel to make the same adjustments. 
Install more RAM If you’re looking to actually increase the overall capacity of your RAM, you can purchase replacement RAM and physically install it into your system. As long as you do your research, it’s not too difficult.
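The "check which programs are draining your RAM" step above boils down to what Task Manager and Activity Monitor do: sort processes by resident memory. The process list here is made up for illustration; a real script would gather it from the operating system (for example with a library such as psutil).

```python
# Rank processes by memory use, Task Manager style (illustrative data).

def top_consumers(processes, n=3):
    """Return the n largest RAM users as (name, MB) pairs, biggest first."""
    return sorted(processes, key=lambda p: p[1], reverse=True)[:n]

processes = [
    ("chrome", 1850),   # MB -- hypothetical figures
    ("editor", 420),
    ("music", 180),
    ("updater", 35),
]
worst = top_consumers(processes, n=2)
```

Whatever sits at the top of this list is the first candidate for closing, uninstalling, or removing from startup.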
For a stronger and more effective cybersecurity strategy, you may want to take advantage of data enrichment. Many businesses now include data enrichment as part of their overall cybersecurity strategy. They still use traditional methods to identify and prevent cyber threats. With data enrichment, though, they are better protected against cyber threats. Data Enrichment Defined Data enrichment is the process of supplementing existing data with new data. The new data “enriches” the existing data. Data enrichment is commonly used for marketing purposes. Business-to-business (B2B) service providers, for instance, will often harvest new data about prospective customers to enhance their existing data. But data enrichment isn’t limited to B2B marketing. You can use data enrichment for cybersecurity purposes as well. You can use data enrichment to detect cyber threats. Threat detection encompasses apps, tools and software that specifically look for potential threats on a computer or network. Most forms of threat detection involve the use of data. And you can enhance this data with new data. Another cybersecurity-related application for data enrichment is distributed denial-of-service (DDoS) protection. DDoS attacks can overwhelm your business’s network. They will consume bandwidth and system resources while causing performance issues such as slow speeds or frequent disconnects. All DDoS attacks involve traffic from a variety of devices. The devices are typically hijacked by a hacker and then used to carry out the DDoS attack on a network. With data enrichment, you can distinguish between legitimate devices and hijacked, DDoS-programmed devices more easily. Data enrichment can help you protect your business’s network from DDoS attacks. Failure to identify malware in a timely manner could spell disaster for your business’s information technology (IT) infrastructure. 
Depending on the type of malware, it may delete storage drives, lock your files or perform other malicious activities. Data enrichment, however, can help you identify malware. Different forms of malware have different signatures. A signature is a digital footprint. When you install antivirus software on your computer, it will look for known signatures. But antivirus software is only effective if it knows the signature of the malware that’s infected your computer. Data enrichment, though, can add new signatures to the antivirus software’s database. Data enrichment is a relatively new trend in cybersecurity. It involves supplementing existing data with new data. Including data enrichment in your cybersecurity strategy will provide stronger threat detection, DDoS protection and malware identification.
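The DDoS use case described above can be sketched as follows: raw connection records carry only an IP address, and enrichment supplements each record with what a threat-intelligence feed knows about that address, so hijacked, DDoS-programmed devices can be told apart from legitimate ones. The intel table and IP addresses here are made up for the example.

```python
# Enrich raw connection records with a (hypothetical) threat-intel lookup.

THREAT_INTEL = {
    "203.0.113.7":  {"botnet": True,  "country": "XX"},
    "198.51.100.9": {"botnet": False, "country": "US"},
}

def enrich(connections, intel):
    """Supplement each record with whatever the intel feed knows about it."""
    out = []
    for conn in connections:
        extra = intel.get(conn["src_ip"], {"botnet": False, "country": "??"})
        out.append({**conn, **extra})
    return out

def suspected_bots(connections):
    """Flag enriched records whose source is a known botnet member."""
    return [c["src_ip"] for c in connections if c["botnet"]]

conns = [{"src_ip": "203.0.113.7"}, {"src_ip": "198.51.100.9"}]
flagged = suspected_bots(enrich(conns, THREAT_INTEL))
```

The same pattern applies to the malware use case: enriching an antivirus database with newly published signatures is just another lookup-table supplement.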
For most computer owners, experiencing that sudden blue screen on their workstations may be a sign of worse things to come. Programming conflicts, missing or corrupt files, or infected system resources are sure to be the immediate thoughts that come to mind. No system is foolproof. Everything eventually comes to a point where the need to address such situations is evident. Formatting, re-programming and re-installations are alternative courses of action. There may be some good ways to avoid a total wipe-out and clean installation of operating systems and programs, but this would entail the expertise of seasoned technicians as well as a broader understanding of why blue screens occur. Taken into consideration, the need to check on the problem persists; users will not be productive whenever it appears. The best way is to identify the problem through the web or by testing hardware and software functions part by part. Tracing it will lead to feasible solutions for the workstation concerned.
What is the Internet of Things (IoT)? The Internet of Things (IoT) encompasses any and all products that are connected to the internet or to each other. Any product which requires connection to a home, car or office network to deliver its complete set of features falls under this broad term. In fact, cars themselves are now a component of the IoT, as they exchange data with the manufacturer routinely if not continuously. All IoT things collect data during use and often share that information with their manufacturers without the users being aware that it is being collected. In many cases product functions are dependent upon connection to the internet and may be controlled to a great degree by the manufacturer. This concept of making all components of our increasingly complicated lives communicate with each other, with us, and with internal and external software applications, is what IoT is all about. Why Do We Need IoT Security? Manufacturers of every kind of electronic or electrical device are rushing to add features which require connection to the internet. In their rush to market, these companies, many of which have no prior experience with networked devices, are bound to overlook the complications of hardware and software security design and construction in the haste to get the newest, coolest function working at lowest cost. It is nearly a rule that the makers of products that test these new frontiers will apply the same guidelines to their selection of processing hardware as they do for any other product components they purchase. The oldest chips, whose designs were long ago paid off and are now dirt cheap, are attractive building blocks for device designs that need only limited capabilities or capacities. Security Comes Last. Testing of the software that is written for a household appliance or child's toy has only one goal: confirm that it works and will be easy to set up (with lots of default selections, even passwords). Security is an afterthought at best. 
The hardware (chipset) used in most new products is very old and often has multiple known vulnerabilities. The software that is included with IoT devices, and which rarely gets any in-depth security testing, almost always has its own set of security issues. The result is that tens of thousands, and soon hundreds of millions, of appliances, devices and toys being installed into home and business networks are ripe for hacking. And once a vulnerability is discovered in a widely distributed product line, there will be thousands or potentially hundreds of thousands of homes and businesses open to having their IoT devices hijacked, potentially exposing their entire network to view and attack. What are the Different IoT Applications? On the consumer front, IoT is everywhere. In the smart home, Internet-connected objects such as televisions, thermostats, lights, door locks and even refrigerators are becoming common. They offer homeowners control of home services and functions without actually being home. Smart refrigerators can monitor the amount of milk left and automatically reorder from a preferred store. Washers and dryers ring your phone when they are done. Health and fitness-oriented wearable devices that offer biometric measurements such as heart rate, perspiration levels, and even complex measurements like oxygen levels in the bloodstream are some examples of wearable IoT-connected devices. In medicine, surgically implanted devices report back to the doctor regarding health status and in some cases accept instructions from medical staff to take action. And all of this data goes back to a central database owned by the manufacturer, providing a stream of data that is potentially hackable. On the move, transportation systems and now cars utilize a large number of sensors, often working in combination with GPS, to best get from point A to point B in a safe and efficient manner. Beyond that, cars are getting even smarter. 
On-board navigation systems and diagnostic systems alert you (and the manufacturer) about everything from faulty lights to tire pressure. IoT for Businesses - RFID tags within anti-theft tags help retailers in monitoring inventory. - Driverless trucks operate 24×7, increasing production levels. - Critical infrastructure systems such as power generation and delivery systems, water systems, and transportation systems are bringing in more IoT devices to improve the accuracy of their data and control. - Farms use connected sensors to keep a check on crops and herds and to optimise the distribution of pesticides, fertilizer and food. - IoT-connected devices alert shop floor managers about faulty or malfunctioning equipment. - Entire supply chains that span multiple companies and even continents are integrating their production systems to enable better management of machines and people through monitoring and control of their actions and locations. IoT generates and shares loads of data, and as such the individual devices are susceptible to malicious attacks, data misuse and forced data breaches, making a strong case for dynamic testing and code, logic and vulnerability assessment at the product development phase itself. What are the Most Vulnerable IoT Devices? According to Gartner, the number of Internet-connected devices is expected to reach 50 billion by 2020. While IoT is going to improve life for many, the number of security risks that consumers and businesses face will increase exponentially. Stakeholders in the IoT domain face privacy issues, most of the time being unaware of the situation. As such, IoT devices have come under increasing levels of scrutiny in recent months over poor security controls and numerous vulnerabilities. Some of the common problems which have come up due to the spread of IoT include the following: IoT users give their approval for collection and storage of data without having adequate information or technical knowledge. 
Data collected and shared with, or lost to, third parties will eventually produce a detailed picture of our personal lives that users would never consider sharing with any stranger they met on the street. Anonymity has been a constant issue in the world of IoT, where IoT platforms barely give any importance to user anonymity in the process of sharing data. Cyber attacks are likely to become an increasingly physical (rather than simply virtual) threat. Many Internet-connected appliances, such as cameras, television sets, and kitchen appliances, are already able to spy on people in their own homes. Such devices accumulate a lot of personal data, which gets shared with other devices or held in databases by organisations, and is prone to being misused. Computer-controlled automobile devices such as horns, brakes, the engine, the dashboard, and locks are at risk from hackers who may get access to the on-board network and manipulate them at will, for fun, mischief or personal gain. The concept of layered security and redundancy to manage IoT-related risks is still in a nascent stage. For instance, the readings of a smart health device monitoring a patient's condition may be altered; when those compromised readings are passed to another device that prescribes medicine based on the analysis, the tampering will adversely affect the patient's diagnosis or treatment. There is also a high probability of failure to get access to a particular website or database when multiple IoT-based devices try connecting to it, resulting in customer dissatisfaction and a drop in revenue. Static and dynamic testing for IoT-connected devices As IoT-connected devices become an integral part of our daily lives, it is crucial that these devices undergo thorough testing and establish a minimum baseline for security. If any testing is done at all, static testing is the most frequently implemented process. 
But static testing is not intended or designed to find vulnerabilities that exist in the ‘off the shelf’ components, such as processors and memory, into which the application will be installed. Dynamic testing, on the other hand, is capable of exposing both code weaknesses and any underlying defects or vulnerabilities introduced by hardware which may not be visible to static analysis. Dynamic testing also often turns out to be a more pragmatic way of testing IoT devices and plays a pivotal role in finding vulnerabilities that are created when new code is used on old processors. As such, manufacturers who purchase hardware and software from others must do dynamic testing to ensure the items are secure. QA testing for networked hardware and web applications Developers produce applications that, to a greater or lesser degree, exchange information by adhering to a protocol as closely as possible. QA then tests application functionality against that protocol in the perfect world of the testing laboratory. Given the numerous ways programmers can make mistakes, looking for security vulnerabilities in a piece of software should be an integral part of the development process. Strangely, that is not always the case, as testing the security of a particular product can be an expensive proposition, and developers often weigh that expense against the other costs involved in releasing the product to its customers. Because of this, even software developed in an environment stringently cognizant of security risks is most likely released without full testing. Naturally, when the application is released, hackers will bash away at it with every possible corrupted form of the protocol to create an error in the application. By pushing at the edges of the envelope of the protocol, they may find a way to trip up the application and create a buffer overflow, the most frequently leveraged design error. How Do Hackers Use Buffer Overflows? 
How are hackers finding buffer overflow opportunities missed during development and standard pre-release QA? A wide range of tools has been developed by the hacker community to enable the rank and file to find new exploits. These tools, called fuzzers, work by creating and feeding a wide range of unexpected or corrupted inputs to an application, looking for a combination that will break it. The production of these tools has become a small industry of its own. The QA world has attempted to adapt these rough-and-ready hacker tools into its test processes with some success, but also with many headaches: most hacker-developed fuzzers are focused on a single type of code weakness, a single protocol, or even a single application.

In the case of IoT-connected devices, it is important for enterprises to identify traffic patterns and differentiate between legitimate and malicious ones. For instance, an employee may download an apparently genuine app onto a smartphone given to him by his employer, without knowing that the app carries malware. In such cases, the organisation must be prepared with the right set of processes to respond promptly.

Default Credentials: an IoT Vulnerability

Most IoT devices ship with default credentials (known administrator IDs and passwords) when used for the first time. Some devices also come with a built-in web server, which lets admins log in and manage the device remotely. This massive vulnerability makes it easy for hackers to misuse available confidential data. To avoid data leakage, enterprises must develop a strict provisioning process: the device's initial settings are tested and verified for vulnerabilities, any validated flaws are closed, and a "good-to-go" certification is issued by the compliance team before the device is brought to market.
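The mutation-based fuzzing approach described earlier can be sketched in a few lines. The snippet below is an illustrative toy, not a real fuzzer; the `fragile_parser` target and its crash condition are invented for the example.

```python
import random

random.seed(0)  # deterministic run for the example

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Randomly corrupt a few bytes of a known-valid input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 2000):
    """Feed mutated inputs to `target`; collect any that raise."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed_input)
        try:
            target(sample)
        except Exception as exc:
            crashes.append((sample, repr(exc)))
    return crashes

# Toy stand-in for a protocol handler: it fails whenever the
# length byte in the header exceeds the actual payload size.
def fragile_parser(packet: bytes):
    declared_len = packet[0]
    if declared_len > len(packet) - 1:
        raise ValueError("declared length exceeds payload")

found = fuzz(fragile_parser, seed_input=bytes([4]) + b"ABCD")
print(f"{len(found)} crashing inputs found out of 2000 attempts")
```

Production fuzzers add coverage feedback and protocol grammars on top of this blind mutation loop, which is why the single-protocol hacker tools mentioned above tend to be far more effective than a naive sketch like this one.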
Even after all this QA testing, buffer overflow tests, protocol breach tests, and black-box testing should be performed to further reduce the chance of shipping vulnerable devices.

Translation of Requirements Causes Vulnerabilities

The translation of requirements during application development is the first cause of most programming errors. For instance, during the development of a smart-fridge application, a project manager translates the requirements from the desired end result to the programming team, whose members translate them into individual programming assignments. The programmers then translate each assignment into the proper syntax of a programming language written by someone else, which a language interpreter translates into the corresponding machine code. Every one of these translations is a source of potential programming errors during the design stage.

Off-by-one errors, programming-language misuse, and integer overflows are all examples of errors generated by a programmer while translating a concept into an algorithm. For example, to hold n items that are each m bytes long, the programmer may tell the program to allocate n*m bytes. If n*m is larger than the biggest number the integer type can represent, the product wraps around and less memory is allocated than intended, which may lead to a buffer overflow. In another instance, a programmer may assume a variable contains only positive integers; if the integer in question is actually signed, arithmetic operations can overwrite the leftmost bit and make the result a negative number, possibly leading to exploitable behavior.

What is IoT Exploitation?

Not all programming errors are created equal. Some allow attackers to gain something, or an ability they didn't already have. They may be able to deny other users access to the program by crashing it, or access information they shouldn't be able to.
In some cases, they may be able to cause the program to execute any command they tell it to. These errors are vulnerabilities. Other errors, while they may have the same causes, won't give attackers any access they didn't already have. So the first task for a vulnerability researcher is to determine whether a programming error is merely a bug or can lead to exploitation. If a bug can lead to exploitation, either by itself or when used in concert with other bugs, it is indeed a vulnerability.

Buffer overflows are vulnerabilities caused by an application failing to check space availability before copying untrusted data into pre-allocated space in memory, so the copy ends up overwriting the contents of memory outside the buffer. The next time the program looks at that memory, it sees data from the overflow instead of the original data. If the program tries to use values from that area, it will most likely not see what it expects; the consequences can range from a crash of the program to more dangerous outcomes such as denial of service or, worse, execution of malicious code planted by the attacker. A stack-based buffer overflow can allow attackers to execute code on the victim's computer, because it overwrites memory addresses that will be used later, while a stack overflow (exhausting the stack) typically results only in a DoS, as the program tries to write to memory that isn't available.
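The integer-overflow route to a buffer overflow described above (allocating n*m bytes) can be shown numerically. This sketch simulates unsigned 32-bit multiplication in Python by masking the product, the way a C `malloc(n * m)` call silently wraps on a 32-bit platform; the specific attacker-chosen value is illustrative.

```python
MASK32 = 0xFFFFFFFF  # unsigned 32-bit arithmetic wraps modulo 2**32

def alloc_size_32bit(n_items: int, item_size: int) -> int:
    """The byte count a 32-bit `malloc(n * m)` would actually request."""
    return (n_items * item_size) & MASK32

# Benign request: 1000 items of 16 bytes each.
print(alloc_size_32bit(1000, 16))  # 16000

# Attacker-chosen n: 0x40000001 * 16 = 2**34 + 16, which wraps to 16.
n = 0x40000001
print(f"intended:  {n * 16} bytes")                   # 17179869200
print(f"allocated: {alloc_size_32bit(n, 16)} bytes")  # 16

# The program later copies data for n items into that 16-byte
# allocation, overwriting adjacent memory: a classic heap overflow.
```

The arithmetic is exactly the "less memory will be allocated than intended" case from the requirements-translation discussion: the programmer's intent and the machine's modular arithmetic diverge by a factor of four billion.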
Source: https://blog.beyondsecurity.com/security-testing-the-internet-of-things-iot/
We’re excited to recognize Hispanic Heritage Month (September 15 – October 15) in the U.S. and around the world. This is a time to honor the history, culture, contributions, and influence of generations of Hispanics with roots in America, Spain, Mexico, the Caribbean, and Central and South America. The month is marked in countries around the world by celebrating the rich cultural histories and contributions of the Hispanic community.

Hispanic Heritage Month begins September 15 each year in the U.S., as this is the anniversary of independence for many Latin American countries. In the U.S., 57 million people, nearly one-fifth of the population, are of Hispanic or Latino origin, and Hispanics influence all aspects of American life. Established by Congress as an official month of celebration, the occasion is marked by cities and communities all across the country. Celebrations also occur in countries around the world:

- Mexicans celebrate their independence with a day of fiestas on September 16. Flags, flowers, and decorations in red, white, and green, the colors of the Mexican flag, are put up, and performers fill the streets.
- October 12 is celebrated as Hispanic Day or the National Day of Spain. The Spanish sponsored Columbus's historic voyage and mark it with a public holiday. Traditionally, the King of Spain raises the Spanish flag in Madrid, followed by a military parade and an air show.
- In the Bahamas, October 12 is known as Discovery Day; in Costa Rica it is the Day of the Cultures; and in Argentina it is called the Day of the Races. In Mexico, Chile, and Venezuela, October 12 is celebrated as the Day of Indigenous Resistance.
- In Canada, Latin Hispanic Heritage Month is celebrated in October and is an opportunity to remember, celebrate, and educate future generations about the outstanding achievements and contributions of Hispanic people.

The Hispanic community has contributed much to global culture, life, education, science, entertainment, arts, sports, and food.
Click here to learn more about Hispanic Heritage month and culture from the U.S. National Archives. Infoblox proudly celebrates National Hispanic Heritage Month with our employees of Hispanic and Latinx descent.
Source: https://blogs.infoblox.com/company/infoblox-celebrates-hispanic-heritage-month/
For over a decade, Facebook has invaded people's lives and completely changed the way people communicate, network, and share information. It is the world's most popular social networking website, allowing millions of users every minute to post comments, share content (like photographs and video), post interesting information (like news and articles), and chat live with friends and colleagues. These functionalities are gradually expanding: over the last couple of years, for example, Facebook has made it possible to order food and to conduct other types of e-commerce transactions.

Recent Facebook statistics are simply breathtaking: there are over 2 billion monthly active users on Facebook, including 1 billion daily active users. The scale of these numbers becomes evident when compared to other platforms of the social networking ecosystem: Instagram has 700 million monthly active users, while Twitter has more than 330 million. Moreover, Facebook can safely be called the community of the younger generation, as 88% of online adults aged 18-29 use the platform. Combined with the evolution of Facebook's stock value, these stats show that the platform has a very bright future.

Nevertheless, a serious data protection incident was revealed earlier this year, which questioned Facebook's credibility and raised concerns about the impact and implications of its future growth. In particular, as part of the notorious Cambridge Analytica scandal, raw data from many millions of Facebook profiles was leaked to a political consulting firm and (ab)used as part of a political campaign. This data breach is not the first or the only one on the Internet; there have been several similar incidents on other popular platforms like Yahoo, Uber, and Instagram. However, due to the size of the Facebook platform and the volume of data involved, it has attracted great attention from the media and the public.
The Cambridge Analytica case involved the exposure of Facebook data to a researcher who was a member of the team in charge of a political campaign. The exposure was partly based on a quiz-like Facebook app, which collected data from all the people (i.e., Facebook profiles) who took the quiz. The leak happened because the app was also able to collect data from the Facebook profiles of the quiz-takers' friends, which greatly increased the amount of data that was collected and later processed. Speaking in numbers, it is estimated that the quiz was taken by approximately 270,000 users, while the profiles that leaked ended up numbering nearly 87 million. Access to these additional profiles was made possible by a security hole in Facebook's API: while Facebook prohibited any sale or commercial exploitation of data acquired through this API method, Cambridge Analytica went on to exploit the data anyway.

As in most cases of security, privacy, and data protection issues, the ethical analysis is pretty complex. Multiple stakeholders were involved, with different roles and actions that violated laws and ethical rules in various ways. However, despite the unlawful and unethical activity of Cambridge Analytica, the case revealed problems and vulnerabilities on Facebook's side as well. This was clearly acknowledged by Facebook founder and CEO Mark Zuckerberg, who accepted Facebook's responsibility and mentioned that the company had been doing a thorough root-cause analysis to find out what had happened. He also asserted that Facebook was working intensively to close any security and privacy holes, as a means of ensuring that similar incidents won't happen in the future. Despite the company's immediate and positive reaction, the Cambridge Analytica case revealed internal and external weaknesses of the social networking giant.

While Facebook was working on the above-mentioned issues, it faced one more attack on its computer network in September 2018, which resulted in the exposure of the personal information of nearly 50 million users. This is considered the largest direct security breach in the company's history. It was based on attackers exploiting a security vulnerability in Facebook's code to gain access to user accounts and, in some cases, take control of them. This major security incident came on top of the Cambridge Analytica scandal to remind the community that their Facebook data are not secure.

Overall, following these cases, there is an ongoing debate about whether users can trust Facebook to store and manage their personal data. This debate highlights the important role of Facebook developers and apps, which could exploit holes in the security system of the platform and lead to data breaches. It has also given rise to an immense debate about the measures needed to avoid similar episodes, covering both what the platform and what its users can do to protect themselves from future breaches.

There is also discussion about new social networking platforms that could decentralize data storage and processing, so that large volumes of data are not controlled by a single administrative entity. In this direction, some researchers are experimenting with blockchain-based networks that decentralize data ownership and enable end-users to retain control of their personal data in all cases. While such models are promising, they are still at the research stage. Therefore, users must practice caution while using social media platforms until stable and secure security methodologies are implemented.
Source: https://www.itexchangeweb.com/blog/the-data-scandals-on-the-facebook-platform/
ITEM: Dockless e-scooters are touted as green solutions for urban transport, but a new study says that actually they aren’t, once you factor in things like how they’re manufactured, how often they’re replaced, and making sure they’re fully charged. One of the key selling points of e-scooters as a last-mile public transport option is that they’re better for the environment because they replace cars that would otherwise be on the city streets. In other words, for every person riding an e-scooter, that’s one less car polluting the air with CO2, while the e-scooter’s emissions are far less than the car it’s replacing. That’s technically true, but tailpipe emissions aren’t the whole story, according to a research team at North Carolina State University, which argues in a research paper that to properly assess the carbon footprint of an e-scooter scheme, “full consideration of the life cycle impacts is required to properly understand their environmental impacts.” That means looking at things like the raw materials required to manufacture the scooters, the manufacturing process itself, the actual lifecycle of the e-scooter (from the time it hits the street to the time it needs to be replaced) and the vehicles required to drive around the city collecting scooters, taking them to a charging station, and then putting them back in locations where people are likely to use them. Once you include those elements, the research paper says, in 65% of its simulations, e-scooter programs generated higher emissions per passenger mile compared to other alternatives, even when you take into account the CO2 savings from cars that weren’t on the road as a result. The paper says that raw materials and manufacturing account for 50% of e-scooter carbon footprints, while daily collection of scooters to charging stations accounts for 43% (the actual charging process also contributes to the total carbon footprint, but it’s relatively small compared to the above two items). 
One reason raw materials and manufacturing take up such a huge chunk is that e-scooters have a short lifecycle. In theory they can last at least a couple of years when used properly; in reality, they're lucky to last 30 days, as MIT Technology Review has noted:

"Scooters are variously flung into water bodies, tossed from buildings, set on fire, run over, and used in stunts. Cleanup crews in Oakland, California, fished 60 scooters out of Lake Merritt in a single month last year, Slate reported. An analysis of open data from Bird's inaugural fleet in Louisville, Kentucky, conducted by Quartz last year, found that the average scooter lasted just 28.8 days. Likewise, Bird itself acknowledged in investor documents at an earlier point that its vehicles last only about a month or two, The Information reported."

The good news, the report says, is that if e-scooter companies address these specific problems, the eco-friendliness of scooters can be increased dramatically. For example, they could use electric vehicles to collect scooters and centralize management of the collection process to reduce how many kilometres they need to drive. It would also help if scooter manufacturers built them with more recyclable materials. Meanwhile, municipal governments could help with policies that allow e-scooters to remain in public areas overnight (especially if they don't require recharging) and by enacting or enforcing anti-vandalism policies to reduce the abuse that results in short lifetimes.

But until then, the researchers say, "Claims of environmental benefits from their use should be met with scepticism."

Meanwhile, there's always pogo sticks, I guess.
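The study's core point, that amortized manufacturing emissions dominate when scooters die young, is easy to see with a back-of-the-envelope model. All numbers below are illustrative assumptions for the sketch, not figures from the North Carolina State paper.

```python
def g_co2_per_mile(manufacturing_g: float, lifetime_days: int,
                   miles_per_day: float, collection_g_per_mile: float,
                   charging_g_per_mile: float) -> float:
    """Life-cycle emissions per passenger-mile: manufacturing is
    amortized over the scooter's whole service life, while collection
    and charging accrue per mile ridden."""
    lifetime_miles = lifetime_days * miles_per_day
    return (manufacturing_g / lifetime_miles
            + collection_g_per_mile + charging_g_per_mile)

# Same hypothetical scooter and daily ridership; only the service
# life and the collection fleet differ between the two scenarios.
short_lived = g_co2_per_mile(100_000, 30, 5, 90, 10)   # scrapped in a month
durable = g_co2_per_mile(100_000, 2 * 365, 5, 30, 10)  # two years, EV pickup

print(f"30-day scooter: {short_lived:.0f} g CO2/mile")
print(f"2-year scooter: {durable:.0f} g CO2/mile")
```

Under these assumed inputs, stretching the lifetime from a month to two years and electrifying collection cuts per-mile emissions by an order of magnitude, which is exactly why the researchers single out durability and collection logistics as the levers that matter.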
Source: https://disruptive.asia/e-scooters-not-so-eco-friendly/
By Steve Grobman, Chief Technology Officer, McAfee Corp.

Until recently, quantum computing has been largely theoretical. Recent advances, however, show that it has the potential to address an array of seemingly intractable challenges: global warming, protracted drug and vaccine development, world hunger resulting from inefficient food distribution, difficulty forecasting bio-risks, and national defense preparedness, among others. Quantum computing's ability to crunch previously unimaginable computational challenges would help find answers to these and other problems, partly by optimizing machine learning and artificial intelligence on a massive scale.

But these rewards also come with a certain amount of risk. Quantum computing could also enable decryption of data that is currently protected by traditional computing encryption algorithms. This would undermine our entire digital universe as we know it today. Adversarial nation-states such as China and Russia are in a technology innovation race with the U.S. and its allies, and they are not likely to share the extent to which they have successfully developed quantum computing or related decryption capabilities. There is precedent for this: Bletchley Park's success in cracking Germany's Enigma code during World War II, an achievement that was kept secret until the 1970s. The Allied code breakers understood that concealing their achievement would ensure that the Axis Powers would continue to use the Enigma coding machines to their strategic disadvantage.

In order to mitigate such a risk, we cannot think of quantum in terms of "eventually" or "tomorrow." We must address the quantum decryption risk today, before it manifests itself in catastrophic intelligence security incidents tomorrow.

How is quantum computing different?

Traditional computing, based on transistors and electricity, uses software written in binary code.
Quantum computing harnesses quantum mechanics using two properties of subatomic particles, superposition and entanglement, to process data. Superposition allows data to exist in multiple states and places simultaneously; entanglement allows computation over extremely large sets of possibilities. Together, these properties will allow quantum computers to perform with exponentially increased efficiency and speed.

The vulnerability of traditional encryption

Encryption is everywhere. In our digital infrastructure, sophisticated security algorithms encrypt all manner of data: stored and transmitted data; data residing in individual applications; and public sector, business, and personal information. Encryption is embedded in the web infrastructure, digital certificate ecosystems, and code-signing ecosystems.

Traditional encryption does two things. It secures data so that only parties with appropriate rights can access it. It also ensures integrity, for instance when it confirms that the bank website with which you are transacting and sharing personal data is not a fraudulent one.

The processing power of quantum computing will dwarf the capabilities of even today's supercomputers for certain workloads, making it possible to break current secure encryption methods and compromise all encrypted data transferred over the internet. This would enable adversaries to impersonate trusted entities, access data, and decrypt all manner of information, including corporate intellectual property, government secrets with national security implications, and personal information such as biometric data and health and financial records.

In fact, that incursion is undoubtedly already underway. Bad actors are likely siphoning sensitive encrypted data off the Internet today and simply holding onto it until, maybe 5 or 10 or 15 years from now, advances in quantum computing make unlocking it possible.

Gauging vulnerability: probability vs. impact

Humans are not great at recognizing low-frequency, high-impact risks. One need only think of the world's initial assessment of the COVID-19 pandemic. Quantum computing falls into this same category and, much like the current pandemic, the impact is potentially staggering.

It needn't be like this. If I told you there was a .0001 percent chance your car would blow up the next time you started it, you probably wouldn't even step foot inside your car, much less start it. In this case, even though the frequency of such an occurrence would be low, the impact, getting blown to smithereens, would obviously be high. And that is the lens through which we should view quantum computing's current threat to our encrypted data. Corporate leaders and security professionals need to assess the threat that a quantum-enabled cyberattack poses to their organizations and their customers. If the potential harm is great, they need to prepare for the threat, even if the probability, at least for some time into the future, is low.

Evaluating data: sensitivity vs. value over time

In preparing to defend against future quantum cyberattacks, it is necessary to assess the data currently being protected. This means determining both how sensitive it is and how long it must be protected for. Some data is very sensitive but valuable for only a short time, such as pre-release earnings data for a publicly traded company. Between the end of the quarter and the reporting of financial results several weeks later, this information is highly sensitive. Other data is not as sensitive but is valuable for a long time. Think of Social Security numbers, which are used for the lifetime of the holder. Many have already been compromised, though, meaning they are no longer as sensitive as they once were. Still other data is highly sensitive and has a long value horizon.
This could include business-critical intellectual property or trade secrets, which, if stolen during the extended period that they provide a competitive advantage, may allow a competitor to create a variant that puts you out of business. Nation-states like the U.S. also have long-term sensitive data. Even now, some data from U.S. President Kennedy's 1963 assassination continues to be classified for national security reasons.

What can we do?

Incentives drive behaviors, and use cases generally drive investments in IT. We need to ask ourselves: which market forces will drive positive quantum computing use cases, and which will drive cyber-criminality? While we may seek to use quantum computing for good, one of the biggest risks of such a transformational technology is that our cyber adversaries may be more motivated, or motivated earlier, to use the technology for nefarious purposes than we realize. For this reason, it is important to raise awareness, and concern, about the quantum threat we face. But once we acknowledge the threat, what can we do to counter it?

Determine priorities. We need to create quantum action plans that take into account the importance and value of different types of data over time, and then set priorities for data protection. Be sure not to dismiss low-probability risks with potentially catastrophic outcomes. Generally, invest in security initiatives that address high-probability/low-impact and low-probability/high-impact risks.

Move to quantum-resistant algorithms. We can develop quantum-resistant algorithms before powerful quantum computing capabilities become viable. The National Institute of Standards and Technology (NIST) is currently evaluating candidate algorithms to replace our current public-key capabilities.

Map your encryption systems. Organizations and government agencies in particular need to assess how they currently protect data from theft and prevent decryption, then retool systems they find inadequate.
Some environments are historically slow to adopt next-generation security capabilities. For example, there are government agencies still running 1950s-era COBOL-based apps on some systems. Developing a comprehensive understanding of where traditional encryption is used and what potential risks exist in each domain should start today.

Develop incentives. Bad actors may have stronger incentives to use quantum computing for malicious intent than those of us who want to use quantum computing for good. Governments and the technology industry need to partner in influencing market forces that drive faster adoption of quantum-resistant algorithms, and look for ways to use quantum computing for good.

Such moves will help companies, and nations, prepare for the security risks posed by quantum computing, along with all the benefits it will bring. The first step, though, is recognizing that it is never too early to prepare for future risk, especially when the stakes are as high as they are with quantum computing and the threat it poses to encryption.

Quantum computing could enable decryption of data currently protected by traditional encryption algorithms, putting all kinds of data at risk, from corporate intellectual property to personal information such as biometric data. We need to defuse this risk long before it manifests itself in security incidents. In assessing your organization's risk, first evaluate the sensitivity of each type of data over time, and second gauge its vulnerability: if the potential for harm is great, you need to prepare for the threat even if the current probability is low. There are a number of steps organizations can begin taking now, including mapping your current encryption systems and moving to quantum-resistant algorithms as they are developed.
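The "harvest now, decrypt later" assessment the article describes, comparing how long data must stay confidential against how far away quantum decryption might be, can be sketched as a simple triage. The asset list and the ten-year horizon below are assumptions for illustration, not figures from the article.

```python
from dataclasses import dataclass

YEARS_TO_QUANTUM = 10  # assumed horizon for a cryptographically relevant QC

@dataclass
class DataAsset:
    name: str
    sensitivity_years: int  # how long exposure would still cause harm
    impact: int             # 1 (minor) .. 5 (catastrophic) if exposed

def harvest_now_decrypt_later_risk(asset: DataAsset,
                                   horizon: int = YEARS_TO_QUANTUM) -> bool:
    """Ciphertext siphoned off the wire today is at risk if the data
    is still sensitive when quantum decryption becomes feasible."""
    return asset.sensitivity_years > horizon

assets = [
    DataAsset("pre-release earnings", sensitivity_years=1, impact=4),
    DataAsset("trade secrets", sensitivity_years=25, impact=5),
    DataAsset("biometric records", sensitivity_years=50, impact=5),
]

to_migrate = sorted(
    (a for a in assets if harvest_now_decrypt_later_risk(a)),
    key=lambda a: a.impact, reverse=True)
print("migrate to quantum-resistant encryption first:",
      [a.name for a in to_migrate])
```

Earnings data ages out long before the assumed quantum horizon, so it drops out of the migration queue; the long-lived, high-impact assets are exactly the ones the article says must be re-encrypted first.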
Source: https://straighttalk.hcltech.com/articles/facing-quantum-computing%E2%80%99s-future-risks-%E2%80%93-now
It's a dilemma that is expected to gain continual scrutiny over the next few years: data centers will be expected to find more energy-efficient ways of operating, cutting down their consumption, while at the same time the demand for capacity grows at an accelerating rate.

There's another underlying problem. According to the Natural Resources Defense Council (NRDC), many data centers are not just using a massive amount of energy; they're consuming more than they actually need. The report revealed that if America's 3 million data centers adopted best practices capturing even half of the savings potential, they could cut electricity consumption by up to 40 percent, a savings of $3.8 billion, enough to provide electricity to all the households in the state of Michigan.

To address a data center's PUE (power usage effectiveness), steps must be taken to operate at a more optimal level, said Kelly Quinn, a research manager at IDC. Quinn noted that most enterprises are operating "very far from optimal efficiency," and as a result are spending more than required on power. "It's always been a problem, but people are becoming more conscious of it now," Quinn said in a Forbes.com blog.

In the same report, NRDC pointed to the use of colocation and hybrid IT as the key to reducing data center energy use. The formula? Colocation for renting the services of a professional data center, and hybrid IT for managing some IT resources internally along with cloud-based services. "A colocation provider works so hard to keep its PUE ratios down," Quinn said in the Forbes blog. "Consequently, they are able to provide … better or lower power costs."

According to the Uptime Institute, the typical data center has an average PUE of 2.5, although many could reach a more optimal level of 1.6 by using more efficient equipment and best practices. Data centers can also reduce energy use by checking the cooling systems designed to keep them running at optimal levels.
This can include updating equipment, checking, cleaning and replacing filters, and improving airflow. Lifeline Data Centers, a colocation provider with facilities in Indiana, is committed to operating with energy efficiency as a priority. Schedule a tour to find out how we can help you reduce your footprint.
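The gap between the Uptime Institute's 2.5 average PUE and the achievable 1.6 translates directly into wasted overhead energy. The sketch below quantifies it for a hypothetical 1 MW IT load at an assumed $0.10/kWh; both of those inputs are illustrative, not figures from the article.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by the
    energy actually delivered to IT equipment (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

IT_KWH_PER_YEAR = 1_000 * 24 * 365  # 1 MW IT load -> 8,760,000 kWh/year
PRICE_PER_KWH = 0.10                # assumed electricity price, $/kWh

for p in (2.5, 1.6):  # typical average vs. achievable with best practices
    overhead_kwh = IT_KWH_PER_YEAR * (p - 1)  # cooling, distribution, lights
    print(f"PUE {p}: ${overhead_kwh * PRICE_PER_KWH:,.0f}/year in overhead")
```

Under these assumptions, moving from PUE 2.5 to 1.6 trims the non-IT overhead from roughly $1.3 million to about $0.5 million a year for the same IT workload, which is the economic argument behind colocation providers' focus on keeping their ratios down.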
Source: https://lifelinedatacenters.com/colocation/data-center-energy-use/
The term “Business Intelligence” (BI) was originally coined in 1865 by Richard Millar Devens in the “Cyclopaedia of Commercial and Business Anecdotes”, where Devens used it to describe how the banker Sir Henry Furnese gained profit by receiving and acting upon information about his environment. Today, BI refers to the tools and systems that allow a company to gather, store, access and analyze corporate data to aid its decision-making and strategic planning processes. You can read some of the well-known definitions of Business Intelligence below.

Popular definitions of Business Intelligence

1. “The ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal.” – Hans Peter Luhn, IBM, 1958.
2. “Business Intelligence is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision-making.” – Forrester Research.
3. “Business intelligence is defined as getting the right information to the right people at the right time. The term encompasses all the capabilities required to turn data into intelligence that everyone in an organization can trust and use for more effective decision making.” – Bogza, R.M., Academy of Economic Studies of Bucharest, 2008.
4. “Business intelligence (BI) is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance.” – Gartner, Inc.
5. “A set of concepts, methods, and processes to improve business decisions using information from multiple sources and applying experience and assumptions to develop an accurate understanding of business dynamics.” – Brackett, 1999.
6. “Business intelligence (BI) is a concept which refers to a managerial philosophy and a tool that is used in order to help organisations to manage and refine information and to make more effective business decisions.” – Ghoshal and Kim, 1986.
7. “The term ‘Business Intelligence’ (BI) denotes integrated approaches to support management in a company and is usually associated with Data Warehouse Systems (DWHs), which provide an integrated, subject-orientated, and non-volatile repository for diverse analytic and reporting applications.” – Heiner Lasi.
8. “Getting the right information to the right people at the right time.” – Bill Cabiró.
9. “The ability of an enterprise to act effectively through the exploitation of its human and information resources.” – Larry English.
10. “BI consists of two diametrically opposed activities: top-down, metrics-driven reporting and dashboarding, where you know in advance what things you want to monitor, and bottom-up, ad hoc analysis to answer unanticipated questions.” – Wayne Eckerson.
11. “Business intelligence (BI) is the use of computing technologies for the identification, discovery and analysis of business data – like sales revenue, products, costs and incomes.” – Techopedia.
12. “The term Business Intelligence (BI) refers to technologies, applications and practices for the collection, integration, analysis, and presentation of business information.” – OLAP.com.
13. “Business Intelligence is a system that turns data into information and then into knowledge, thereby adding substantial value to a firm’s decision-making processes.” – Loshin, 2003.
14. “Business Intelligence (BI) is a term that defines a set of informatics applications with economical background, used in companies to analyze data in order to transform them into information that will be the base of decisions taken by managers.” – Airinei, D. & Berta, 2012.
15. “The process of gathering and analyzing internal and external business information.” – Okkonen et al., 2002.
16. “BI is an architecture and a collection of integrated operational as well as decision-support applications and databases that provide the business community easy access to business data.” – Moss & Atre, 2003.
17. “Information to better understand business and to make more informed real-time business decisions.” – Papadopoulos & Kanellis, 2010.
18. “An organized and systematic process by which organizations acquire, analyse, and disseminate information from both internal and external information sources significant for their business activities and for decision-making.” – Lonnqvist & Pirttimaki, 2006.
19. “BI means leveraging information assets within key business processes to achieve improved business performance.” – Williams & Williams, 2007.
20. “Business Intelligence (BI) refers to various solutions for enhancing the overall business performance.” – Wang & Wang, 2008.
How to reduce and control your digital footprint

A digital footprint refers to the data we leave behind after going online. On the surface, it covers all the details we submit or disclose deliberately. Your social media account is an accurate representation, containing personal information and life events you chose to share. However, your digital footprint holds more than the data you reveal consciously.

What is a digital footprint?

A digital footprint is a general term for all the traces you cast off when browsing. Essentially, it refers to data about us: our actions, preferences, interests, or browsing routes. Typically, we tend to focus exclusively on the details we can easily see and recall sharing. For instance, a comment on YouTube is a visible example that you can find and control relatively quickly. However, a digital footprint also consists of information you hand over without even realizing it. Such information is a byproduct of each action we take in the digital world, and it ends up as a component of our digital footprint without our deliberate consent or knowledge. Every digital step you take leaves an invisible trail leading back to you.

Such continuous data harvesting shapes the internet as we know it. Over time, the internet becomes much narrower, prioritizing content according to our previous browsing habits and preferences. In addition to tailoring the internet for each visitor, our digital footprint can negatively impact our privacy and even trample on our future.

Thus, we can classify digital footprints into two categories. The first, referred to as the active (intentional) footprint, contains all the information you share yourself, such as:
- Comments in forums or other sites.
- Email messages.
- Blog posts.
- Social media posts and status updates.
- Photos or videos.

The second category refers to passive (unintentional) data you share online without realizing it. It includes:
- Browsing history.
- Information accumulated via cookies.
- Your IP address.
- Geo-information created through the use of geo-tracking services.

Why should you care about your digital footprint?

Your digital footprint essentially determines who you are online. Each action you take paints your portrait, depicting your identity, preferences, interests, and routines. Considering that all this data is available to someone, it is worthy of concern.

For instance, your social media account is rich in data. Depending on your privacy settings, there is a chance that anyone can access it. In a recent Washington Post article, specialists emphasized that your digital footprint can be highly influential. The report explained that many colleges now turn to social media during their admissions processes. Thus, the information available within your profile can make or break your chances of getting into college. Employers might also run extensive background checks on new candidates, and it is possible that they keep tabs on what their teams post during their time at the company. Although your Facebook posts might seem insignificant, they could potentially ruin your chances, be it in education or career-wise.

Besides the details you release deliberately, your passive digital footprint reaches sources you never expected. Companies frequently monetize it to customize marketing offers and show tailored ads. As soon as you visit any website, it can learn a lot about you: it can determine your location, measure the time you spend there, and implant cookies to track your actions once you leave. Since all of this happens under the hood, such data is much more difficult to control.

Other risks of revealing personal data online
- Identity theft. Releasing your personal information online dramatically increases your chances of identity theft. With enough data, fraudsters or criminals can attempt to impersonate you, usually for financial gain.
- Physical danger. Disclosing your location, especially in real time, is incredibly dangerous. Burglars could determine the times when no one is at your house. Revealing too much via geo-tags could also help stalkers, abusers, or other creeps keep tabs on you. Additionally, specialists now strongly discourage people from posting pictures of their home keys, since it is possible to recreate keys from a single image.
- Phishing and other fraud. Criminals no longer need to rely on generic phishing campaigns. Their attempts to trick you can sound highly believable, all thanks to the information they have retrieved about you. Thus, be wary of social engineering and how fraudsters can use it in their nefarious plans.

Tips for controlling and reducing your digital footprint

It is unlikely that you will abandon your digital presence altogether. However, there are ways to be more aware of your digital footprint and limit its influence.
- Control the information you share. While you might like being open online, always reconsider the posts or details you disclose. There are many dangers related to oversharing, so try to keep your digital footprint to a minimum. Be sure to set boundaries on what you post and share. After all, that data might be available forever and, in some cases, might even outlive you.
- Manage your privacy settings. Consider restricting access to your social media accounts. Instead of leaving them public, set them to private. This change means that only your inner circle will be able to see your profile fully. While you are at it, do not accept friend requests from bizarre accounts or people you do not know.
- Have a secondary email address. You might already have an email account that you use for everything. However, it is best to use multiple email addresses. One should cover all confidential affairs, like your bank account. Your secondary email should deal with everything else, such as registering at online stores and receiving promotional newsletters.
- Close old accounts. You might have some accounts that have lost their usefulness. Try to find all such accounts and close them. If this is not an option, at least take down all the personal information available within them.
- Be respectful online. Your digital footprint is not always negative; in some cases, it might showcase your previous accomplishments. However, you may have written some mean comments at one point or another. The general rule is to be polite and avoid spreading gossip or slander. Trolling could come back to haunt you and paint you in a rather negative light.
- Opt out of permitting websites to sell your data. You can prevent companies from sharing your digital footprint with advertisers and other third parties. Try to find such settings in the apps or websites you use. Thus, you will protect your information from being sold or shared with unknown entities.
- Hide your IP address and encrypt web traffic. You can limit the passive digital footprint you unwittingly disclose. Your IP address is one of the pieces of information that you share without even realizing it. A VPN (Virtual Private Network) can help you mask it: once you connect to a remote server, you will appear at another spot on the map instead of your actual location. Additionally, a VPN encrypts information about your browsing activities, meaning that companies won’t be able to keep tabs on you as easily.
The Russia-Ukraine crisis has prompted concerns about cyber attacks, especially against critical infrastructure companies. For years, Ukraine has been under near-constant cyber threat from Russia. In 2015 and 2016, Ukraine’s power grid was attacked by cybercriminals, and it remains vulnerable today. Malware called NotPetya was also unleashed in 2017 on the Ukrainian financial, energy, and government sectors, but the impact quickly spread around the world.

The business implications of conflict in Ukraine will be felt far beyond the region’s borders. Lindy Cameron, CEO of the National Cyber Security Centre (NCSC), says that “cyber attacks do not respect geographic boundaries,” warning that these incidents have international consequences — intentional or not. The following are simple steps to increase your cyber security posture.

1. Apply Patches and Security Updates

Patching is the process of issuing regular security updates to address vulnerabilities in a company’s operating systems and software. Many cybercriminals look to exploit unpatched software as an easy backdoor into your network. It’s important to stay alert to software updates for your smartphone, tablet, laptop, or any other device with known security vulnerabilities, and do not hesitate to click yes.

2. Use Strong Passwords

Cybercriminals can breach a network by simply guessing usernames and passwords, particularly if the organization uses cloud services such as Microsoft Office 365 or Google Workspace. You should require employees to set unique passwords, using a combination of numbers, special characters, and upper- and lowercase letters for greater security. You may also want to establish a policy that requires users to change passwords regularly — at the very least, quarterly.

3. Use Multi-Factor Authentication

Multi-factor authentication (MFA) adds an extra layer of security to your systems, networks, and data and should be applied to all users.
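To make the MFA recommendation concrete: the one-time codes produced by most authenticator apps follow the time-based one-time password (TOTP) algorithm standardized in RFC 6238. The sketch below implements it with only the Python standard library; it is illustrative, and real deployments should rely on a vetted library. The secret shown is the RFC’s published test key, not a value to reuse.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = unix_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # counter as big-endian 64-bit int
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at Unix time 59 the 6-digit SHA-1 code is 287082
print(totp(b"12345678901234567890", 59))  # → 287082
```

In practice the shared secret lives only in the user’s authenticator app or hardware token, so a stolen password alone is not enough to produce a valid code for the current 30-second window.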
MFA requires a user to provide at least two pieces of identification to gain access to a resource. The benefit of multi-factor authentication is that even if a username and password are compromised, it is still very difficult for an attacker to access the account.

4. Teach Phishing Awareness

Phishing scams are one of the most common types of cyber attacks. You should be training your staff on how to identify and report phishing emails for further investigation in order to prevent these attacks. More sophisticated phishing scams appear all the time, so it’s crucial to keep a lookout for news regarding the latest techniques and share this information with your users — the people who work with you or for you.

5. Use Antivirus Software and Ensure That It Works

Installing the most up-to-date antivirus software on your devices can significantly reduce the risk of a cyber attack. AV software and firewalls can help detect suspicious links, malware, and other threats distributed from malicious IP addresses. It’s important to install antivirus software on all of your devices and make someone responsible for checking that the software is active and working properly.

6. Know Your Network

Information security (infosec) teams should actively identify all devices and users on the network and detect potentially suspicious activity. If a user account is accessing files it does not need for its job, or moving them to irrelevant parts of the network, that may be an indication that the account has been compromised by cybercriminals attempting to plant malware. Going forward, log your network activity for at least a month so you can look back and see where a potential breach may have occurred.

7. Back up Your Network and Test Backups Regularly

Backups are a vital component of cyber resilience. If a cyber attack happens, your data could be compromised or deleted for good.
Given the amount of data your company stores on laptops and mobile devices, you might not be able to recover following a breach or attack. Use a backup program that allows you to schedule backups at regular intervals and automate the process.

8. Limit Third Party Access to Your Network and Supply Chains

Managing IT networks sometimes requires organizations to bring in outside help, providing third-party users with high-level access. Implementing layered security can help minimize vulnerabilities and prevent users from accessing sensitive information. In addition, you should be mindful of the security practices of your suppliers: if one of them is breached, their network could be used as a gateway to the larger target.

9. Have An Incident Response Plan

Planning ahead and running exercises can greatly reduce the impact of a successful cyber attack. If the network is down, how will you communicate a response? You should develop an incident response plan (IRP), involving all departments, that outlines what steps to take and who is responsible for collecting, analyzing, and acting upon information gathered from the incident.

10. Brief the Wider Organization

It is the job of the information security team to know about cyber attacks and how to deal with them, but it is unlikely that employees outside your security team are familiar with cyber security best practices. Employees from the top down should be aware of the importance of cyber security and clearly understand how to report suspected security-related events. For a business to be secure, everyone must play a part.

Expert Cyber Security Consultants in NJ & FL

At Mindcore, we help enhance your organization’s cyber security to minimize business disruptions so you can focus on your most important goals and objectives. Our cyber security consultants in New Jersey and Florida will create a personalized defense strategy and assess, monitor, and manage your critical IT infrastructure 24/7.
To schedule a consultation, contact us today.

Learn More About Matt

Matt Rosenthal is a technology and business strategist as well as the President of Mindcore, the leading IT solutions provider in New Jersey. Mindcore offers a broad portfolio of IT services and solutions tailored to help businesses take back control of their technology, streamline their business and outperform their competition. Follow Matt on social media.
Over the past decade, we've seen a vast array of different types of devices and systems connected to the Internet. While this feels like technological progress, there's a dark side to bringing the Internet of Things (IoT) online—these systems come under attack just like other connected infrastructure. Unfortunately, many SCADA environments today include connected systems with relatively weak security capabilities and configurations, leading to compromise and breach scenarios that are not only dangerous, but could be deadly.

In February 2021, a Florida water plant was compromised remotely, and the attacker attempted to modify the water's chemical makeup. Researchers at CyberNews found 11 breached credentials linked to the water plant from 2017, as well as 13 sets of credentials right before the attack. The attacker leveraged a consumer-grade remote access tool to gain access to the plant’s SCADA controls and subsequently changed the level of sodium hydroxide (commonly known as lye) in the water from 100 parts per million to 11,100 parts per million. Luckily, in this case, the modification was detected immediately by one of the plant operators, who reverted the changes before the breach had any impact on the system or the health of the community. I think we all know that this could have gone terribly, though, and we’ve been talking about these kinds of attacks in the security community for years.

As if that wasn’t bad enough, on March 9th Bloomberg reported a massive security breach of the Verkada network that exposed the live feeds of 150,000 security cameras used in jails, hospitals, and many high-profile companies. The threat actors claimed to have had complete access to an archive of full video for all Verkada customers, which poses major data privacy, security, and even political implications. This breach really illustrated the root of the problem – excessive privileges in IoT/OT platforms and products.
The Verkada breach came about as a direct result of a compromised “super admin” account that was remotely accessible. This last point is important – much has been said about privilege management and admin accounts that should be more carefully controlled, but the remote access to the services and platforms USING these accounts is often less publicized. In the Florida water treatment plant breach, the attacker gained remote access using admin credentials. The same thing happened in the Verkada compromise.

So, what have we been missing? How do we overcome these types of compromise scenarios? First, it’s critical to realize that remote access has often been provisioned without careful consideration of privileged access scenarios. Compounding this issue is the unique challenge facing OT/IoT environments, with services and platforms that may be somewhat unforgiving in their mode of access. The good news? We have a lot of lessons learned, and much better technology today that can help to resolve these IoT security challenges.

To learn from more of these real-world breaches, check out my on-demand webinar: Poisoned Privileges: The Wake-Up Call to Harden Remote Access & Password Security for SCADA & IoT Systems. The webinar also explores the processes you can implement to mitigate privileged remote access risk for all types of environments, including IoT and OT.

Of course, since the date of my live webinar (April 13th), the attacks on critical infrastructure have not stopped. In May, we saw the devastating cyberattack by DarkSide on Colonial Pipeline, taking much of the U.S. East Coast’s fuel supply offline, causing panic at the pump, and disrupting tens of millions of lives for weeks. To learn more about DarkSide attacks and how to formulate a strong cyber defense posture, check out this BeyondTrust blog: Will DarkSide Pipeline Ransomware Attack Fuel Cybersecurity Upgrades for Critical Infrastructure?
Dave Shackleford, Cybersecurity Expert and Founder of Voodoo Security

Dave Shackleford is the owner and principal consultant of Voodoo Security and a SANS analyst, senior instructor, and course author. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering, and is a VMware vExpert with extensive experience designing and configuring secure virtualized infrastructures. He has previously worked as CSO for Configuresoft, CTO for the Center for Internet Security, and as a security architect, analyst, and manager for several Fortune 500 companies.
Elementary school students are conscientious citizens of the earth. They recycle everything. Across the country, aluminum cans, newspapers, and milk cartons are diligently sorted into different colored bins for proper recycling. Try throwing an empty soda bottle in the trash in front of an elementary school student—they simply will not allow it.

If they can take the time to keep us honest about recycling harmless products like paper and plastic, why are we having such a hard time figuring out what to do with truly harmful materials? In particular, the toxic materials contained in landfilled computers, monitors, and other electronic equipment pose health threats that range from birth defects to cancer. Because of consistent failure to adhere to proper disposal procedures, the United States is facing a crisis created by record amounts of electronic waste.

e-Waste is now the fastest-growing waste category in the world. The Environmental Protection Agency (EPA) estimates that American consumers and businesses will dispose of two million tons of used electronics this year alone, including 133,000 PCs daily. Even though the EPA requires entities that generate more than 220 pounds of monitor waste per month (approximately 10 monitors) to handle the equipment as hazardous waste, and bans dumping it in landfills, less than 10% of all e-waste is being recycled.

Where is it all going? Unfortunately, up to 80% of the tech trash generated in the U.S. is shipped to dumping grounds overseas, a practice which is currently legal. Many so-called recycling companies simply load electronics onto shipping containers and send them to less-developed countries like China, Nigeria and Vietnam. While many argue this used computer equipment will help spur development in these countries, the reality is that the majority of the equipment is truly useless. It is simply cheaper to dump it overseas than to properly recycle it in the U.S.
Consequently, serious health issues stemming from third-world tech dumps are rampant and on the rise. While an increasing number of corporate executives are insisting on contracting with disposal partners who will act responsibly by recycling in the U.S., many more struggle to find a cost-effective solution that complies with the law. Perhaps the biggest problem facing executives today is deciphering the disposal and recycling laws, which differ greatly at the federal, state and local levels.

The Feds vs. The States

On a federal level, the EPA considers most computer equipment hazardous waste. The EPA can levy fines on companies that fail to meet certain requirements for accumulating, handling, shipping and disposing of computers. Criminal sanctions can even be brought against executives who knowingly fail to plan for proper disposal. But while the EPA tells us what not to do, it offers no formal plan showing how to handle computer waste.

It is on the state level that a new battle is being fought, as states have the ability to enforce more stringent regulations than the EPA. The central issue shifts from how to recycle these materials to who should pay for it. Four states—California, Maine, Maryland and Washington—already have laws in place. Nineteen others are hotly debating the topic within their state legislatures. The argument usually falls into one of two camps: require consumers to subsidize recycling by paying a fee at the point of sale, or place the burden of cost on the original manufacturer.

Whether the burden falls on the consumer or the manufacturer, these states are sending a clear message: an infrastructure must be built to give businesses and individuals alike a defined method for disposing of their used computers. However, keeping up with different laws can be confusing for companies with interstate operations.
Many are hoping for a federal electronics recycling program, but until that happens, there are some basic steps companies can take to help protect themselves. First and foremost, companies must appoint a disposal coordinator: one person with ultimate responsibility for taking reasonable steps toward proper computer disposal. This individual would be responsible for drafting a written plan regarding proper disposal procedures that everyone, including state or EPA regulators, could refer to. In addition to learning local laws everywhere his or her company does business, this coordinator should conduct due diligence on any and all third parties used to dispose of equipment, asking plenty of questions along the way.

By asking these questions, companies can find a disposal partner that not only follows state and EPA guidelines, but is also committed to the proper handling of electronic scrap.

Our children are doing such an outstanding job of sorting and recycling. Let’s hope that we can decipher the myriad recycling legislation, because after all, we grown-ups are supposed to be the ones leading by example.

Jonathan Zigman is a senior vice president for St. Louis-based CSI Leasing, an independent IT leasing company that also sanitizes, resells and recycles approximately 10,000 pieces of equipment each month.
Published August 12, 2021

Rural fiber distribution solutions — PON technologies

In a previous blog post, we looked at asymmetric power splitting (connectorized unbalanced splitters or tapered splitters) as one of the PON technologies that service providers use to deliver cost-effective fiber-to-the-home broadband access to underserved rural areas. In this post, we’ll look at a similar point-to-multipoint technology used to achieve the same goal: spliced optical distributed taps.

One key to the cost savings associated with distributed taps, as claimed by their suppliers, relates to the reduced amount of fiber cabling (distribution and drop cables) required as compared with centralized symmetric power splitter network configurations.

Passive distributed taps, as shown below, implement a linear or serial daisy-chain architecture that will be familiar to engineers and technicians with experience in hybrid fiber-coax (HFC) topologies. While coaxial cable taps work in the electrical domain, and the distributed taps under discussion here work in the optical domain, the basic signal-tapping theory of operation is similar. This functional similarity translates into lower training costs for network operators whose technicians are familiar with electrical coaxial components and technology.

Distributed taps comprise optical couplers, which divert a portion of the input light power, and optical splitters, which equally split this diverted light into the drop outputs. The example below shows two cascaded 1:2 splitters used to create four drop outputs. Note that the first tap extracts 10% of the light power, and the second tap extracts 20% of the remaining light power in this simplified example.

Another advantage of distributed taps is reduced splicing overhead, with only one fiber cut and two splices required to add each tap.
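The percentages in the 10%/20% example above are easy to verify in code. The short sketch below is purely illustrative (not a tool from this post); it computes the fraction of launched light reaching each drop port when the first tap diverts 10% and the second diverts 20% of what remains, with each tap splitting its diverted light equally across four drops:

```python
def tap_cascade(tap_fractions, drops_per_tap=4):
    """Return (fraction of launch power per drop port at each tap, fraction left in trunk)."""
    remaining = 1.0
    per_drop = []
    for fraction in tap_fractions:
        diverted = remaining * fraction            # light extracted by this tap
        per_drop.append(diverted / drops_per_tap)  # shared equally by the drop ports
        remaining -= diverted                      # the rest continues downstream
    return per_drop, remaining

drops, trunk = tap_cascade([0.10, 0.20])
print([round(d, 3) for d in drops])  # → [0.025, 0.045]
print(round(trunk, 2))               # → 0.72
```

So each drop port of the first tap sees 2.5% of the launch power, each drop of the second sees 4.5%, and 72% of the light remains in the trunk for any further taps.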
The small physical size of tap terminals allows them to be mounted aerially, avoiding the need for equipment cabinets, mounting pads, labor costs and the associated local government approvals for this infrastructure.

Distributed taps have two important optical performance parameters that impact end-to-end optical loss budgeting. The first is the tap value or drop loss, which specifies the optical power extracted by the tap between the input distribution fiber and the output drop fibers (the remaining optical power passes to the next tap in the PON). All the drop ports in each tap along the link (four in the example above) have the same drop loss. Taps closer to the optical line terminal (OLT) extract less optical power from the trunk than taps further from the OLT, in order to allow these PONs to have maximum reach; this is particularly important in dispersed rural areas.

The second performance characteristic is tap insertion loss: the amount of light power (both typical and maximum) lost between the input distribution fiber and the output distribution fiber. The final tap in the PON is the terminating tap; it has no output fiber port, as all the light energy passes to the drop ports.

Here is a simplified illustration of the downstream path using typical loss values to help make these parameters clearer. Note that the attenuation of all connectors (below 0.75 dB for SC APC per IEC standards), fusion splices (below 0.3 dB per IEC standards), and the fiber itself (about 0.2 dB per km at 1490 nm) must be included in an actual engineering optical loss budget.

A key takeaway here is that the available range of tap values (drop losses) allows the network engineer to manage the optical link budget so that ONTs both near and far from the OLT receive adequate signal power. This design methodology also manages the 1310 nm upstream signal power from the ONTs back to the OLT.
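Those per-element figures can be combined into a rough downstream loss-budget check. In the sketch below, the launch power, span lengths, tap values and ideal 1:4 drop split are assumptions chosen for illustration; only the connector, splice and fiber losses come from the typical values quoted above:

```python
import math

# Typical per-element losses quoted in the text (1490 nm downstream)
FIBER_DB_PER_KM = 0.2
SPLICE_DB = 0.3         # per fusion splice, IEC worst case
CONNECTOR_DB = 0.75     # SC APC connector, IEC worst case

def frac_to_db(fraction):
    """Convert a power fraction (e.g. 0.10 for a 10% tap) to loss in dB."""
    return -10 * math.log10(fraction)

def drop_port_power(launch_dbm, spans_km, tap_fractions, tap_index, drops_per_tap=4):
    """Downstream power in dBm at a drop port of tap number `tap_index` (0-based)."""
    power = launch_dbm
    for i in range(tap_index + 1):
        power -= spans_km[i] * FIBER_DB_PER_KM     # distribution fiber span
        power -= 2 * SPLICE_DB                     # two splices to insert the tap
        if i < tap_index:
            power -= frac_to_db(1 - tap_fractions[i])  # through-path (insertion) loss
        else:
            power -= frac_to_db(tap_fractions[i])      # drop loss (tap value)
            power -= frac_to_db(1 / drops_per_tap)     # ideal 1:4 split, about 6 dB
            power -= CONNECTOR_DB                      # connector at the drop port
    return power

# 3 dBm launch, two 1 km spans, 10% then 20% taps (assumed values)
print(round(drop_port_power(3.0, [1.0, 1.0], [0.10, 0.20], 1), 2))  # → -12.82
```

Repeating the calculation for each tap position shows whether every ONT lands within receiver sensitivity, which is exactly the budgeting exercise that the range of available tap values makes possible.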
With distributed tap PON architecture, along with tapered splitter PON architecture, service providers have two options to deliver fiber broadband services to underserved rural areas cost effectively. EXFO’s broad portfolio of fiber test tools, including PON power meters, optical fiber multimeters, inspection scopes, and OTDRs with iOLM, are perfectly suited to validate and maintain all types of passive optical networks.
Smart appliances that communicate with each other and share information using the internet are already available. Indeed, the Internet of Things has featured on lists of game-changing technologies for years, and is arguably one of the most hyped tech innovations around. Experts believe the next big wave of technology lies within the Internet of Things, which refers to things that connect to the Internet, like TVs, refrigerators, garage doors and even cars, that collect data about the way we live. And more and more companies are investing big on taking this data and packaging it in ways that are meaningful to businesses and, ultimately, the average person. This includes everything from smart thermostats and garage doors to toothbrushes, tennis racquets and even your bed. These devices collect data about how you use them, learn your habits, typically connect to an app, and give you feedback to improve your lifestyle (or your racquet swing, for example). With sensors getting cheaper by the day, more and more physical objects are becoming part of a network of things, changing the way we live and work.

Why now? Enablers of the IoT

A number of significant technology changes have come together to enable the rise of the IoT. These include the following.

Big Data and predictive analytics: The net result of millions of machines communicating with each other – sensors constantly sending data, people being connected all the time – will inevitably be an explosion in data. Storing, analyzing and making use of this data is key.

Smartphones: Smartphones are now becoming the personal gateway to the IoT, serving as a remote control or hub for the connected home, connected car, or the health and fitness devices consumers are increasingly starting to wear.

Ubiquitous wireless coverage: With Wi-Fi coverage now ubiquitous, wireless connectivity is available for free or at a very low cost, given Wi-Fi utilizes unlicensed spectrum and thus does not require monthly access fees to a carrier.
Machine to machine: The machines can even “reach out” proactively to consumers and suppliers. In the future, the work piece carries the information of “what it wants to be at the end,” and the machines simultaneously process and route the work piece based on capabilities and availability.

Tagging things: NFC, QR codes, digital watermarking, etc.

Sensing things: such as botanicals (technology that reacts to moisture), smart textiles and smart pavement.

Shrinking things: making products smaller, yet smarter.

Thinking things: objects that access semantic web and open cloud data to customize things.

The emergence of Subnets of Things: Islands of connected devices will emerge, driven either by a single point of control, a single point of data aggregation, or potentially a common cause or technology standard. These ‘Subnets of Things’ will be stepping stones towards a full Internet of Things.

Cheap sensors: Sensor prices have dropped to an average of 50 cents from $1.20 in the past 10 years.

Cheap bandwidth: The cost of bandwidth has also declined precipitously, by a factor of nearly 45X over the past 10 years.

Cheap processing: Similarly, processing costs have declined by nearly 70X over the past 10 years, enabling more devices to be not just connected, but smart enough to know what to do with all the new data they are generating or receiving.

Mobile computing: Bandwidth is constantly increasing, allowing us to access or transmit information at increasing rates of speed. Also important: mobile components have become affordable for everybody. Just as with CommTech, connectivity will drive the use of semiconductors to manage the communications, driven by Wi-Fi, Bluetooth, ZigBee, NFC, and other IoT standards.

Cheap brains: More devices will use microcontrollers or low-cost microprocessors given their lower price points and power requirements relative to traditional semiconductor architecture.
In “the Internet of Things,” everyday devices are equipped with sensors and connectivity so they can work together, understand what we’re doing, and operate automatically to make our lives easier. And, of course, we’ll be able to control and configure it all, likely with our tablets and smartphones, or by speaking. After all, Siri and Google Now have taken voice recognition mainstream. Smart devices use Internet technologies like Wi-Fi to communicate with each other, your laptop, and sometimes directly with the cloud. Some also talk to a central hub that serves as a control point for many different devices, like the Revolv. Ideally, owners can use that central access point from their smartphones and tablets, either at home or when they’re out and about. In the past, some of these devices were wired together into more complex systems. But it wasn’t until they were provided with some intelligence, connected to the Internet, and empowered by a new wave of technological accessibility—through cloud computing, smartphones, and the prototyping capabilities of digital fabrication—that the IoT came into being. As new and challenging as today’s IoT is, it offers a large and wide-open playing field. The companies that gain the right to win in this sphere will be those that understand just how disruptive the IoT will be, and that create a value proposition to take advantage of the opportunities.
<urn:uuid:63983887-c2a5-4b52-80cb-227a0347f7e0>
CC-MAIN-2022-40
https://enterprisetechsuccess.com/article/A-Peek-into-the-Future-'Internet-of-Things'/SVJ1ZDVNRUgyaEtXSXJucUhKditLdz09
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00314.warc.gz
en
0.936194
1,088
3
3
Network Computing Security

The intention to adopt cloud computing has increased rapidly in many organizations. Cloud computing offers many potential benefits to small and medium enterprises, such as fast deployment, pay-for-use, lower costs, scalability, rapid provisioning, rapid elasticity, ubiquitous network access, greater resiliency, and on-demand security controls. Despite these extraordinary benefits, studies indicate that organizations are slow to adopt cloud computing due to the security issues and challenges associated with it. In other words, security is one of the major issues that slows cloud computing adoption. Hence, cloud service providers should address privacy and security issues as an urgent priority and develop efficient and effective solutions.

Cloud computing utilizes three delivery models (SaaS, PaaS, and IaaS) to provide infrastructure resources, an application platform, and software as services to the consumer. These service models need different levels of security in the cloud environment. According to Takabi et al. (2010), cloud service providers and customers are both responsible for security and privacy in cloud computing environments, but their level of responsibility differs across delivery models. Infrastructure as a Service (IaaS) serves as the foundation layer for the other delivery models, and a lack of security in this layer affects the other delivery models. In IaaS, although customers are responsible for protecting operating systems, applications, and content, the security of customer data is a significant responsibility for cloud providers. In Platform as a Service (PaaS), users are responsible for protecting the applications that developers build and run on the platforms, while providers are responsible for isolating the users’ applications and workspaces from one another.
In SaaS, cloud providers, particularly public cloud providers, have more responsibility than clients for enhancing the security of applications and achieving a successful data migration. In the SaaS model, data breaches, application vulnerabilities and availability are important issues that can lead to financial and legal liabilities.

Infrastructure security levels

Bhadauria and his colleagues (2011) conducted a study on cloud computing security and found that security should be provided at different levels, such as the network level, host level, application level, and data level.

Network level security: All data on the network need to be secured. Strong network traffic encryption techniques such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS) can be used to prevent leakage of sensitive information. Several key security elements such as data security, data integrity, authentication and authorization, data confidentiality, web application security, virtualization vulnerability, availability, backup, and data breaches should be carefully considered to keep the cloud up and running continuously.

Application level security: Studies indicate that most websites are secured at the network level, while there may be security loopholes at the application level which may allow information access to unauthorized users. Software and hardware resources can be used to provide security to applications, so that attackers are not able to get control over these applications and change them. XSS attacks, cookie poisoning, hidden field manipulation, SQL injection attacks, DoS attacks, and Google hacking are some examples of threats to application level security that result from the unauthorized usage of applications. The majority of cloud service providers store customers’ data in large data centres.
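Data integrity, one of the key elements listed above, can be illustrated with a small sketch. The following toy model (a greatly simplified illustration, not a full Byzantine fault tolerant protocol; all names and data are invented) stores copies of each object with several providers alongside a SHA-256 digest and, on read, skips any copy whose digest no longer matches:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ReplicatedStore:
    """Toy model: replicate data across providers, verify digests on read.

    This shows only integrity checking and fallback, not the agreement
    protocols that real Byzantine fault tolerant replication requires.
    """
    def __init__(self, n_providers: int):
        self.replicas = [dict() for _ in range(n_providers)]

    def put(self, key: str, data: bytes) -> None:
        for replica in self.replicas:
            replica[key] = (data, digest(data))

    def get(self, key: str) -> bytes:
        for replica in self.replicas:
            data, d = replica[key]
            if digest(data) == d:      # detect silent corruption
                return data
        raise IOError("all replicas corrupted")

store = ReplicatedStore(n_providers=3)
store.put("report", b"quarterly figures")
# Simulate corruption at one provider during a transfer:
store.replicas[0]["report"] = (b"garbled bytes", store.replicas[0]["report"][1])
print(store.get("report"))  # the original bytes are still recovered
```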
Although cloud service providers say that data stored in the cloud is secure and safe, customers’ data may be damaged during transfer operations to or from the cloud storage provider. In fact, when multiple clients use cloud storage, or when multiple devices are synchronized by one user, data corruption may happen. Cachin and his colleagues (2009) proposed a solution, Byzantine protocols, to avoid data corruption. In cloud computing, tolerating arbitrary faults in software or hardware, including inappropriate behavior and intrusion, is called Byzantine fault tolerance (BFT). Scholars use BFT replication to store data on several cloud servers, so if one of the cloud providers is damaged, they are still able to retrieve the data correctly. In addition, different encryption techniques, like public and private key encryption, can be used to control access to data.

Service availability is also an important issue in cloud services. Some cloud providers, such as Amazon, mention in their licensing agreements that their service may be unavailable from time to time. Backups or the use of multiple providers can help companies protect services from such failures and ensure data integrity in cloud storage.

By Mojgan Afshari

Mojgan Afshari is a senior lecturer in the Department of Educational Management, Planning and Policy at the University of Malaya. She earned a Bachelor of Science in Industrial Applied Chemistry from Tehran, Iran. Then, she completed her Master’s degree in Educational Administration. After living in Malaysia for a few years, she pursued her PhD in Educational Administration with a focus on ICT use in education from the University Putra Malaysia. She currently teaches courses in managing change and creativity and statistics in education at the graduate level.
<urn:uuid:ce5d516d-bc5e-48e7-9ec4-175a84439d05>
CC-MAIN-2022-40
https://cloudtweaks.com/2014/07/computing-security-network-application-levels/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00314.warc.gz
en
0.936673
1,005
2.71875
3
What is process mining?

Process mining applies data science to discover, validate and improve workflows. By combining data mining and process analytics, organizations can mine log data from their information systems to understand the performance of their processes, revealing bottlenecks and other areas of improvement. Process mining leverages a data-driven approach to process optimization, allowing managers to remain objective in their decision-making around resource allocation for existing processes.

Information systems, such as Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM) tools, provide an audit trail of processes with their respective log data. Process mining utilizes this data from IT systems to create a process model, or process graph. From here, the end-to-end process is examined, and the details of it and any variations are outlined. Specialized algorithms can also provide insight into the root causes of deviations from the norm. These algorithms and visualizations enable management to see if their processes are functioning as intended, and if they aren’t, they arm them with the information to justify and allocate the necessary resources to optimize them. They can also uncover opportunities to incorporate robotic process automation into processes, expediting any automation initiatives for a company.

Process mining focuses on different perspectives, such as control-flow, organizational, case, and time. While much of the work around process mining focuses on the sequence of activities—i.e. control-flow—the other perspectives also provide valuable information for management teams. Organizational perspectives can surface the various resources within a process, such as individual job roles or departments, and the time perspective can demonstrate bottlenecks by measuring the processing time of different events within a process.
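To make the idea of deriving a process model from log data concrete, here is a minimal, illustrative sketch (the case IDs and activity names are invented) that builds a directly-follows graph, one simple form of process model, from an event log:

```python
from collections import Counter

# Hypothetical event log: (case_id, activity), already ordered by timestamp.
event_log = [
    ("c1", "receive order"), ("c1", "check credit"), ("c1", "ship"),
    ("c2", "receive order"), ("c2", "check credit"),
    ("c2", "manual review"), ("c2", "ship"),
    ("c3", "receive order"), ("c3", "check credit"), ("c3", "ship"),
]

def directly_follows(log):
    """Count how often activity b directly follows activity a within a case."""
    traces = {}
    for case, activity in log:             # group events into per-case traces
        traces.setdefault(case, []).append(activity)
    edges = Counter()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):  # consecutive activity pairs
            edges[(a, b)] += 1
    return edges

for (a, b), n in sorted(directly_follows(event_log).items()):
    print(f"{a} -> {b}: {n}")
```

Even this tiny model surfaces the variant path (the "manual review" detour in case c2), which is the kind of deviation the discovery and conformance techniques described below formalize at scale.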
In 2011, the Institute of Electrical and Electronics Engineers (IEEE) published the Process Mining Manifesto (PDF, 9.6 MB) (link resides outside IBM) in an effort to advance the adoption of process mining to redesign business operations. While proponents of process mining, like the IEEE, promote its adoption, Gartner notes that market factors will also play a role in its acceleration. Digital transformation efforts will prompt more investigation around processes, subsequently increasing the adoption rate of new technologies, such as artificial intelligence, task automation, and hyperautomation. The pace of these organizational changes will also require businesses to apply operational resilience to adapt as well. As a result, enterprises will increasingly lean on process mining tools to achieve their business outcomes.

Types of process mining

Wil van der Aalst, a Dutch computer scientist and professor, is credited with much of the academic research around process mining. Both his research and the above-mentioned manifesto describe three types of process mining, which are discovery, conformance, and enhancement.

Discovery: Process discovery uses event log data to create a process model without outside influence. Under this classification, no previous process models would exist to inform the development of a new process model. This type of process mining is the most widely adopted.

Conformance: Conformance checking confirms if the intended process model is reflected in practice. This type of process mining compares a process description to an existing process model based on its event log data, identifying any deviations from the intended model.

Enhancement: This type of process mining has also been referred to as extension, organizational mining, or performance mining. In this class of process mining, additional information is used to improve an existing process model.
For example, the output of conformance checking can assist in identifying bottlenecks within a process model, allowing managers to optimize an existing process.

Process mining vs. data mining vs. business process management

Process mining sits at the intersection of business process management (BPM) and data mining. While process mining and data mining both work with data, the scope of each dataset differs. Process mining specifically uses event log data to generate process models which can be used to discover, compare, or enhance a given process. The scope of data mining is much broader, and it extends to a variety of data sets. It is used to observe and predict behaviors, having applications within customer churn, fraud detection, and market basket analysis to name a few. Process mining takes a more data-driven approach to BPM, which has historically been managed more manually. BPM generally collects data more informally through workshops and interviews, and then uses software to document that workflow as a process map. Since the data that informs these process maps is more qualitative, process mining brings a more quantitative approach to a process problem, detailing the actual process through event data.

Why is process mining important?

Increasing sales isn’t the only way to generate revenue. Six sigma and lean methodologies also demonstrate how the reduction of operational costs can also increase your return-on-investment (ROI). Process mining helps businesses reduce these costs by quantifying the inefficiencies in their operational models, allowing leaders to make objective decisions about resource allocation. The discovery of these bottlenecks can not only reduce costs and expedite process improvement, but it can also drive more innovation, quality, and better customer retention. However, since process mining is still a relatively new discipline, it still has some hurdles to overcome.
Some of those challenges include:
- Data quality: Finding, merging and cleaning data is usually required to enable process mining. Data might be distributed over various data sources. It can also be incomplete or contain different labels or levels of granularity. Accounting for these differences will be important to the information that a process model yields.
- Concept drift: Sometimes processes change as they are being analyzed, resulting in concept drift.

Process mining use cases

Process mining techniques have been used to improve process flows across a wide variety of industries. Since process maps highlight the key performance indicators (KPIs) which impact performance, they have spurred businesses to reexamine their operational inefficiencies. Some use cases include:
- Education: Process mining can help identify effective course curriculums by monitoring and evaluating student performance and behaviors, such as how much time a student spends viewing class materials.
- Finance: Financial institutions have used process mining software to improve inter-organizational processes, audit accounts, increase income, and broaden their customer base.
- Public works: Process mining has been used to streamline the invoice process for public works projects, which involve various stakeholders, such as construction companies, cleaning businesses, and environmental bureaus.
- Software development: Since engineering processes are typically disorganized, process mining can help to identify a clearly documented process. It can also help IT administrators monitor the process, allowing them to verify that the system is running as expected.
- Healthcare: Process mining provides recommendations for reducing the treatment processing time of patients.
- E-commerce: It can provide insight into buyer behaviors and provide accurate recommendations to increase sales.
- Manufacturing: Process mining can help to assign the appropriate resources depending on case (i.e., product) attributes, allowing managers to transform their business operations. They can gain insight into production times and reallocate resources, such as storage space, machines, or workers, accordingly.

Process mining and IBM

Process mining is just one part of modernizing your organization as the need for automation widens across business and IT operations. A move toward greater automation should start with small, measurably successful projects, which you can then scale and optimize for other processes and in other parts of your organization. Working with IBM, you’ll have access to the AI-powered automation capabilities of IBM Cloud Pak® for Business Automation, including prebuilt workflows, to help accelerate innovation by making every process more intelligent.

Take the next step:
- Easily get all the insights you need into how your business processes are performing from existing data that resides in your IT systems and desktops with the IBM Process Mining solution.
- IBM Process Mining can also come integrated with automation capabilities such as RPA, workflow, and process modeling as part of the IBM Cloud Pak® for Business Automation.
- Explore the potential business value of adopting the IBM Cloud Pak for Business Automation through the analysis of an actual client’s cost savings and realized business benefits in the Forrester TEI study.
- Get to the truth about the urgency, value, opportunities and limitations related to automating work by reading The quick and practical guide to digital business automation (PDF, 880 KB).

IBM Cloud Pak for Business Automation is a flexible set of integrated software that helps you design, build and run intelligent automation services and applications on any cloud, using low-code tools. IBM Process Mining reveals and tackles inefficiencies that are affecting the performance of your business processes.
IBM Process Mining uses process and task mining technology to provide complete visibility of your processes, with fact-based insights derived from existing business data to help you audit, analyze and optimize existing business processes. Integrating process mining capabilities into the IBM Cloud Paks for Automation will enable your enterprise to optimize operational processes and functionality in the following ways:
- Pinpoint activities that should be automated (for example, by an RPA bot), simulate the impact on to-be processes before investing, and automatically generate RPA bots.
- Easily trigger corrective actions from data-driven process insights, like sending timely notifications via email or paying overdue invoices, so that you can focus on the most relevant work more efficiently.
- Gain complete process transparency into how your processes are running and tackle inefficiencies or compliance issues quickly.
- Wrap AI around mining results and use machine learning to identify patterns and predict future risks.
- Infuse intelligent, fact-based insights into key decisions to accelerate digital transformation and process improvement initiatives.

Get started with a free 30-day IBM Process Mining trial today.
<urn:uuid:3a58a4f2-a327-460e-a3f7-40623498eaed>
CC-MAIN-2022-40
https://www.ibm.com/cloud/learn/process-mining
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00514.warc.gz
en
0.928789
2,002
2.671875
3
Imaging breakthrough could aid development of quantum microscopes

(Phys.org) A breakthrough in quantum imaging could lead to the development of advanced forms of microscopy for use in medical research and diagnostics. A team of physicists from the University of Glasgow and Heriot-Watt University have generated images by finding a new way to harness a quantum phenomenon known as Hong-Ou-Mandel (HOM) interference. Named after the three researchers who first demonstrated it in 1987, HOM interference occurs when quantum-entangled photons are passed through a beam splitter—a glass prism which can turn a single beam of light into two separate beams as it passes through. Inside the prism, the photons can either be reflected internally or transmitted outwards. When two indistinguishable photons arrive at the beam splitter at the same time, they always leave together through the same output, so the rate of coincident detections at the two separate outputs drops to zero. That dip is the Hong-Ou-Mandel effect, which demonstrates the perfect entanglement of two photons. It has been put to use in applications like logic gates in quantum computers, which require perfect entanglement in order to work. It has also been used in quantum sensing by putting a transparent surface between one output of the beam splitter and the photodetector, introducing a very slight delay into the time it takes for photons to be detected. Sophisticated analysis of the delay can help reconstruct details like the thickness of surfaces. Now, the Glasgow-led team has applied it to microscopy, using single-photon sensitive cameras to measure the bunched and anti-bunched photons and resolve microscopic images of surfaces. In the Nature Photonics paper, they show how they have used their setup to create high-resolution images of some clear acrylic sprayed onto a microscope slide with an average depth of 13 microns and a set of letters spelling ‘UofG’ etched onto a piece of glass at around 8 microns deep. Their results demonstrate that it is possible to create detailed, low-noise images of surfaces with a resolution of between one and 10 microns, producing results close to that of a conventional microscope.
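The coincidence dip at the heart of the technique can be modeled simply. For Gaussian single-photon wavepackets, a standard textbook expression (a generic model, not the analysis used in this particular paper) gives the coincidence probability as a function of the relative delay between the photons:

```python
import math

def coincidence_probability(delay, sigma=1.0):
    """Textbook HOM coincidence probability for Gaussian wavepackets.

    delay: relative arrival-time delay between the two photons;
    sigma: wavepacket duration. At zero delay the photons are
    indistinguishable, bunch, and coincidences vanish; at large delay
    the beam splitter routes them independently, giving P = 1/2.
    """
    return 0.5 * (1.0 - math.exp(-(delay / sigma) ** 2))

for tau in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"delay {tau:>3}: P_coincidence = {coincidence_probability(tau):.3f}")
```

Scanning the delay and recording this probability traces out the dip; in depth-sensing applications, a sample-induced shift of the dip position encodes the thickness being measured.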
“Now that we’ve established that it’s possible to build this kind of quantum microscopy by harnessing the Hong-Ou-Mandel effect, we’re keen to improve the technique to make it possible to resolve nanoscale images. It will require some clever engineering to achieve, but the prospect of being able to clearly see extremely small features like cell membranes or even strands of DNA is an exciting one. We’re looking forward to continuing to refine our design.” Sandra K. Helsel, Ph.D. has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona.
<urn:uuid:5db3bd7d-878e-4fbf-80fb-703e506c1655>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/imaging-breakthrough-could-aid-development-of-quantum-microscopes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00514.warc.gz
en
0.934202
555
3.578125
4
Healthcare IT deals with the use of electronic health records (EHRs) instead of paper medical records to better supervise patient care through protected use and sharing of health information. In recent years, the government has been driving the adoption of healthcare IT by enacting HIPAA. The act’s standards are intended to improve the efficiency of the American health care system and push the use of electronic health records in the nation’s health care system. Healthcare IT allows healthcare providers to receive accurate and complete information about a patient’s health, and allows patients to securely access their medical records over the internet. By enabling providers to better manage patient care through the secure use and sharing of health information, healthcare IT can ultimately make health care systems more efficient, decrease medical errors, and supply more reliable care at reduced costs.
<urn:uuid:8d94a05a-45c7-4833-b132-f04ec593290f>
CC-MAIN-2022-40
https://cyrusone.com/resources/tools/healthcare-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00514.warc.gz
en
0.929698
202
3.34375
3
Ocean wave energy

A team of researchers wants to make the ocean an affordable source of renewable energy. Engineers at the Okinawa Institute of Science and Technology (OIST) have already harnessed the energy of ocean currents using underwater turbines. Now, the group is targeting the kinetic power of waves. The team is preparing to install turbines where the energy of the ocean is most apparent. “Particularly in Japan, if you go around the beach you’ll find many tetrapods,” said Tsumoru Shintake, a professor at OIST. Tetrapods are pyramid-like concrete structures designed to dampen the force of incoming waves and protect beaches from erosion. The researchers plan to replace tetrapods with turbines designed to convert wave energy into electricity. “Surprisingly, 30 percent of the seashore in mainland Japan is covered with tetrapods and wave breakers,” Shintake said. Using just 1% of the seashore of mainland Japan would generate about 10 gigawatts of energy, which is equivalent to 10 nuclear power plants.

Wave Energy Converter

Shintake and his colleagues began designing their turbine prototype in 2013, naming the technology the Wave Energy Converter (WEC). The turbines are designed to withstand the force of large waves generated by typhoons and big ocean storms. The turbine’s blades, inspired by dolphin fins, are strong but flexible, and the post on which each turbine is mounted is also flexible, like the stem of a flower that bends back against the wind: it is designed to bend but not break. WECs rise just above the sea’s surface when the ocean is calm, but will be submerged by onrushing waves. Because the blades spin only so fast, fish that get caught in them will be able to escape. The engineers are now preparing to install their first prototypes, half-scale models, in the ocean. The turbines will power LED fixtures to demonstrate their potential. “I hope these turbines will work hard quietly, and nicely, on each beach on which they are installed,” Shintake said.

More information: [OIST]
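As a rough, back-of-the-envelope illustration of such capacity claims, the standard deep-water approximation for wave energy flux per metre of wave crest can be computed directly. The wave height, period, and conversion efficiency below are assumed example values, not figures from OIST:

```python
import math

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def wave_power_per_metre(height_m, period_s):
    """Deep-water wave energy flux in W per metre of wave crest.

    Uses the standard approximation P = rho * g^2 * H^2 * T / (64 * pi),
    with H the significant wave height and T the energy period.
    """
    return RHO * G ** 2 * height_m ** 2 * period_s / (64 * math.pi)

# Assumed example sea state: 2 m waves with an 8 s period.
p = wave_power_per_metre(2.0, 8.0)
print(f"{p / 1000:.1f} kW per metre of wave crest")

# Coastline needed for 10 GW at an assumed 30% conversion efficiency:
needed_km = 10e9 / (p * 0.30) / 1000
print(f"about {needed_km:.0f} km of coastline at this sea state")
```

Real resource assessments use measured wave climate statistics rather than a single sea state, so this is only an order-of-magnitude sanity check.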
<urn:uuid:2480b84c-7eec-4c73-aa01-485ba8c7bb0c>
CC-MAIN-2022-40
https://areflect.com/2017/09/23/japan-scientists-plans-to-turn-ocean-wave-energy-into-electricity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00514.warc.gz
en
0.916984
424
3.671875
4
There is no question that plastic surrounds us at every given moment. In every imaginable form, the material is in our home appliances, food packaging, electronics, and so much more. We come in contact with plastic on a daily basis and typically don’t think much about it. However, plastic is now filling up all other spaces we can think of — like oceans and landfills. And now it seems to have made its way onto our dinner plates.

A new U.K. study shows that plastics extracted from e-waste — the type of plastic that comes in contact with dangerous heavy metals and toxic compounds — are used in black plastic food containers; i.e. the plastic containers used for takeout and frozen food items. The term ‘black plastic’ is exactly as it sounds — plastic material colored with carbon black, which is in many cases produced by burning petroleum byproducts. This material is also hard to recycle by regular means: most recycling facilities sort plastics by shining near-infrared light on them and reading the reflected signal to identify each polymer type. Carbon black absorbs that near-infrared light instead of reflecting it, so the sorting machines that handle clear or lighter colored plastic bottles simply cannot detect black plastic. This means that the average recycling facility in the U.K. or U.S. can’t process black plastic as easily as it can other items. And instead of finding efficient ways of responsibly dealing with the material, more often than not, the material heads for landfills.

Essentially, black plastic is very harmful to the environment. Not only can it not be sorted easily for recycling, but reuse isn’t much of an option, as the material cannot be re-dyed. That is, unless you’re looking to reuse e-waste materials. Black is typically associated with being modern and sleek, therefore it makes sense that the color would be heavily integrated in technology. With a color that is always ‘in vogue,’ how can you go wrong, right?
From our televisions to our desktops, laptops, and basically all other electronic devices – black plastic is a key material. However, e-waste is full of valuable and often toxic materials. The plastic that’s used in electronics is treated with chemicals like bromine and antimony and heavy metals like lead, to make it flame-retardant and otherwise suitable for electronics. Once plastic material is used in the creation of electronics, it is really only supposed to be recycled back into electronics. However, some recyclers try to find other methods of recycling black plastic, however unhealthy and dangerous they may be. Recyclers, many of whom are located in China or in the Middle East, are supposed to separate the plastic according to whether it’s safe or not to be reused in consumer applications. Yet, as we can see with food containers made from black plastic, this is not happening.

Andrew Turner, the environmental scientist from the University of Plymouth who authored the study, looked at more than 600 individual items. To figure out what each one contained, he bombarded them with X-rays and studied the reflected light. Different compounds light up at different wavelengths, so he could tell what each coffee mug, clothes hanger, and toy car contained. The results weren’t great: he found more than the legal limits of worrisome chemicals and heavy metals in a number of the items. Given the global supply chains that give us our electronics, toys, and other plastic goods, Turner says he would expect to find similar results in the United States. At the moment, there’s no way to know exactly where in the supply chain the black plastic from e-waste is entering the manufacturing process for non-electronic household items, Turner says. He’s hoping that his future research will help us understand the process, as well as figure out whether the harmful substances are leaching out and being ingested or absorbed by consumers.
Regardless, the idea of consuming traces of electronic waste as we enjoy a convenient meal is very concerning.
It's safe to state that digital innovation has already paved the way for everyday technologies, such as laptop computers, the Internet, mobile phones and tablets, social media and cloud computing, and it keeps bringing new technologies: the most recent arrivals include 3D printing, immersive computing, virtual reality (VR) and augmented reality (AR), while Artificial Intelligence (AI) and robotics are beginning to reach their long-promised and over-hyped potential.

Whenever it is stated that, from a technological perspective, such innovations are quite disruptive, it must be understood that the way digital innovation has transformed how companies do business, and how it influences every aspect of human activity, is a true and ongoing revolution. The new digital world is drastically changing how people live, communicate, manufacture, produce, learn, collaborate and make decisions, even the most critical ones, in a way that is breaking down traditional barriers to entering markets, such as access to capital for innovation, distribution, customers, information and talent.

In many different aspects, the word "digital" fills a vacuum. By translating the technological changes faced by the business community, the term "digital" may make them more tangible; however, no standard form or meaning of "digital" has yet emerged, for it serves a variety of applications. The shift in understanding, and in the capabilities required to successfully address digitalization, goes way beyond a simple technological question, as it requires most aspects of the organization to be challenged, including the functioning of boards and C-suites, and that is a frightening task, unprecedented in both scope and speed in the history of management and governance of organizations. The point is, there is no digital solution or definition that will suit all initiatives within organizations.
According to an INSEAD survey, these are the 10 most important digital-related initiatives organizations currently have, and how each is defined:

1. Implementation of digital platforms aimed at facilitating collaboration with suppliers, partners and/or customers
2. Cost Efficiencies: digital initiatives aimed at reducing an organization's costs
3. Customer Engagement: initiatives primarily focused on allowing an organization to engage more frequently, or differently, with consumers, using digital as a facilitator
4. Data Management & Analytics: initiatives aimed at facilitating the gathering and manipulation of internal or external data to improve an organization's strategic decision making
5. Customer Experience: implementation of new initiatives, processes, and digital platforms and tools aimed at creating, managing and/or measuring cross-channel customer experiences
6. Digital Marcom: the use of various digital techniques, channels and platforms to build a brand, communicate and/or promote an organization, its products or services
7. Sales & Marketing: creation of new digital sales and marketing initiatives, or digitizing the organization's existing sales, marketing and customer service processes (excluding communications)
8. Digital Transformation: a fundamental transformation of the organization across the value chain and its functions, impacting the business model and all touch-points with consumers, suppliers and collaborators
9. Administrative Solutions: initiatives aimed at digitizing the organization's administrative activities
10. Business Model: strategic initiatives aimed at changing the organization's business model, from its operating model to its infrastructure, including what it sells, to whom, and how it goes to market

For leaders contemplating their digital journey, it is necessary not only to establish the required technology, needs and expectations, but above all to understand and measure the impact of digital transformation on the strategy their organizations have decided to set and follow for their businesses. Here are some suggested guidelines for that.

Digital business strategies are continuous processes

Not too long ago, there were not so many sudden major business trends, and strategic choices were limited and discrete. There was a good time window for organizations and their boards to formulate their core strategies, shift their focus for a few years to implement them, and then review them. In this environment, the approach to the strategy phases could be repeated every five years or so.

The impact of digital is changing the business reality faced by executives within organizations: major trends are arriving rapidly, creating multiple strategic choices and hypotheses, while the window for the decision-making process is shrinking just as rapidly, given the constant trend overhauling and shrinking product/service time to market. The time when executives could dedicate months to defining a five-year strategy, validate all assumptions, and spend the rest of the five years overseeing its execution is in the past. Executives would benefit from adapting their process of strategy development as suggested below, given the deep and irreversible changes digitalization has brought to the business environment:

- Continuous engagement: the strategy formulation process must be continuously reviewed and engaged by the board, since one-off approvals are no longer enough given the speed of market changes.
- Continuous examination: continuously understanding external market trends and confronting them with internal capabilities is a necessary ongoing process to ensure the strategy's efficiency.
- Merging formulation and execution: scanning, evaluating, formulating and implementing a strategy are increasingly a seamless, continuous and simultaneous process.
- Cross-level collaboration: all levels across different departments within the organization must understand and validate the different phases of formulation and execution for any strategy to be efficient.
- Strategic digital enablement: executives must go beyond merely predicting the next big technology trend or managing different digital initiatives, and embrace digital opportunities across all dimensions of the business, so their organizations can realize their full strategic potential.

The strategy formulation and execution processes are no longer step-by-step procedures; they have become continuous exercises, involving constant engagement of the board, as well as two-way communication across hierarchic levels and the functional structure, to achieve their full efficiency potential.

Think business first when facing technology disruption

Any organization operating in today's market, in any given industry, geography or segment, faces near-constant and imminent technological disruption that can either empower or derail a CIO's corporate IT strategic plan. Whenever CIOs are planning and deciding how to respond to new disruptions, they should first consider the business impact of those disruptions and carefully take all aspects into consideration before adopting and implementing any new disruptive technology; the challenges will vary from industry to industry. In today's digital market, CIOs ensure that they're investing in the right technology to drive their business forward by deeply understanding how their businesses operate, how people work, and how consumers use their businesses.
Within organizations, new technologies and innovation are coming from everywhere, not only from IT anymore. So, by having a truly firm understanding of the business, how technology works within the business, and how it affects everybody using it, CIOs will have the information needed to build an efficient and appropriate strategy that aligns new technologies with their businesses' needs, mitigating possible investment issues. It's a fact that CIOs will not be accurate and efficient until they have that core understanding; with it, they can better decide and plan when to implement and test new technology, or whether the right path is to keep up the investments made in technologies already tested and simply enhance those, rather than jumping into new ones. No wonder CIOs must pay attention to the large number of disruptors in the market, for some of these technological disruptions are truly good and useful for their businesses' needs; but they must make sure that those don't disrupt their businesses as well, and that mistake can be fatal.

Understand the level of engagement in the key areas of digital transformation

Whether an organization has a digital strategy in place or not, a strategic CIO must understand that certain changes and actions are necessary for the organization to become properly digitally enabled. It's true and predictable that most organizations are highly engaged in some aspects of digital transformation, but interestingly, many research notes and surveys have reported a discrepancy between the stated level of engagement and the actual organization-wide engagement in digital.
This is well reflected in a recent INSEAD study, in which 65 percent of respondents reported that their organizations are changing their business models due to changes in the business environment caused by digital transformation; in contrast, when asked what their key digital initiatives were, only 9 percent of respondents mentioned having initiatives for such changes. Another interesting finding of the same study was that all respondents reported a high level of confidence in their engagement in the key areas responsible for digital transformation, except for technology and people training, yet only 5 percent had previously indicated that they were actually engaged in digital transformation.

Such findings show a strong focus on business processes over marketing and communications initiatives, but great difficulty in navigating complex scenarios that involve people and technology. Changing an organizational culture is always a challenge, but the degree to which people play a crucial role in digital transformation is also a very important topic to watch, since companies need people who embrace change more than the average person to be successful in this new, fast-paced and ever-changing digital environment.

With that said, CIOs must understand that the overall level of engagement organizations show towards digital transformation does not reflect the same levels across its key areas. By understanding these differences, they can more accurately address the issues and difficulties of each key area, better promote the benefits of digital transformation, and become what organizations currently need the most: a true digital agent.
Boldly explore new markets

Technology increases the ability of companies to explore and innovate, but most of a company's activities are still quite focused on playing to its market strengths and exploiting its core business competences. This is widely accepted by stakeholders and boards as the proper approach to produce the expected return on investment, but it is not enough anymore. In today's unpredictable and fast business environment, companies must not only stick to their core business but also expend considerable effort on exploring new business models and revenue streams, to better diversify their revenue sources and mitigate potential loss risks.

In order to be successful in this new business exploration, organizations must rethink themselves, move away from established management techniques, and continuously engage in acquiring new capabilities, therefore shifting from a traditional "plan-to-execute" mindset to one that also plans to evaluate, learn and adapt. This necessity of exploration through continuous learning and experimentation pushes CIOs and other members of companies' boards to accept higher levels of uncertainty, ambiguity and risk. They must make sure that the company does not rest in its comfort zone, whether that is its core business, key skills or established positions. These executives must look beyond traditional KPIs to avoid becoming trapped in repeated behavior or short-term thinking, which will make them fail.

Finally, the future of business is digital, and in order to survive and be successful, companies will have to digitalize themselves entirely: their processes, their language, their mindsets and their interactions. Leading that is not an easy task for anyone.
However, the turbulence brought by the digital age offers a unique opportunity for those who lead and govern organizations through this transformational process, as the digital economy brings unparalleled business opportunities to companies in every single industry, introducing unprecedented innovations that will turn business into a more transparent, collaborative and customer-centered activity than ever. By leading digital transformation, CIOs have the opportunity to achieve levels of innovation, competence, effectiveness, leadership and responsibility that can bring an unseen positive impact on companies' business success and their impact on modern society. CIOs can efficiently lead the greatest disruption in both leadership and governance in the entire history of business; it's all in their hands.
Cost Based Project Selection Process

The success of every project depends on the amount of time invested in it before it is initiated by the concerned project managers. In itself, the selection of a project is a very crucial task, and if done the right way, the chances of its success increase manifold. There are various techniques for selecting the apt project, and the most effective and analytical one is referred to as the 'cost based project selection process.'

The basic objective of any project is to earn maximum profits by minimising costs. This technique aims at ascertaining the costs involved in a project and helps in selecting the most economical one, which promises to generate maximum profits with relatively lesser risks. It involves various methods through which the cost of a given project can be estimated. These methods follow different processes, but the end goal of every method is to assist the management in pinpointing the preeminent project.

Opportunity Cost Method

The opportunity cost method indicates the earnings that are sacrificed in the event of investing in a particular project. For example, if you have Rs.10000 at your disposal with the option to invest either in shares or in a fixed deposit with a bank, then regardless of the option chosen, the income forgone on the other option is referred to as the opportunity cost.

Pay-Back Period Method

The pay-back period refers to the overall period in which the total cost of a project is likely to be recovered. This is one of the easiest and most widely used techniques for understanding the feasibility of a project. The project with the lesser pay-back period should be selected for gaining optimum returns.

Benefit-Cost Ratio Method

Also known as the cost-benefit ratio, this method compares the present cost involved in the project with the present inflows or earnings of the same. The project with a higher benefit ratio is given preference in the ensuing selection process.
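The pay-back period and benefit-cost ratio methods described above can be sketched in a few lines of code. The cash-flow figures below are invented purely for illustration:

```python
def payback_period(initial_cost, annual_inflows):
    """Number of years until cumulative inflows recover the initial cost."""
    recovered = 0.0
    for year, inflow in enumerate(annual_inflows, start=1):
        recovered += inflow
        if recovered >= initial_cost:
            return year
    return None  # cost is never recovered within the given horizon


def benefit_cost_ratio(pv_inflows, pv_costs):
    """Ratio of present-value benefits to present-value costs; > 1 is favorable."""
    return pv_inflows / pv_costs


# Two hypothetical projects: (initial cost, yearly inflows)
project_a = (10000, [3000, 4000, 5000, 5000])
project_b = (10000, [6000, 5000, 2000, 1000])

print(payback_period(*project_a))  # A recovers its cost only in year 3
print(payback_period(*project_b))  # B recovers its cost in year 2, so it is preferred
```

Under the pay-back method, project B wins because its cost is recovered a year sooner, even though both projects cost the same.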
Net Present Value Method (NPV)

The Net Present Value method deducts the present value of every future estimated cost from the present value of the project's estimated inflows, arriving at its NPV (Net Present Value). This technique of cost based project selection is also applied to figure out the best project option. The project boasting the highest Net Present Value is desirable.

Internal Rate of Return (IRR)

This is the rate at which the total discounted profits become equal to the total discounted costs of the project, i.e. it is the discount rate at which the Net Present Value becomes zero. The project with the higher internal rate of return ends up having the upper hand in the race.

Estimated Cash Flow Method

The estimated cash flow method projects the estimated total cash inflows and outflows over a given number of years. It offers a fair idea of the total cost expected to run down the years, and the total profit the project will earn. In the end, the project with the highest spread, i.e. total profit less total cost, is selected.

Discounted Cash Flow Method

The difference between the estimated cash flow method and the discounted cash flow method is that here the estimated total costs and profits are adjusted to the inflation rate of each given year. Compared to the estimated cash flow method, this gives a more vivid picture of the estimated costs, estimated profits, and the spread.

Experienced project managers apply different methods to derive a real estimation of costs and profits, so that the most appropriate project can be chosen. Are you ready to do the same?

Author: Uma Daga
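As a closing illustration, the NPV and IRR methods discussed above can be sketched as follows. The cash flows are invented, and the IRR is found here by simple bisection on the NPV function (a sketch, not a production-grade solver; it assumes exactly one sign change of NPV between the bracketing rates):

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the (negative) initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Discount rate at which NPV reaches zero, found by bisection.

    Assumes NPV is positive at `lo` and negative at `hi`.
    """
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # rate too low: NPV still positive
        else:
            hi = mid  # rate too high: NPV negative
        if hi - lo < tol:
            break
    return (lo + hi) / 2


flows = [-10000, 4000, 4000, 4000, 4000]  # hypothetical project
print(round(npv(0.10, flows), 2))         # NPV at a 10% discount rate
print(round(irr(flows), 4))               # the break-even discount rate
```

A positive NPV at the chosen discount rate means the project is worth taking; between two projects, the one with the higher NPV (or higher IRR) would be preferred under these methods.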
Active and Passive PoE can be a great addition to your security system. But what is the difference between them?

What is PoE?

PoE means Power over Ethernet. This function transfers power and data through the same Ethernet cable, so the reasoning behind using it is that there is no need for two separate cables. Usually, one cable is needed for the power supply and another for the data, but Active and Passive PoE technology removes the need for more cables. Furthermore, cables can be bulky and stick out, but a PoE run can hide in the ceiling, keeping it out of sight and out of mind. Lastly, this reduces the cost of maintaining or replacing the cable and increases reliability. Using PoE allows users to run cables and power cameras in locations they otherwise couldn't reach due to the lack of a power supply.

Difference Between Active and Passive PoE

Let's go over the main differences between Active and Passive PoE when using them in your security system. A significant aspect of a PoE cable's function is the delivered power.

Active PoE

This is a safe way to transfer data and power to the camera simultaneously. Not only will the voltage provided stay within a safe range, it also opens the door for communication between your cameras and recorders. The supply of power through the PoE cables will test the connection using the 802.3at procedure. If everything is set up properly and runs smoothly, the devices will negotiate a power supply between the sending and receiving ends of the PoE; to establish this, the devices perform a "handshake." However, if there is a compatibility issue, power will not be supplied.

Passive PoE

This describes a power supply that takes raw power and sends it through the Ethernet cable to the connected device. The power sent through Passive PoE is unnegotiated; the unit it sends power to will receive it without any compatibility verification. Without a verifying procedure, the receiving unit will simply keep being fed power.
Passive PoE is known for always staying on, regardless of compatibility between the two devices. On the other end of the spectrum, Active PoE sends power only to devices that comply with 802.3af or 802.3at, because, unlike Passive PoE, it needs some sort of verification that the two connected devices are compatible.

As mentioned before, Passive PoE does not verify that the device is compatible. Thus, you need to be extra careful with your device's voltage so as not to fry it. Too much voltage can hurt a device such as a camera, and too little can keep it from working correctly. You should check this prior to connecting and powering your device through the PoE cables.

There are three common IEEE 802.3 standards used for security cameras, including PTZ. Let's go over the three:

- 802.3af: 15.4 W of power
- 802.3at: 30 W of power
- 802.3bt: between 60 and 90 W of power

Reduction of risk

You probably don't want to fry your device's motherboard and keep it from working, so to safely install Passive PoE you should keep a few risk reductions in mind. First, make sure that the power supply gives your camera the correct voltage; this information can usually be found in the camera specs. Once you find the needed voltage for your device, choose a cable that provides that amount of power, not more, not less. For instance, most standard IP cameras can handle 15 W; thus, you would pair them with an 802.3af PoE supply, which provides 15.4 W of power or less. This keeps your camera up and running while keeping the chance of damage relatively low.

Most IP cameras only require around 9 W of power, up to 15 W. Therefore, pumping the camera with 90 W through Passive PoE cables is sure to damage it. So before plugging everything in, check all the cameras' specs and their required power levels; this only applies when using Passive and not Active PoE.
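The matching of a camera's power draw to the lowest sufficient 802.3 standard, as laid out in the list above, can be sketched in code. The camera wattages used here are hypothetical examples, not figures from any specific product:

```python
# PoE standards and the maximum power they deliver, per the list above.
# For 802.3bt we use the upper 90 W figure of its 60-90 W range.
POE_STANDARDS = [
    ("802.3af", 15.4),
    ("802.3at", 30.0),
    ("802.3bt", 90.0),
]


def pick_standard(camera_watts):
    """Return the lowest PoE standard whose budget covers the camera's draw."""
    for name, max_watts in POE_STANDARDS:
        if camera_watts <= max_watts:
            return name
    return None  # the draw exceeds every listed PoE standard


print(pick_standard(9))   # a typical IP camera fits 802.3af
print(pick_standard(25))  # a heavier draw needs 802.3at
```

Picking the *lowest* sufficient standard follows the article's "not more, not less" advice for Passive PoE, where no handshake protects the device from excess power.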
When you opt for Active PoE, the units, both receiver and sender, automatically check their parameters to establish a connection, or "handshake," before releasing any actual electricity. By doing this, both units continue to work properly, and the risk that the cable will damage either device is significantly reduced. Furthermore, it is important to choose the right switch. If you want to find out how to choose the best PoE switch for IP cameras, check out the article!

Active and Passive PoE are great alternatives to the traditional cables found in CCTV systems. Your security system's reliability increases while your maintenance and installation costs decrease, and PoE cables offer an alternative way to install cameras in various hard-to-reach locations.

It is recommended to opt for Active PoE over Passive. Thanks to the "handshake," the automatic checking process that ensures the voltage is compatible, Active PoE does the work for you, so you don't have to worry about anything getting damaged.

Furthermore, you should take the cumulative power consumption into account. If a PoE switch outputs, let's say, 50 W of power, then the combined power needs of the cameras must not exceed 50 W. Otherwise, the devices would need to draw more power than is available, and they will not work correctly or turn on.

Active and Passive PoE are excellent choices for combining data and power into one cable and installing your camera in the place of your choice. Though Passive PoE works fine, there will be more work on your side. On the other hand, Active PoE does the checking process for you, making things a little more straightforward, as it needs to check, or "handshake," before enabling power between the devices.
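The cumulative-consumption rule above, that the combined camera draw must not exceed the switch's PoE budget, amounts to a quick check like the following. The 50 W switch budget and per-camera wattages are invented for illustration:

```python
def within_budget(switch_watts, camera_watts):
    """True if the combined camera draw fits inside the switch's PoE budget."""
    return sum(camera_watts) <= switch_watts


cameras = [9, 12, 15, 9]           # hypothetical per-camera draw in watts (45 W total)

print(within_budget(50, cameras))  # 45 W fits a 50 W switch
print(within_budget(40, cameras))  # 45 W exceeds a 40 W switch
```

Running this kind of check before plugging everything in avoids the situation the article warns about, where cameras silently fail to power up because the switch's budget is oversubscribed.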
There are a number of points one must keep in mind while trying to master data science, especially for those who surround themselves with numbers and mountains of information. What's more, the top organizations around the world are constantly in need of data science experts and specialists, so it helps to continuously refresh your knowledge of the basics of data science. And if you're just a beginner with a dream of developing your potential in order to offer your knowledge to the world of science, you will require the following skills:

- Accuracy in labor;
- Fast calculation of data;
- Understanding of all mathematical principles;
- Mathematical creativity (the capability of noticing consistent patterns).

Of course, there are many other skills that are necessary for being a data scientist. If you want to gain more insight, you can find them at this scientific portal. A good data scientist should always bring their skills to perfection, and the constant acquisition of new knowledge is part of growing. Reading is one of the easiest and most efficient means of gaining new abilities, and the Internet is the place where you can find plenty of decent books on math. Continue reading the article to get the list of the best free must-read books, which are of high value for data science, as they build strong fundamental knowledge in the field and answer the questions a person might have.

Top 8 Best Books on Statistics and Mathematics

1. Pattern Classification

This is a real classic among mathematics study books. Although it was printed in 1973 and updated only in 2000, all the material introduced in this manual is still actively used in data science. The best thing about this book is the presentation of the text in the shape of patterns, which definitely makes recognizing and memorizing the algorithms faster and more productive.

2. Practical Statistics for Data Scientists: 50 Essential Concepts

This book introduces the methods of statistics through the prism of data science exclusively. Such an approach makes it extremely valuable and important for all experts in the field. It will teach readers about the importance of data analysis and how to apply its methods to various situations. You will also be introduced to the basic techniques used for data prediction and the programming of machines. Additionally, you can find more similar books on programming with practical statistics online.

3. Naked Statistics: Stripping the Dread from the Data

This is truly one of the best books on statistics, as it presents the information in a rather simple shape. Most people have come to think that working with numbers is incredibly difficult, but this piece of writing is devoted to breaking that stereotype, as it shows the science of statistics in a totally different color. The author describes effective tools for organizing the process and explains which of them is more appropriate for a particular situation. After reading this book, written in such a friendly manner, you will get rid of all your fears about statistics.

4. R for Data Science: Import, Tidy, Transform, Visualize, and Model Data

This one is about the R programming language and its capability to transform a pointless mass of figures into a source of knowledge and inspiration. The authors describe the fundamental R packages that are efficient for data organization and make the scientific process run faster and smoother. Learn to design data in the fastest way with the R language.

5. Introduction to Linear Algebra

This is a highly appreciated book that presents linear algebra in the easiest shape. It is written in a manner that slowly leads the reader to progress and helps them understand the topic better.
With this book, a data scientist will either acquire or improve knowledge of vectors, eigenvalues, equations and other elements of linear algebra that are richly used in machine learning.

6. Introduction to the Math of Neural Networks

Neural networks are the future of data science, and this book will awaken or warm up your interest in the subject. The writer has a wonderful capability to simplify a topic that seems hard to comprehend, and introduces a vast spectrum of means applied in the development of neural networks of all types.

7. Advanced Engineering Mathematics

This is a renowned book in the field of machine learning. It would be equally effective both for pros and for college students who face some difficulties with the subject. It is dedicated to developing new math skills and an understanding of all the basic branches of mathematical science.

8. Elements of Statistical Learning

This one is devoted to people experienced in machine learning who wish to get some extra knowledge in the field. The book offers algorithms of a higher class and difficulty, including neural networks and kernel methods, and is filled with an abundance of examples for better understanding.

Hopefully, you will find this information helpful for lean manufacturing, income calculation, the prediction of various events, and other fields where data science participates.